Kerckhoffs's principle
Also known as: Kerckhoffs's law · Shannon's maxim
Kerckhoffs's principle is one of the conceptual pillars of modern cryptography. Formulated in 1883 by the Dutch linguist and cryptographer Auguste Kerckhoffs in his article La cryptographie militaire (Journal des sciences militaires), it states that the security of a cryptographic system must rest solely on the secrecy of the key, never on the secrecy of the procedure. Everything else (algorithm, machine, specification, source code) can be public without affecting security.
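A minimal sketch of the principle in TypeScript, using Node's built-in crypto module (the message and variable names are purely illustrative): the algorithm, AES-256-GCM, is named in plain sight in the source, and the only secret in the whole exchange is the 32-byte key.

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // the ONLY secret in the whole system
const iv = randomBytes(12);  // public, as long as it is never reused with the same key

// Encrypt: the algorithm name sits in plain sight in the code.
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("attack at dawn", "utf8"), cipher.final()]);
const tag = cipher.getAuthTag();

// An adversary may know the algorithm, the iv, the tag, even this source file.
// Decryption still requires the key.
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(plaintext.toString("utf8")); // "attack at dawn"
```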
Six principles, not just one
Kerckhoffs's article actually listed six principles; posterity remembers mainly the second. Here they are, lightly modernized from the original:
- The system must be practically, if not mathematically, indecipherable.
- It must not require secrecy, and it must be possible for it to fall into enemy hands without inconvenience.
- The key must be communicable and memorable without written notes, and changeable at will by the correspondents.
- It must be applicable to telegraph correspondence.
- It must be portable, and its operation must not require the involvement of several people.
- Finally, given the circumstances of its application, the system must be easy to use, requiring neither mental strain nor knowledge of a long set of rules.
Principle #2 is the one we now call simply "Kerckhoffs's principle". The others remain widely relevant, particularly #3 (key rotation) and #6 (usability); neglecting them has cost many poorly run cryptographic operations dearly, wars included.
Shannon’s maxim
Claude Shannon restated the idea in 1949, in his foundational paper Communication Theory of Secrecy Systems: "The enemy knows the system." This became known as Shannon's maxim, and it is the formulation people typically quote in practice. It says the same thing as Kerckhoffs in sharper terms: don't speculate about the adversary's ignorance; assume they know everything except the key.
Why the principle matters
- Secret algorithms always end up leaking: defectors, reverse engineering, intrusion, dumpster diving, a stolen laptop, a disgruntled ex-employee. A system that collapses at the first leak is fragile by construction. History is full of ciphers thought secret that leaked anyway: Enigma was reverse-engineered by Polish cryptologists before WWII; Purple was broken by Friedman's team.
- A public algorithm can be studied by the entire academic community, so weaknesses are found and patched before a motivated attacker can exploit them. AES was chosen in 2001 after a five-year public competition in which fifteen candidates were cryptanalyzed by researchers around the world. The survivors win, and that is what builds confidence.
- A key can be changed far more easily than an algorithm. If a key leaks, you issue a new one. If the algorithm leaks, you have to redeploy everything: rewrite the code, redistribute the hardware, retrain the operators. The cost is astronomical. (A minimal key-rotation sketch follows this list.)
- Security through obscurity is a poor signal of quality. A vendor that refuses to publish its algorithm is usually hiding something else: either it cannot design a sound one, or it knows the design would not survive scrutiny.
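A hedged sketch of the key-rotation point above (the helper names and key IDs are illustrative, not taken from any particular library): each ciphertext carries the ID of the key that produced it, so retiring a leaked key is a metadata change, not an algorithm change.

```ts
import { randomBytes } from "node:crypto";

type KeyId = string;

// keyId -> 32-byte secret key; only the key material itself is secret.
const keyring = new Map<KeyId, Buffer>([["v1", randomBytes(32)]]);
let activeKeyId: KeyId = "v1";

interface Envelope {
  keyId: KeyId;        // public metadata: which key version decrypts this record
  iv: Buffer;          // public
  ciphertext: Buffer;  // produced by a public, standardized cipher (e.g. AES-256-GCM)
}

function rotate(newId: KeyId): void {
  keyring.set(newId, randomBytes(32)); // issuing a new key is one line
  activeKeyId = newId;                 // new messages use it immediately;
  // old keys stay in the ring only until existing data has been re-encrypted
}
```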
Concrete consequences
- All modern standards (AES, RSA, ChaCha20, SHA-256, Argon2) are public, standardized (NIST, IETF), and their reference code is open. They have been studied for decades by thousands of researchers, and published test vectors let anyone check an implementation against the specification (see the sketch after this list).
- Be wary of any product that boasts "proprietary" or "secret" cryptography. It's the classic red flag of snake oil in security. Bruce Schneier's long-running "Doghouse" column in his Crypto-Gram newsletter was dedicated to dismantling such products.
- Open audit: any serious crypto library (OpenSSL, libsodium, BouncyCastle) is open-source and subject to public audit.
- Open source doesn't mean bug-free: Heartbleed (2014) lived in OpenSSL, whose code has been public since 1998. But the bug was found because the code was public: roughly two years after its introduction in 2012, granted, but found.
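Because these standards are public, verification needs no trust in the vendor: anyone can check an implementation against the officially published test vectors. A small sketch using the well-known FIPS 180-4 reference value for SHA-256("abc"):

```ts
import { createHash } from "node:crypto";
import { strict as assert } from "node:assert";

// FIPS 180-4 publishes SHA-256("abc") as a reference value; any conforming
// implementation, open or not, must reproduce it bit for bit.
const digest = createHash("sha256").update("abc").digest("hex");
assert.equal(
  digest,
  "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
);
```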
A counter-example: the GSM A5/1 saga
The mobile telephony standard GSM (1987) shipped with a stream cipher called A5/1, whose design was kept secret for over a decade. In 1999 the algorithm was reverse-engineered and published, and within months academic cryptanalysts (Biryukov, Shamir, Wagner) had broken it, recovering call keys in seconds on a desktop computer. By 2009, end-to-end attack tools were public, and entire networks of GSM phone calls could be passively decrypted. Had A5/1 been published openly in 1987, the weaknesses would have been found before deployment, and a stronger design would have shipped from day one. Instead, billions of phone calls over a quarter century were vulnerable. Textbook violation of Kerckhoffs, textbook consequences.
CipherChronicle and Kerckhoffs
On CipherChronicle, every cipher on display has its algorithm in lib/ciphers/registry.ts, open on GitHub. What protects each puzzle is its key, not the procedure. Puzzles rely on the cryptanalytic difficulty of a cipher whose algorithm is known. The platform's pedagogy itself rests on Kerckhoffs: we don't try to hide "how Caesar works"; we demonstrate that knowing the procedure doesn't help you crack a specific instance without the key.
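For illustration only (this is not the actual content of lib/ciphers/registry.ts, whose shape may differ), a registry-style Caesar entry might look like the sketch below: the procedure is entirely public, and the only thing withheld from a solver is the key.

```ts
interface Cipher {
  name: string;
  encrypt(plaintext: string, key: number): string;
}

const caesar: Cipher = {
  name: "Caesar",
  encrypt(plaintext, key) {
    // Shift each letter by `key` positions, preserving case and non-letters.
    return plaintext.replace(/[a-z]/gi, (c) => {
      const base = c === c.toLowerCase() ? 97 : 65; // 'a' or 'A'
      const shifted = (((c.charCodeAt(0) - base + key) % 26) + 26) % 26;
      return String.fromCharCode(base + shifted);
    });
  },
};

// The procedure is published; a puzzle only withholds the key.
console.log(caesar.encrypt("Attack at dawn", 3)); // "Dwwdfn dw gdzq"
```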
Key takeaways:
- “The enemy knows the system” (Shannon, 1949). Assume everything but the key is public.
- Kerckhoffs listed six principles; the second is the best remembered, but the others (key rotation, usability) remain essential.
- Consequence: serious algorithms are public and standardized. Beware of “proprietary crypto”.
- Open source doesn't guarantee bug-free code, but public scrutiny remains the most reliable way of finding the bugs that are there.