
The timeline below provides the regulatory context for the rest of this article. It shows how the discussion around post-quantum migration evolved from general guidance to concrete transition pressure, which is especially relevant for teams that need to prioritize implementation work.

Today’s digital communication relies on strong cryptographic protections: signatures prove the authenticity of data such as files and software updates, while key-exchange algorithms allow two parties to establish a symmetric secret over an untrusted channel using asymmetric cryptography.
These encryption algorithms are based on mathematical problems for which no efficient solution is known on classical computers. Two prominent examples are the factorization problem (RSA) and the discrete logarithm problem (e.g., Diffie–Hellman, DSA, and elliptic-curve cryptography).
While these algorithms are currently considered secure against attacks using classical computers, the rapid progress in quantum technology raises an important question: Will my data remain confidential for a sufficiently long period of time?
Depending on the threat model, an adversary may already be intercepting and storing encrypted traffic today with the intention of decrypting it later, once access to a sufficiently powerful quantum computer becomes available.
In this blog post, we address these concerns with both theory and practical migration guidance.
This article is written as two connected reading paths. If your main goal is to understand what to do in practice, you can start with BSI TR-02102, Version 2026-01: What’s Changed?, continue with Implementing PQC in Java (24+), and then move to the deployment-oriented sections on PQC for OpenSSH, PQC for your VPN, and PQC for Kubernetes. If, instead, you want to understand why widely deployed public-key algorithms are affected by quantum computers, begin with The Factorization Problem and The Discrete Logarithm Problem, continue with Shor’s Algorithm and What about symmetric encryption?, and then use What is different about ML-KEM and ML-DSA? as the bridge into the migration-focused part of the article.
Basically, the factorization problem comes down to a single sentence: no efficient method is known for decomposing an arbitrary integer into its prime factors.
Every integer can be decomposed into its prime factors. As an example:
$$60 = 2^2 \times 3 \times 5$$with 2, 3, and 5 being the prime factors of 60.
Various algorithms are known for decomposing numbers into their prime factors. However, none of them are efficient, especially for very large numbers. Below, some prominent algorithms are listed with a rough comparison of their running times:
| Algorithm | Rough runtime | Holds under / depends on |
|---|---|---|
| Pollard’s rho | Expected $O(\sqrt p)$ modular multiplications; hence $O(n^{1/4})$ if $p\approx \sqrt n$ | $p$ = smallest prime factor; heuristic randomness assumptions (cs.mcgill.ca) |
| Pollard’s $(p-1)$ | $O\!\left(\frac{B\ln n}{\ln B}\right)$ modular multiplications | Works well if some prime factor $p$ of $n$ has a $B$-smooth $p-1$ (cs.mcgill.ca) |
| Fermat | $\approx \frac{c+d}{2}-\sqrt N = \frac{(\sqrt d-\sqrt c)^2}{2}$ steps for $N = cd$; worst-case $O(N)$ | Depends strongly on factor closeness; prime case is $O(N)$ (Wikipedia) |
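As a toy illustration of one of these methods, the following Java sketch implements the classic Pollard’s rho iteration with $f(x) = x^2 + 1$ — a textbook configuration chosen for this example, not a production factoring tool:

```java
import java.math.BigInteger;

public class PollardRho {

    // The pseudo-random iteration f(x) = x^2 + c mod n used by Pollard's rho.
    static BigInteger f(BigInteger x, BigInteger c, BigInteger n) {
        return x.multiply(x).add(c).mod(n);
    }

    // Returns a nontrivial factor of the composite n (may return n itself
    // on the rare cycle failure, in which case one retries with another c).
    static BigInteger rho(BigInteger n) {
        BigInteger x = BigInteger.TWO, y = BigInteger.TWO, c = BigInteger.ONE;
        BigInteger d = BigInteger.ONE;
        while (d.equals(BigInteger.ONE)) {
            x = f(x, c, n);             // tortoise: one step
            y = f(f(y, c, n), c, n);    // hare: two steps
            d = x.subtract(y).abs().gcd(n);
        }
        return d;
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(8051); // textbook example: 8051 = 83 * 97
        System.out.println(rho(n));              // prints 97
    }
}
```

On the textbook example $8051 = 83 \times 97$, this configuration finds the factor 97 after a handful of iterations; for numbers of cryptographic size (e.g., 2048-bit RSA moduli), such methods are hopeless — which is exactly the point.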
The RSA algorithm makes use of this unsolved issue by using the product of two large prime numbers, $p$ and $q$, as the modulus for operations.
The discrete logarithm problem is usually defined in the multiplicative group $\mathbb{Z}_p^*$ for a prime $p$. If $g$ is a primitive root modulo $p$, then every element $a \in \mathbb{Z}_p^*$ can be written as
$$a \equiv g^x \pmod p$$for some integer $x$. In this setting, the discrete logarithm of $a$ to the base $g$ modulo $p$ is the exponent $x$ for which this relation holds.
As an example,
$$3^7 = 2187 \equiv 11 \pmod{17}$$so the discrete logarithm of $11$ to the base $3$ modulo $17$ is $7$.
Like factoring, this problem is easy to state but hard to solve efficiently for large parameters. This is exactly why discrete-logarithm-based systems such as Diffie–Hellman and elliptic-curve cryptography have been practical for classical computers.
Below are some well-known approaches for solving discrete logarithms on classical computers:
| Algorithm | Rough runtime | Typical setting |
|---|---|---|
| Baby-step giant-step | $O(\sqrt p)$ group operations and memory | Generic groups |
| Pollard’s rho for DLP | Expected $O(\sqrt p)$ group operations with low memory | Generic groups |
| Index calculus | Sub-exponential for suitable finite fields | Large finite-field groups |
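The baby-step giant-step idea from the table can be sketched in a few lines of Java. This toy version works only for small prime moduli and reuses the example from above, $3^7 \equiv 11 \pmod{17}$:

```java
import java.util.HashMap;
import java.util.Map;

public class BabyStepGiantStep {

    // Solve g^x ≡ a (mod p) for prime p, or return -1 if no solution exists.
    static long dlog(long g, long a, long p) {
        long m = (long) Math.ceil(Math.sqrt(p));
        Map<Long, Long> baby = new HashMap<>();
        long cur = 1;
        for (long j = 0; j < m; j++) {      // baby steps: store g^j -> j
            baby.putIfAbsent(cur, j);
            cur = (cur * g) % p;
        }
        // factor = g^{-m} mod p via Fermat's little theorem: g^{p-1-m} mod p
        long factor = modPow(g, (p - 1 - (m % (p - 1))) % (p - 1), p);
        long gamma = a % p;
        for (long i = 0; i < m; i++) {      // giant steps: look up a * g^{-im}
            Long j = baby.get(gamma);
            if (j != null) return i * m + j;
            gamma = (gamma * factor) % p;
        }
        return -1;
    }

    static long modPow(long b, long e, long p) {
        long r = 1; b %= p;
        for (; e > 0; e >>= 1) { if ((e & 1) == 1) r = (r * b) % p; b = (b * b) % p; }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(dlog(3, 11, 17)); // prints 7
    }
}
```

The table entries become visible here: the algorithm trades $O(\sqrt p)$ memory (the baby-step map) for $O(\sqrt p)$ time, which is why it is infeasible for the group sizes used in practice.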
Shor’s algorithm was developed by Peter Shor in 1994. In simple terms, it enables the efficient solution of the integer factorization and discrete logarithm problems on a quantum computer. On a sufficiently large, fault-tolerant quantum computer, an attacker could factor large integers and compute discrete logarithms in polynomial time, e.g. roughly:
$$\mathcal{O}((\log N)^3)$$You can read more about Shor’s algorithm here.
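Shor’s quantum speedup lies entirely in finding the multiplicative order $r$ of a random base $a$ modulo $N$; the rest is classical post-processing. The sketch below fakes the quantum step with brute-force order finding to show how a known order yields the factors, using the classic $N = 15$, $a = 7$ example:

```java
import java.math.BigInteger;

public class ShorPostProcess {

    // Classically find the multiplicative order of a mod n by brute force.
    // This is exactly the step that the quantum part of Shor's algorithm
    // performs efficiently; everything after it is classical.
    static int order(int a, int n) {
        long x = 1;
        for (int r = 1; r <= n; r++) {
            x = (x * a) % n;
            if (x == 1) return r;
        }
        return -1;
    }

    public static void main(String[] args) {
        int n = 15, a = 7;                 // textbook example: factor 15 with base 7
        int r = order(a, n);               // r = 4 (even, as required)
        // Shor's classical post-processing: gcd(a^{r/2} ± 1, n) yields the factors.
        BigInteger half = BigInteger.valueOf(a).pow(r / 2);  // 7^2 = 49
        BigInteger f1 = half.subtract(BigInteger.ONE).gcd(BigInteger.valueOf(n));
        BigInteger f2 = half.add(BigInteger.ONE).gcd(BigInteger.valueOf(n));
        System.out.println("order r = " + r + ", factors: " + f1 + " x " + f2);
    }
}
```

The brute-force `order` loop is exponential in the bit length of $N$; replacing it with quantum period finding is what makes the whole procedure polynomial-time.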
You might have noticed that the mathematical problems that can be solved efficiently using Shor’s algorithm form the foundation of many asymmetric cryptographic algorithms. Why is that?
The simple answer is that symmetric encryption, such as AES-256, does not rely on factoring or discrete logarithms. Instead, the same secret key is used for encryption and decryption, enabling robust and efficient protection.
Importantly, AES does not rely on a single “hard mathematical problem” in the way RSA relies on factorization. Its security is based on a different principle: resistance against all currently known forms of cryptanalysis. Concretely, AES is constructed as a substitution–permutation network that combines:

- a nonlinear byte substitution (SubBytes),
- diffusion layers that mix the rows and columns of the state (ShiftRows and MixColumns), and
- the addition of round keys derived from the key schedule (AddRoundKey).
The security assumption behind AES is that, without knowledge of the secret key, it behaves like a pseudo-random permutation. In other words, the best known attack against full AES is essentially exhaustive key search (brute force).
For quantum adversaries, the most relevant attack is Grover’s algorithm. Unlike Shor’s algorithm, Grover does not provide an exponential speedup. It only gives a quadratic speedup for brute-force search:
$$2^n \rightarrow 2^{n/2}$$In other words, Grover’s algorithm effectively halves the security level of a symmetric key, measured in bits. Since modern security recommendations target at least 120–128 bits of security, it is advisable to migrate from AES-128 to AES-256 in order to maintain an adequate security margin in a post-quantum setting.
You can read more about Grover’s algorithm here.
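In Java, moving to a 256-bit symmetric baseline is straightforward with the standard JCA APIs. The sketch below is a minimal AES-256/GCM round trip (key and nonce handling simplified for the demo; in production, keys come from a KEM or key-management system and nonces must never repeat per key):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class Aes256Gcm {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);                       // 256-bit key: ~128-bit margin vs. Grover
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];           // 96-bit GCM nonce, unique per encryption
        new SecureRandom().nextBytes(iv);

        byte[] msg = "hello pqc".getBytes(StandardCharsets.UTF_8);

        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(msg);         // ciphertext + 128-bit authentication tag

        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] pt = c.doFinal(ct);          // throws if the tag does not verify

        System.out.println("Round trip ok: " + Arrays.equals(pt, msg));
    }
}
```

Note that the symmetric layer needs no algorithm swap for the post-quantum transition — only a key-length check.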
ML-KEM (standardized in FIPS 203) and ML-DSA (standardized in FIPS 204) are both module-lattice-based post-quantum primitives, but they solve different tasks. ML-KEM is a key-encapsulation mechanism for establishing a shared symmetric secret over a public channel, while ML-DSA is a digital signature algorithm for authenticity and integrity. In the NIST standards, ML-KEM security is tied to the hardness of Module-LWE, and ML-DSA security is tied to Module-SIS/SelfTargetMSIS and Module-LWE assumptions. Because these schemes are not based on factoring or discrete logarithms, Shor’s algorithm does not directly break them.
Learning with Errors (LWE) is the hardness assumption behind many lattice schemes. Informally, you get many noisy modular equations
$$b = A \cdot s + e \pmod q \in \mathbb{Z}_q^m$$and must recover the secret
$$s \in_R \mathbb{Z}_q^n$$or distinguish these samples from random values. Here,
$$A \in_R \mathbb{Z}_q^{m \times n}$$is public and
$$e \in_R [-B, B]^m$$is a small random error vector. Without the error term, the system is easy linear algebra; the noise is what makes the problem hard. In ML-KEM, this is used in its structured module form (Module-LWE), which keeps the same security idea while enabling practical key sizes and performance.
Noise size $B$ is the bound on the entries of the error vector $e$. It controls how “noisy” the LWE samples are: more noise generally increases hardness, but too much noise can break correctness (decryption/rounding errors), so $B$ must be chosen to keep the total noise well below $q/2$.
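The following toy Java sketch generates LWE samples with illustrative (cryptographically meaningless) parameters and checks that the centered residual $b - A\cdot s \bmod q$ recovers the small error term, matching the correctness condition above:

```java
import java.security.SecureRandom;
import java.util.Arrays;

public class LweToy {
    // Toy parameters for illustration only; real Module-LWE parameters are far larger.
    static final int N = 4, M = 6, Q = 97, B = 2;

    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        int[][] a = new int[M][N];
        int[] s = new int[N], e = new int[M], b = new int[M];
        for (int j = 0; j < N; j++) s[j] = rnd.nextInt(Q);            // secret s in Z_q^n
        for (int i = 0; i < M; i++) {
            e[i] = rnd.nextInt(2 * B + 1) - B;                        // small error in [-B, B]
            long dot = 0;
            for (int j = 0; j < N; j++) {
                a[i][j] = rnd.nextInt(Q);                             // public A in Z_q^{m x n}
                dot += (long) a[i][j] * s[j];
            }
            b[i] = Math.floorMod((int) (dot % Q) + e[i], Q);          // noisy inner product
        }
        System.out.println("b = " + Arrays.toString(b));

        // Sanity check: centering b - A*s (mod q) into [-B, B] recovers the error e.
        boolean small = true;
        for (int i = 0; i < M; i++) {
            long dot = 0;
            for (int j = 0; j < N; j++) dot += (long) a[i][j] * s[j];
            int r = Math.floorMod(b[i] - (int) (dot % Q) + B, Q) - B;
            small &= (r >= -B && r <= B);
        }
        System.out.println("Residuals within [-B, B]: " + small);
    }
}
```

Without the error term `e`, the samples would form an exactly solvable linear system over $\mathbb{Z}_q$; the check above shows the noise is small enough for the legitimate party (who knows $s$) while an attacker sees only the noisy $b$.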
The (homogeneous) Short Integer Solution (SIS) problem asks:
Given a uniformly random matrix
$$A \in_R \mathbb{Z}_q^{n \times m}$$,
find a nonzero integer vector
$$z \in \mathbb{Z}^m$$such that
$$A z \equiv 0 \pmod q,$$and $z$ is short, meaning each coordinate is bounded:
$$z \in [-B, B]^m,$$with typical parameter regime $B \ll q/2$.
If $n \ge m$, a random $A$ is expected to have only the trivial solution $z=0$ to $Az\equiv 0 \pmod q$; so we typically assume $n < m$.
A standard existence guarantee uses a counting (pigeonhole) argument: if the number of “candidate short vectors” exceeds the number of possible outputs in $\mathbb{Z}_q^n$, then two candidates collide and their difference is a nonzero short solution. The following condition ensures that an SIS solution exists:
$$(B+1)^m > q^n$$Here, the candidates are the vectors in $\{0,\dots,B\}^m$: if there are more of them than there are elements of $\mathbb{Z}_q^n$, two distinct candidates must map to the same value under $A$, and their difference is a nonzero solution with entries in $[-B, B]$.
Let’s look at an example with $n=3$, $m=5$, $q=13$, and $B=3$: construct an $A$, solve $Az\equiv 0 \pmod{13}$ via elimination, and then enumerate which solutions lie within $[-3,3]^5$. This shows the workflow: linear algebra over $\mathbb{Z}_q$ finds the full solution space, and the SIS constraint then filters for small-norm solutions.
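A brute-force version of this workflow — skipping the elimination step and simply enumerating all of $[-3,3]^5$ — can be sketched in Java. The matrix $A$ below is an arbitrary example constructed for this illustration:

```java
import java.util.Arrays;

public class SisToy {
    static final int N = 3, M = 5, Q = 13, B = 3;

    // Check whether A*z ≡ 0 (mod Q) holds for all N rows.
    static boolean solves(int[][] a, int[] z) {
        for (int i = 0; i < N; i++) {
            int acc = 0;
            for (int j = 0; j < M; j++) acc += a[i][j] * z[j];
            if (Math.floorMod(acc, Q) != 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // A fixed example matrix over Z_13 (chosen for this illustration).
        int[][] a = { {1, 0, 0, 2, 7}, {0, 1, 0, 3, 4}, {0, 0, 1, 1, 9} };
        int side = 2 * B + 1;                       // entries range over [-B, B]
        long total = (long) Math.pow(side, M);      // 7^5 = 16807 candidates
        for (long idx = 0; idx < total; idx++) {
            long t = idx;
            int[] z = new int[M];
            boolean nonzero = false;
            for (int j = 0; j < M; j++) {           // decode idx into a vector in [-B, B]^M
                z[j] = (int) (t % side) - B;
                t /= side;
                nonzero |= z[j] != 0;
            }
            if (nonzero && solves(a, z)) {
                System.out.println("Short solution z = " + Arrays.toString(z));
                System.out.println("A*z = 0 (mod 13): " + solves(a, z));
                return;
            }
        }
        System.out.println("No short solution found");
    }
}
```

For instance, $z = (-2, -3, -1, 1, 0)$ satisfies $Az \equiv 0 \pmod{13}$ for this $A$, so the enumeration is guaranteed to find some short solution. At toy scale this search takes milliseconds; at real lattice dimensions, the search space is astronomically large.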
A key application is a collision-resistant hash function:
$$H_A(z) = A z \bmod q \quad \text{for } z \in \{0,1\}^m.$$
Why collision resistance reduces to SIS:
If an attacker finds $z_1\ne z_2$ with $H_A(z_1)=H_A(z_2)$, then $A(z_1-z_2)\equiv 0 \pmod q$.
And $z=z_1-z_2$ is nonzero and short (entries in $\{-1,0,1\}$), giving an SIS solution with $B=1$.
So: efficient collisions ⇒ efficient SIS solver, which is why SIS hardness implies collision resistance.
The source for this paragraph is Cryptography 101 with Alfred Menezes, which is a great resource on the topic of PQC.
With these building blocks in mind, we can now move from cryptographic intuition to migration guidance: first to the regulatory impact of the new BSI recommendations, and then to concrete implementation and deployment options in common developer and infrastructure stacks.
Executive summary. The BSI now gives organizations a much clearer transition signal: classical asymmetric mechanisms without post-quantum protection are no longer a long-term option, hybrid approaches have become strategically important, and teams should start migration planning now rather than waiting for broad product support to appear on its own.
The most notable change in the new version of the technical guideline is the deprecation of classical asymmetric encryption algorithms that do not provide protection against attacks by quantum computers. The BSI provides an explicit target date for the transition: by the end of 2031. In the case of high protection needs, the transition should happen by the end of 2030.
For many teams, the practical takeaway can be summarized as follows:

- Classical-only asymmetric mechanisms now have a concrete end-of-life: the end of 2031, or the end of 2030 for high protection needs.
- Hybrid combinations of classical and post-quantum algorithms are the recommended path through the transition.
- Migration planning should start now, beginning with a cryptographic inventory and a staged plan.
The transition table below complements the timeline at the beginning of this article and gives a concrete view of the algorithms that are approaching end-of-life in the BSI guidance.
Below is a screenshot of a table from the BSI document, giving an overview of the algorithms that are approaching end-of-life:

A compact decision view for common engineering questions looks like this:
| Area | What to do now | Strategic direction |
|---|---|---|
| Key exchange | Prefer hybrid key exchange where your products already support it | Move toward post-quantum or hybrid defaults |
| Signatures | Keep classical signatures only where required by current ecosystems | Evaluate ML-DSA, SLH-DSA, LMS/HSS, and XMSS migration paths |
| Symmetric encryption | Prefer AES-256 and modern hash functions | Maintain a strong symmetric baseline during the transition |
| Operations | Build a crypto inventory and a staged migration plan | Complete migration according to BSI timelines and protection needs |
The BSI has also updated its recommendations for post-quantum cryptographic algorithms. One of the most relevant changes is that, starting with Version 2026-01, the BSI explicitly recommends hybrid approaches in practice.
The following list summarizes the recommended algorithm families:

Quantum-safe key establishment:

- ML-KEM, FrodoKEM, and Classic McEliece

Quantum-safe signatures:

- ML-DSA, SLH-DSA, and the stateful hash-based schemes LMS/HSS and XMSS

Classical (not quantum-safe), to be used only in hybrid combinations during the transition:

- RSA- and discrete-logarithm-based mechanisms such as (EC)DH and ECDSA
Current status. Since Java 24, the JDK Enhancement Proposals (JEPs) 496 and 497 have been delivered, which means they are available in a stable state. JEP 497 enhances the cryptographic security of Java applications by providing the Module-Lattice-Based Digital Signature Algorithm (ML-DSA), while JEP 496 adds the quantum-resistant Module-Lattice-Based Key Encapsulation Mechanism (ML-KEM).
What you can do today. ML-KEM allows developers to establish shared symmetric keys over insecure communication channels using public-key cryptography, while ML-DSA provides a post-quantum signature algorithm for authenticity and integrity.
Example code for ML-KEM
```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Key;
import java.security.NoSuchAlgorithmException;
import java.security.PublicKey;
import java.security.PrivateKey;
import java.security.InvalidKeyException;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.KEM;
import javax.crypto.DecapsulateException;
import javax.crypto.SecretKey;

public class ML_KEM {

    private final KeyPairGenerator g;
    private final KeyPair kp;

    public ML_KEM() throws NoSuchAlgorithmException {
        this.g = KeyPairGenerator.getInstance("ML-KEM-1024");
        this.kp = g.generateKeyPair();
    }

    public PublicKey getPublicKey() {
        return this.kp.getPublic();
    }

    public PrivateKey getPrivateKey() {
        return this.kp.getPrivate();
    }

    private static void printKey(String label, Key key) {
        byte[] encoded = key.getEncoded();
        System.out.println("\n" + label + " (" + key.getAlgorithm() +
                ", " + key.getFormat() + ", " + encoded.length + " bytes):");
        System.out.println(Base64.getEncoder().encodeToString(encoded));
    }

    public static void main(String[] args)
            throws NoSuchAlgorithmException, InvalidKeyException, DecapsulateException {
        ML_KEM kp = new ML_KEM();

        // Encapsulate ML-KEM key (sender)
        KEM ks = KEM.getInstance("ML-KEM");
        KEM.Encapsulator enc = ks.newEncapsulator(kp.getPublicKey());
        KEM.Encapsulated encapsulated = enc.encapsulate();

        // Decapsulate ML-KEM key (receiver)
        KEM.Decapsulator dec = ks.newDecapsulator(kp.getPrivateKey());
        SecretKey decapsulatedKey = dec.decapsulate(encapsulated.encapsulation());

        System.out.println("\nEncapsulation bytes: " + encapsulated.encapsulation().length);
        System.out.println("Shared secret matches: "
                + MessageDigest.isEqual(encapsulated.key().getEncoded(), decapsulatedKey.getEncoded()));
    }
}
```
Output

```
Encapsulation bytes: 1568
Shared secret matches: true
```
An alternative to these JEPs is Bouncy Castle for Java. Their documentation can be found here.
Example code for ML-DSA
```java
import java.nio.charset.StandardCharsets;
import java.security.InvalidKeyException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.security.SignatureException;

public class ML_DSA {

    private final KeyPairGenerator g;
    private final KeyPair kp;

    public ML_DSA() throws NoSuchAlgorithmException {
        // Generate an ML-DSA-87 key pair
        this.g = KeyPairGenerator.getInstance("ML-DSA-87");
        this.kp = g.generateKeyPair();
    }

    public PublicKey getPublicKey() {
        return this.kp.getPublic();
    }

    public PrivateKey getPrivateKey() {
        return this.kp.getPrivate();
    }

    public static void main(String[] args)
            throws NoSuchAlgorithmException, InvalidKeyException, SignatureException {
        ML_DSA kp = new ML_DSA();

        // Small message to sign and verify.
        byte[] msg = "ML-DSA demo message".getBytes(StandardCharsets.UTF_8);

        // Create a signature over the message using the private key.
        Signature ss = Signature.getInstance("ML-DSA");
        ss.initSign(kp.getPrivateKey());
        ss.update(msg);
        byte[] sig = ss.sign();

        // Verify the signature with the corresponding public key.
        Signature sv = Signature.getInstance("ML-DSA");
        sv.initVerify(kp.getPublicKey());
        sv.update(msg);
        boolean verified = sv.verify(sig);

        // Print a compact summary of the demo result.
        System.out.println("\nMessage bytes: " + msg.length);
        System.out.println("Signature bytes: " + sig.length);
        System.out.println("Signature verified: " + verified);
    }
}
```
Output:

```
Message bytes: 19
Signature bytes: 4627
Signature verified: true
```
If you need post-quantum support on older Java runtimes or want broader provider flexibility, Bouncy Castle remains a relevant alternative.
The OpenSSH team released version 10.0 of the OpenSSH server in early April 2025. In this release, methods beginning with diffie-hellman-group* and diffie-hellman-group-exchange-* were removed from the default key exchange set. The release also makes mlkem768x25519-sha256 the default for key exchange. This hybrid algorithm combines ML-KEM with the classical elliptic-curve method X25519, providing post-quantum-resistant properties while maintaining compatibility and performance.
The server’s default key exchange set is now:

```
mlkem768x25519-sha256
sntrup761x25519-sha512
sntrup761x25519-sha512@openssh.com
curve25519-sha256
curve25519-sha256@libssh.org
ecdh-sha2-nistp256
ecdh-sha2-nistp384
ecdh-sha2-nistp521
```
What you can do today. Out of these options, the first three are hybrid post-quantum algorithms. If you want your server to exclusively prefer post-quantum-capable key exchange, you should restrict the configured KEX set to avoid silently falling back to non-PQ alternatives.
Important limitation. This hardens the SSH key exchange, but it does not by itself mean that every aspect of SSH authentication is already post-quantum. Host keys, user authentication, clients, and surrounding tooling may still rely on classical algorithms.
The following can be applied in your /etc/ssh/sshd_config to further strengthen the cryptographic security of your SSH connections:
```
# /etc/ssh/sshd_config
# Restrict key exchange (KEX) to OpenSSH post-quantum hybrid KEX only
KexAlgorithms mlkem768x25519-sha256,sntrup761x25519-sha512,sntrup761x25519-sha512@openssh.com
```
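After editing the file, you can validate the configuration and confirm the effective KEX list before restarting the service (standard OpenSSH flags; run with root privileges):

```shell
# Check the configuration file for syntax errors
sudo sshd -t

# Dump the effective server configuration and show the resulting KEX set
sudo sshd -T | grep -i kexalgorithms

# On the client side, list the KEX algorithms your ssh build supports
ssh -Q kex
```

If `ssh -Q kex` on a client does not list any `mlkem*` or `sntrup*` entries, that client cannot negotiate a post-quantum hybrid with this server configuration.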
WireGuard currently uses Curve25519 (X25519) for the handshake key exchange and does not yet provide a standardized post-quantum key exchange in upstream implementations.
If an additional layer of post-quantum resistance is required, WireGuard supports an optional pre-shared key (PSK) that is mixed into the handshake key derivation; if the PSK is established via a post-quantum mechanism, the resulting setup becomes a hybrid approach. One example is Rosenpass, an external post-quantum key-exchange protocol designed to be used with WireGuard: it performs a post-quantum key exchange and feeds the resulting key into WireGuard as the PSK.
Limitation: This is currently an add-on approach rather than a native upstream post-quantum handshake in WireGuard itself.
Here is an excerpt from the quick start guide for Rosenpass + WireGuard, as described in the Rosenpass documentation:
On the server:

```
rp genkey server.rosenpass-secret
rp pubkey server.rosenpass-secret server.rosenpass-public
```

On the client:

```
rp genkey client.rosenpass-secret
rp pubkey client.rosenpass-secret client.rosenpass-public
```

Then start the key exchange on the server:

```
sudo rp exchange server.rosenpass-secret \
    dev rosenpass0 \
    listen $SERVERIP:9999 \
    peer client.rosenpass-public \
    allowed-ips 192.168.21.0/24
```

and on the client:

```
sudo rp exchange client.rosenpass-secret \
    dev rosenpass0 \
    peer server.rosenpass-public \
    endpoint $SERVERIP:9999 \
    allowed-ips 192.168.21.0/24
```
As of today, there is no official out-of-the-box post-quantum OpenVPN deployment profile.
Since OpenVPN is based on OpenSSL, you can use a crypto provider that enables post-quantum key exchange algorithms. With OpenSSL version >= 3.5 and an additional crypto provider, you can use PQ KEX algorithms to harden your tunnel against future quantum attacks.
Limitation: This depends on provider integration and on the exact TLS stack that your OpenVPN deployment actually uses in production.
You may want to take a look at the oqs-provider by openquantumsafe .
Starting with strongSwan 6.0.0, released in December 2024, strongSwan supports ML_KEM_512, ML_KEM_768, and ML_KEM_1024 as key exchange algorithms via the AWS-LC crypto library. Since strongSwan 6.0.2, ML-KEM is also supported via OpenSSL 3.5+.
If your deployment stack already uses a compatible crypto backend, you can configure hybrid or post-quantum-capable IKE proposals directly in /etc/swanctl/swanctl.conf.
Limitation: As with all VPN deployments, interoperability depends on both peers supporting the same algorithms and crypto backend features.
It can be configured in the proposals part of your /etc/swanctl/swanctl.conf:
```
connections {
    pq-tunnel {
        version = 2
        proposals = aes256gcm16-prfsha384-x25519-ke1_mlkem768
        local_addrs = 10.0.0.1
        remote_addrs = 10.0.0.2
        local {
            auth = psk
            id = server
        }
        remote {
            auth = psk
            id = client
        }
        children {
            net {
                local_ts = 10.0.0.0/24
                remote_ts = 10.1.0.0/24
                esp_proposals = aes256gcm16
            }
        }
    }
}
```
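After changing the file, the configuration can be reloaded and inspected with the standard swanctl tooling (assuming the strongSwan daemon is running on the host):

```shell
# Reload all connections from /etc/swanctl/swanctl.conf
sudo swanctl --load-all

# Verify that the listed proposals include the ML-KEM additional key exchange
sudo swanctl --list-conns
```

The `ke1_mlkem768` token in the proposal above requests ML-KEM-768 as an additional key exchange (RFC 9370 style) alongside X25519, which is what makes the IKE handshake hybrid.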
Kubernetes components are written in Go, so post-quantum-related TLS behavior is influenced by the capabilities of the underlying Go toolchain and TLS libraries. Starting with Go 1.23, the hybrid KEX algorithm X25519Kyber768Draft00 became the default, and in Go 1.24 this was replaced by X25519MLKEM768.
Prefer kubectl, controllers, and control-plane components that are built with Go 1.24 or newer if you want the newer hybrid default in the underlying TLS stack. In managed distributions, verify the actual Go version and TLS behavior in the packaged binaries rather than assuming upstream defaults automatically apply.
Limitation: This should be understood as an underlying Go/TLS capability, not as a simple Kubernetes-wide “PQC enabled” switch. Real-world behavior still depends on the Kubernetes distribution, surrounding proxies, ingress components, and the concrete client/server path.
You can find out more about this topic on the Kubernetes blog.
Apache HTTP Server can be configured to use hybrid post-quantum key exchange algorithms with the OQS-OpenSSL provider.
If you operate your own OpenSSL stack and can validate client interoperability, you can experiment with hybrid KEX configuration in controlled environments and then roll it out gradually.
Limitation: Production readiness depends on client compatibility, TLS intermediaries, certificate handling, and the exact provider build you ship.
A full setup guide can be found here.
One practical consideration: ML-DSA-87 signatures are 4627 bytes, which can noticeably increase message size in signed protocols and artifacts.

The timeline for the post-quantum transition has been set by NIST and the BSI. It is now up to developers and organizations to follow the recommendations and apply the corresponding best practices. While official support for post-quantum key exchange and signature algorithms is still limited in some products, we can already observe that a growing number of protocols and platforms are starting to support PQ options natively, allowing a smoother and more production-oriented transition.
Maintaining an up-to-date inventory of cryptographic assets is an important step toward sustaining security and enabling effective crypto agility. For this reason, we will soon publish a blog article on how to keep track of your crypto inventory, so stay tuned!