Quantum computers capable of breaking RSA and elliptic-curve cryptography are estimated to arrive somewhere between 2028 and 2033. A peer-reviewed study published in Computers (MDPI, December 2025) found that small tech companies — under 500 employees — will likely need 5 to 7 years to complete a post-quantum cryptography (PQC) migration. Those two windows overlap.
Here is the thing that makes it urgent right now, not later: harvest-now-decrypt-later (HNDL) attacks are already happening. Adversaries are archiving encrypted traffic today with the intention of decrypting it once a cryptographically relevant quantum computer arrives. If your company holds data that will still be sensitive in the 2030s — customer PII, financial records, health data, IP — that data is already in the exposure window.
The most common reaction to all this is scepticism: "We're 80 people. Do we really need a formal migration programme?" Yes, because HNDL targets data value, not company size.
This article lays out a four-phase roadmap — Phase 0: Cryptographic Inventory → Phase 1: Hybrid TLS → Phase 2: Authentication Layer → Phase 3: Cryptographic Agility — calibrated for 50 to 500-person tech companies. The answer to "have we missed the window?" is no, but only if the inventory starts now. This guide is part of our comprehensive series on the post-quantum threat, the risk that makes migration a present-tense priority.
Why does a PQC migration take 5–7 years, and have SMBs already missed the window?
A PQC migration takes 5 to 7 years for a small enterprise because it is not a software update. Think of it as a global synchronisation exercise involving hardware, software, vendors, and partners all at once.
Four things make it long. The cryptographic inventory alone takes 1 to 3 years. Vendors add PQC support 2 to 3 years after NIST standards are finalised. HSM replacement has procurement cycles you cannot compress. And a global shortage of PQC-familiar engineers creates bottlenecks for everyone simultaneously.
The 2028 lower bound is the scenario to plan against. If a cryptographically relevant quantum computer arrives in 2028 and you start your inventory today, you have roughly two years before active migration must be underway. Defer Phase 0 another 12 to 18 months and you cannot credibly claim you will finish before the upper bound of the window.
Start now and you can realistically complete migration before 2033. Wait and you cannot. Understanding the harvest-now-decrypt-later attack is the essential context for committing to a migration programme: HNDL makes this a present-tense, accumulating risk, because exposure begins when traffic is captured, not at Q-Day.
What is a cryptographic inventory, and why does it take 1–3 years even for a small company?
A cryptographic inventory is a systematic map of every place cryptographic algorithms are in use across your business: application code, TLS configurations, certificate stores, HSMs, SaaS vendor dependencies, CI/CD pipeline signing, internal service authentication, and data-at-rest encryption. It is Phase 0. You cannot skip it.
You need automated discovery and manual reporting working together. Automated scanners like QuantumGate’s Crypto Discovery Tool can map cryptographic primitives in production code and traffic. But what they cannot replace is the manual review of vendor contracts, HSM specs, and legacy system documentation. Scanning is the foundation of Phase 0, not the completion of it.
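To make the automated half concrete, here is a minimal first-pass discovery sketch in Python. It only greps source text for algorithm identifiers; a real tool such as the discovery scanner mentioned above also inspects linked libraries, certificates, and live traffic. The pattern lists and file extensions are illustrative assumptions, not an exhaustive taxonomy.

```python
import re
from pathlib import Path

# Illustrative patterns only: quantum-vulnerable primitives vs. PQC-ready ones.
QUANTUM_VULNERABLE = re.compile(
    r"\b(RSA|ECDSA|ECDH|X25519|Ed25519|DSA|DH)\b", re.IGNORECASE)
PQC_READY = re.compile(r"\b(ML-KEM|ML-DSA|Kyber|Dilithium|SLH-DSA)\b")

def scan_source(root: str, exts=(".py", ".go", ".ts", ".java", ".tf", ".yaml")):
    """Return {file: sorted algorithm names} as a first-pass inventory entry."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in exts or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        hits = {m.upper() for m in QUANTUM_VULNERABLE.findall(text)}
        hits |= set(PQC_READY.findall(text))
        if hits:
            findings[str(path)] = sorted(hits)
    return findings
```

Even a crude scan like this gives Phase 0 a starting worklist; the manual review of vendor contracts and HSM specs then fills in everything a text scan cannot see.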
HSMs are the highest-complexity item in the whole inventory. PQC algorithms introduce larger key and signature sizes that some HSM firmware simply cannot handle — others require hardware replacement outright. Identifying your HSMs and their upgrade paths in Phase 0 is what turns Phase 2 into a planned cost rather than a nasty surprise.
Before you commit to a full inventory, do the PQCMM self-assessment. The PQC Maturity Model, published by the PKI Consortium at pkic.org, is free, takes a few hours, and gives you a communicable baseline you can put in front of a board or an auditor.
One other thing worth flagging: cloud providers handle platform-level TLS but not application-layer cryptography. JWTs, API keys, database encryption at rest, internal service authentication — those are all your responsibility regardless of which cloud you’re on.
The NIST NCCoE Special Publication 1800-38 (“Migration to Post-Quantum Cryptography”) is the authoritative free guide for organisations without dedicated cryptographic engineering teams. It’s available at nccoe.nist.gov.
What are the Phase 1 hybrid TLS quick wins a small company can deploy this quarter?
Phase 1 is the first concrete migration action — and for many SMBs it is already done without their knowing it. The task is deploying hybrid post-quantum key exchange (X25519MLKEM768) on all external TLS 1.3 connections. It combines classical X25519 with post-quantum ML-KEM-768 (FIPS 203). Security holds if either algorithm is unbroken.
If your infrastructure is proxied through Cloudflare, Phase 1 is already done. Cloudflare has supported hybrid post-quantum key agreement since October 2022 and now deploys X25519MLKEM768 across TLS 1.3 connections at zero configuration cost. Your action is just verification: confirm your traffic is proxied and the proxy is active.
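One quick verification signal: Cloudflare adds a `CF-RAY` header to every response it proxies. The sketch below checks for it with the Python standard library; treat it as a convenience check on the assumption that your origin does not set that header itself, not as proof that hybrid key exchange was negotiated.

```python
from urllib.request import Request, urlopen

def is_cloudflare_proxied(headers) -> bool:
    """True if the response carries Cloudflare's CF-RAY header.
    Accepts any mapping or http.client.HTTPMessage of response headers."""
    return any(k.lower() == "cf-ray" for k in headers)

# Usage (network call; point it at your own domain):
# resp = urlopen(Request("https://example.com", method="HEAD"))
# print(is_cloudflare_proxied(resp.headers))
```

A header check confirms proxying; to confirm the negotiated key-exchange group itself, a TLS-level inspection tool is needed.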
The broader ecosystem has moved anyway. iOS 26 (September 2025) enabled X25519MLKEM768 by default. Over 60% of human-generated TLS traffic to Cloudflare’s network now uses hybrid ML-KEM. Your users are already generating post-quantum traffic whether your backend has caught up or not.
For Microsoft-stack infrastructure, PQC APIs are now generally available in Windows Server 2025, Windows 11, and .NET 10 via the CNG libraries. For other stacks, OpenSSL 3.x with the oqs-provider plugin works. For the specific Cloudflare and Microsoft tooling available for Phase 1 deployment, see our article on post-quantum cryptography already running in production.
Phase 1 is a bridge, not a destination. Hybrid TLS protects transport-layer key exchange but does not touch authentication credentials, code signing, stored data, or internal service cryptography. Phase 2 picks up where Phase 1 leaves off. For a deeper look at the architecture behind Phase 1 hybrid deployment, our architecture article covers the technical foundations and how they extend into later migration phases.
What does Phase 2 authentication layer migration involve, and why is it harder than Phase 1?
Phase 2 migrates identity systems, certificates, certificate authority infrastructure, code signing, and authentication protocol cryptography to post-quantum algorithms. The primary algorithm you’re working with here is ML-DSA (FIPS 204, formerly CRYSTALS-Dilithium).
The size implications are real and they matter at an infrastructure level. ML-DSA produces signatures of 2.4 to 4.6 KB compared to roughly 70 bytes for ECDSA. Certificate data accounts for close to 40% of all data transferred on non-resumed TLS connections. This is an infrastructural challenge, not just an algorithmic one.
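The arithmetic behind that claim can be sketched directly. Using the FIPS 204 parameter-set sizes and approximate ECDSA P-256 sizes (the exact ECDSA figures vary slightly with DER encoding), the per-handshake overhead of swapping signatures and public keys across a typical three-certificate chain looks like this:

```python
# Approximate sizes in bytes: FIPS 204 parameter sets vs. ECDSA P-256.
SIG_BYTES = {"ECDSA-P256": 71, "ML-DSA-44": 2420, "ML-DSA-65": 3309, "ML-DSA-87": 4627}
PUB_BYTES = {"ECDSA-P256": 65, "ML-DSA-44": 1312, "ML-DSA-65": 1952, "ML-DSA-87": 2592}

def chain_overhead(alg: str, chain_len: int = 3) -> int:
    """Extra bytes on a non-resumed handshake if every certificate in a
    chain of `chain_len` carries an `alg` signature and public key
    instead of ECDSA P-256."""
    per_cert = (SIG_BYTES[alg] - SIG_BYTES["ECDSA-P256"]) \
             + (PUB_BYTES[alg] - PUB_BYTES["ECDSA-P256"])
    return per_cert * chain_len

# Even the smallest parameter set adds roughly 10 KB per fresh handshake:
print(chain_overhead("ML-DSA-44"))  # 10788
```

Ten extra kilobytes per non-resumed handshake is why Phase 2 is an infrastructure problem: it touches MTU behaviour, CDN costs, and handshake latency, not just library calls.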
Phase 2 follows Phase 1 for a straightforward reason: HNDL attacks target confidentiality, which means key exchange is the relevant exposure today. A recorded key exchange can be broken retroactively, but a forged signature only matters at the moment of attack, so authentication becomes urgent as Q-Day approaches, not before. Build the team's capability in Phase 1 first, then tackle this.
The Phase 2 scope covers TLS certificate PKI migration, JWT signing key rotation, API authentication credentials, code signing, and internal service-to-service tokens. For Microsoft-stack companies, PQC support in Active Directory Certificate Services (ADCS) is targeted for early 2026 and is a key enabler. HSMs are the principal budget trigger here — which is exactly why identifying firmware upgrade paths in Phase 0 is so important. It makes Phase 2 a planned cost, not a shock.
What is cryptographic agility, and why is it the real long-term goal of a PQC migration?
Cryptographic agility is the ability to swap, update, or migrate cryptographic algorithms across your systems without a full redesign. It is both a technical design pattern and a governance posture — and it is the real point of all this work.
The technical pattern looks like this: cryptographic components are swappable rather than hard-coded; key management is configuration-driven; certificate lifecycle management is centralised; and new code gets automatically scanned for algorithm usage. When the algorithm changes, you update configuration, not the system itself.
Phase 3 is not “all systems now use PQC.” It is “we can update our cryptographic primitives without emergency rewrites.” This matters because the PQC migration is not the last cryptographic transition your business will face. MD5 caused problems more than 20 years after its deprecation. SHA-1 took over a decade to retire. Agility reduces the cost of every future transition, not just this one.
Meta Engineering’s five-level PQC Migration Levels ladder maps the journey: PQ-Unaware → PQ-Aware → PQ-Ready → PQ-Hardened → PQ-Enabled. Most SMBs sit at PQ-Unaware or PQ-Aware today. Phase 3 is PQ-Enabled — and the ladder gives you a communicable position to put in front of your board.
If you are running a Zero Trust Architecture programme, integrate PQC migration into that governance structure rather than treating it as a separate workload. The inventory workstreams overlap and the executive sponsorship requirements are identical. For the architecture behind hybrid deployment and cryptographic agility patterns, see our article on how hybrid post-quantum cryptography works.
How do you prioritise which systems to migrate first?
Apply this from the moment Phase 0 begins. The framework is a two-axis matrix: data sensitivity versus migration complexity. High-sensitivity, low-complexity goes first.
- High-sensitivity, low-complexity: External-facing TLS on web and API endpoints — Phase 1, already done if you are Cloudflare-proxied.
- High-sensitivity, high-complexity: PKI and certificate infrastructure, HSM-protected keys, internal authentication systems. These have the longest HNDL exposure windows. Plan them in Phase 0.
- Low-sensitivity, high-complexity: Legacy systems that resist re-engineering. Defer, retire, or bridge at the network layer.
- Low-sensitivity, low-complexity: Internal developer tooling, staging environments, low-risk internal services. These migrate last.
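The matrix is simple enough to encode directly, which is useful when triaging a Phase 0 inventory of a few hundred systems. The quadrant wording below is illustrative; calibrate the labels to your own estate.

```python
def migration_priority(sensitivity: str, complexity: str) -> str:
    """Map a system's (sensitivity, complexity) quadrant to a rough
    migration phase. Inputs are "high" or "low"."""
    matrix = {
        ("high", "low"):  "Phase 1: migrate first (quick wins)",
        ("high", "high"): "Plan in Phase 0, execute in Phase 2",
        ("low", "high"):  "Defer: reassess, retire, or bridge at the network layer",
        ("low", "low"):   "Migrate last (Phase 3 cleanup)",
    }
    return matrix[(sensitivity, complexity)]
```

Applied to an inventory spreadsheet, two columns of high/low judgments produce a defensible migration order without any further debate per system.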
Customer PII, financial records, health data, and IP define the high-sensitivity tier regardless of system complexity. For legacy systems that cannot be re-engineered, network-level cipher translation — the approach Palo Alto Networks takes — can make legacy devices appear quantum-safe at the network boundary without application-level changes. It is a bridge, not a substitute.
The vendor-managed boundary is worth being clear about: cloud provider platform-level TLS is covered. Your application code’s cryptographic library calls, customer-managed keys at rest, code-signing infrastructure, and HSM-protected keys are not.
How do you form a PQC migration task force and make the case to your board?
A minimum viable SMB task force needs four roles: a technical lead (CTO or senior engineer), a security function representative, a legal and compliance representative, and an executive sponsor with budget authority. Vendor management is a fifth distinct responsibility — do not fold it into general engineering.
The executive sponsor must have budget authority. Without it, the programme stalls the first time an HSM replacement quote lands on someone’s desk.
The board case is risk management, not a technology upgrade. Three frames work well here. First, HNDL as data breach liability: identify your three most sensitive data assets, estimate their retention period, and calculate exposure against the 2028 to 2033 window. Second, regulatory exposure: GDPR, HIPAA, SOC 2, and ISO 27001 create compliance risk even before a quantum computer exists. Third, competitive risk: the PQCMM baseline is what enterprise customers are starting to request in procurement.
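The first frame, HNDL as data breach liability, reduces to arithmetic you can put on a board slide. A small sketch of the calculation, using the article's 2028 lower bound as the worst-case CRQC arrival:

```python
def hndl_exposure_years(created_year: int, retention_years: int,
                        crqc_window: tuple = (2028, 2033)) -> int:
    """Years a record created in `created_year` remains sensitive after
    the earliest plausible CRQC arrival. Anything above zero means the
    data is inside the HNDL risk window today."""
    sensitive_until = created_year + retention_years
    return max(0, sensitive_until - crqc_window[0])

# A health record created in 2025 with a 10-year retention period is
# readable for 7 years after a worst-case 2028 CRQC arrival:
print(hndl_exposure_years(2025, 10))  # 7
```

Run this against your three most sensitive data assets; any non-zero result is the liability figure the executive sponsor conversation should open with.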
For SaaS companies with federal agency customers, this one is non-negotiable: CNSA 2.0 compliance becomes mandatory at point of acquisition from 1 January 2027. Waiting until 2027 is already too late. For regulatory depth, see our companion article on the regulatory deadlines that set external timeline pressure on your migration.
Why do small tech companies face a disproportionate PQC migration challenge — and how do they compensate?
SMBs face four structural disadvantages. There is no dedicated security team, so migration competes with product engineering for the same people. HSM budget is tighter, so Phase 2 hardware costs hit harder per employee. Vendor dependency is more concentrated, so each vendor’s PQC posture has outsized impact. And SMBs have reduced vendor leverage — large enterprises can contractually demand roadmap commitments; most small companies cannot.
Around half of IT leaders in a Computing UK poll expected to be completely or pretty reliant on vendors for their PQC migration. That is not necessarily a problem, but you need to be clear about where vendor coverage ends and your responsibility begins. Cloud providers handle platform-level TLS. Application-layer cryptography is yours regardless.
Five strategies compensate for all of this:
- Phase 1 via Cloudflare or Microsoft — no in-house TLS expertise required. Cloudflare-proxied? Phase 1 is already done.
- PQCMM self-assessment at pkic.org scopes the programme before any budget is committed — see the inventory section.
- NIST NCCoE SP 1800-38 is free and authoritative, available at nccoe.nist.gov.
- Open-source PQC libraries — liboqs, BouncyCastle, wolfSSL, OpenSSL 3.x with the oqs-provider plugin — reduce implementation costs significantly.
- Zero Trust convergence reduces governance overhead by combining two overlapping programmes under unified sponsorship.
The one structural advantage SMBs have is timeline. A smaller estate means 5 to 7 years rather than 12 to 15 — but only if the programme starts promptly.
What to do this week: (1) Complete the PQCMM self-assessment at pkic.org; (2) verify whether your web and API traffic is Cloudflare-proxied; (3) identify your three most sensitive data stores and their retention periods against the 2028 to 2033 CRQC window; (4) schedule the executive sponsor conversation — frame it as risk management with a calculable liability, not a technology upgrade.
Frequently asked questions
We’re only 80 people — do we really need a formal migration programme?
Yes. HNDL attacks target data value, not headcount. GDPR, HIPAA, and PCI-DSS do not scale by company size. “Formal” at SMB scale means a named task force, a Phase 0 scope commitment, and an executive sponsor — not a dedicated security department.
Can we just rely on our cloud provider (AWS, Azure, GCP) to handle the quantum encryption migration?
Partially. Platform-level TLS is handled at zero configuration cost — Cloudflare-proxied traffic already uses hybrid key exchange by default. What cloud providers do not handle: your application code’s cryptographic library calls (JWTs, HMAC-based API keys, custom encryption), customer-managed keys at rest, code-signing infrastructure, and HSM-protected keys. If you manage your own keys anywhere, those assets are in your scope.
What is the difference between a cryptographic inventory and a security audit?
A security audit assesses whether your controls are correctly configured. A cryptographic inventory maps every cryptographic primitive in use — algorithms, key sizes, protocols, certificates, libraries — without a pass/fail judgement. It is discovery and mapping, not assessment. A security audit will not produce the migration-ready map that Phase 0 requires.
Should we do hybrid or pure PQC first?
Hybrid now. Deploy X25519MLKEM768 for Phase 1 TLS. Security holds if either algorithm is unbroken, and it is backward-compatible. Pure PQC is the Phase 3 destination. The industry consensus from Cloudflare, Google, NIST, and Apple is hybrid for Phase 1.
What does NIST NCCoE SP 1800-38 provide?
NIST NCCoE Special Publication 1800-38 covers cryptographic inventory, migration scope prioritisation, hybrid implementation, and full PQC deployment planning — calibrated for organisations without dedicated cryptographic engineering teams. Freely available at nccoe.nist.gov. The most comprehensive free reference for SMBs planning a migration.
What is the PQC Maturity Model (PQCMM)?
The PQCMM, published by the PKI Consortium, assesses your current PQC posture before you commit to a programme. It is both a structured self-assessment and a communicable baseline for boards or auditors. Free at pkic.org, takes a few hours. Do it before scoping the inventory.
How does Zero Trust Architecture relate to PQC migration?
They are convergent programmes. Every Zero Trust pillar eventually requires quantum-resistant cryptographic primitives. If you are running a Zero Trust programme, integrate PQC migration into that governance structure — two separate workloads cost more. If not, the PQC task force is a natural foundation to build Zero Trust on.
What is CNSA 2.0 and does it apply to our SMB?
CNSA 2.0 is the NSA’s cryptographic mandate for national security systems. For most SMBs it does not apply directly. Where it matters: SaaS and tech companies selling to U.S. federal agencies face indirect compliance pressure — CNSA 2.0 compliance becomes mandatory at point of acquisition from 1 January 2027. Your federal customers will start requiring it in procurement before that deadline.
How does HNDL risk apply to data we hold today?
Adversaries capture and archive encrypted data today, intending to decrypt it when a quantum computer arrives. Multiple national security agencies have confirmed this as active. If your data retention period extends into the 2030s, data you create today is already in the HNDL risk window.
What open-source libraries are available for PQC implementation?
liboqs (Python, Go, Java, .NET), BouncyCastle (Java/C# with ML-KEM and ML-DSA support), wolfSSL (C, embedded/IoT), and OpenSSL 3.x with the oqs-provider plugin. Library selection is a Phase 1–2 detail — do not let it block Phase 0.
The migration clock is not a future concern. The data your company encrypts today is already in the harvest window. A 5 to 7 year migration timeline against a 2028 to 2033 CRQC arrival window means the only safe moment to begin Phase 0 was last year. The next best moment is this week.