Business | SaaS | Technology
Apr 29, 2026

Which Companies Are Already Running Rust in Production and What Were the Results

AUTHOR

James A. Wondrasek

The Rust adoption debate keeps coming back to one question: has anyone actually done this at scale, and what happened? The answer is yes — Google, AWS, ClickHouse, and Brave Browser all have Rust in production, with documented, quantified results.

But the picture is more useful than the headlines suggest. The most detailed public account of real-world Rust adoption shows that 98% of one major codebase is still written in C++ after several years of deliberate effort. That’s not a failure. That’s the most honest data point available about what incremental adoption actually looks like.

In this article we’re going to go through the production case studies, lay out the numbers, and pull out the one pattern that holds across every successful deployment: incremental adoption, not big-bang rewrite. This guide is part of our comprehensive coverage of Microsoft’s billion-line Rust rewrite — the industry-level direction driving these decisions is covered there.


What Do the Numbers Actually Show for Companies Running Rust in Production?

Here’s the thing about memory safety vulnerabilities: they account for roughly 70% of all security CVEs in C/C++ codebases. Not a theoretical risk — a documented, consistent property of large-scale systems software measured across Microsoft Windows, Chrome, and Android for over a decade. Every case study here is a response to that same problem.

The 2026 production evidence base covers four distinct adoption archetypes:

  1. Google Android: a new-code-only policy inside a massive existing codebase.
  2. AWS Firecracker: a greenfield system written entirely in Rust.
  3. ClickHouse: selective integration of Rust libraries into a large C++ codebase.
  4. Brave Browser: a targeted rewrite of a single performance-critical component.

Not one company in this evidence base did a full rewrite of an existing system.

Google’s vulnerability density figure — 0.2 memory safety vulnerabilities per million lines of Rust versus approximately 1,000 per million lines of C/C++ — is the single most important quantitative benchmark you’ll find. AWS Firecracker’s performance numbers (125 ms boot time, 150 microVMs per second per host, under 5 MiB overhead per VM) establish that Rust delivers cloud-scale performance from a greenfield starting point. And ClickHouse’s 98% C++ figure is the antidote to both hype and dismissal — it tells you what “incremental” actually means across several years in a serious codebase.


How Did Google Reduce Android Memory Vulnerabilities by 1000x Using Rust?

Google started introducing Rust into Android in 2019 with a simple policy: all new systems-language code gets written in Rust, nothing existing gets rewritten. By 2025, Android had approximately 5 million lines of Rust.

The results were striking. Memory safety vulnerability density dropped from approximately 1,000 per million lines of code in C/C++ to 0.2 per million lines in Rust — a 1,000-fold improvement. The annual count of memory safety vulnerabilities in Android fell from 223 in 2019 to fewer than 50 in 2024, even as the codebase grew.

And it wasn’t just security. Google measured engineering quality via DORA metrics too. Rust changes in Android have a 4x lower rollback rate than equivalent C++ changes and require around 25% less code review time — efficiency gains that translate directly to lower maintenance cost.

The honest counterweight: the CrabbyAVIF near-miss (CVE-2025-48530), a vulnerability in an AVIF image parser containing unsafe Rust code. Approximately 4% of Android’s Rust codebase uses unsafe blocks — necessary for FFI and hardware access, but unverifiable by the compiler. Android’s Scudo hardened allocator rendered it non-exploitable before it could be weaponised.

The 1,000x figure reflects vulnerability density across the Rust portion of the codebase, not the elimination of all risk. Defence-in-depth — sanitisers, fuzzing, hardened allocators — remains essential. What Rust changes is the baseline density of the problem you’re defending against.


What Did AWS Build with Rust and What Performance Did Firecracker Achieve?

AWS Firecracker is a Virtual Machine Monitor written entirely in Rust, built for serverless workloads. Greenfield — no legacy C++ to integrate or migrate from. Every AWS Lambda invocation runs on it. Firecracker also powers AWS Fargate. This is Rust load-bearing at hyperscaler production scale.

The performance figures: 125 millisecond boot time to userspace, up to 150 microVMs created per second per host, memory overhead under 5 MiB per virtual machine. The entire Firecracker codebase is roughly 50,000 lines of Rust. QEMU — the general-purpose virtualisation tool it was designed to replace — runs to nearly 2 million lines of C.

No FFI complexity, no migration friction, no mixed-language build system. Firecracker is what Rust produces with no legacy constraint: production infrastructure running a material fraction of global cloud computing at 125 millisecond cold-starts and sub-5 MiB per VM.


What Did ClickHouse Actually Do with Rust — and Why Is 98% Still C++?

ClickHouse is an open-source analytic database with 1.5 million lines of C++ — a codebase with a full sanitiser suite, coverage-guided fuzzing, and a culture of rigorous engineering.

ClickHouse CTO Alexey Milovidov presented the Rust journey at FOSDEM 2026. His account is the most detailed and candid public record of real-world C++/Rust integration you’re going to find.

The adoption sequence went like this:

  1. BLAKE3 cryptographic hash library (2022): First Rust integration. Low-risk, well-isolated. Primarily a test of whether the build system could handle a Rust dependency at all.
  2. CLI history navigation: An external contributor added Rust-backed terminal history. Another low-friction test.
  3. PRQL (Pipelined Relational Query Language): A Rust-native query language ClickHouse wanted to support. This is where production friction first appeared.
  4. delta-kernel-rs (Databricks Delta Lake library): The first “actually needed” component — a Rust library existed for Delta Lake integration before any C++ alternative would be written.

The integration mechanism: Corrosion — an open-source CMake plugin that compiles and links Rust crates into C++ CMake projects without rewriting the build system.
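The workflow Corrosion enables can be sketched in a few lines of CMake. This is a generic illustration based on Corrosion's documented usage, not ClickHouse's actual build files — the crate path and target names are hypothetical:

```cmake
include(FetchContent)
FetchContent_Declare(
    Corrosion
    GIT_REPOSITORY https://github.com/corrosion-rs/corrosion.git
    GIT_TAG v0.5.0
)
FetchContent_MakeAvailable(Corrosion)

# Import the Rust crate; Corrosion drives cargo and exposes the
# resulting static library as a regular CMake target.
corrosion_import_crate(MANIFEST_PATH rust/blake3_wrapper/Cargo.toml)

# Link it into an existing C++ target like any other library.
target_link_libraries(server_lib PRIVATE blake3_wrapper)
```

The point is what this omits: no hand-written cargo invocations, no custom build scripts, and no restructuring of the existing C++ CMake project.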

Two friction points worth examining closely.

The PRQL panic bug. The fuzzer found a panic-inducing crash shortly after integration. Rust’s panic mechanism terminates the running thread or process — different from C++ exceptions, and in a server application it can bring down the entire process. ClickHouse had to audit and patch the panic paths before shipping. Fuzzing is not optional in a mixed Rust/C++ server codebase.
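The containment pattern that bug points to can be sketched in Rust. This is an illustrative sketch, not PRQL's or ClickHouse's actual code — the function name and error convention are hypothetical, and it assumes the default `panic = "unwind"` profile (under `panic = "abort"`, `catch_unwind` cannot help):

```rust
use std::panic;

// Hypothetical FFI entry point: any panic raised inside the Rust
// library is caught here and converted to an error code, so it
// never unwinds across the C++ boundary and kills the server.
#[no_mangle]
pub extern "C" fn compile_query_checked(valid: bool) -> i32 {
    let result = panic::catch_unwind(|| {
        if !valid {
            panic!("parser invariant violated"); // simulated library bug
        }
        0 // success
    });
    result.unwrap_or(-1) // panic caught: report failure, process survives
}

fn main() {
    assert_eq!(compile_query_checked(true), 0);
    assert_eq!(compile_query_checked(false), -1);
    println!("panic contained at the FFI boundary");
}
```

Wrapping every exported entry point this way is what "audit and patch the panic paths" amounts to in practice — and fuzzing is how you find the paths you missed.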

The delta-kernel-rs sanitiser incompatibility. Two days before Milovidov’s FOSDEM 2026 presentation, the team discovered the Delta kernel library was incompatible with ClickHouse’s memory sanitiser. Each new Rust dependency requires explicit verification that it works in the same sanitised build environment as the surrounding C++ code.

Milovidov’s framing: “Introducing it incrementally, we did not rewrite ClickHouse in Rust. We just opened the door for Rust.”

98% C++ is not a failure. It means ClickHouse gets Rust where Rust delivers value — in new libraries, in external integrations — without the cost and risk of rewriting working, production-tested code. If the ClickHouse model maps to your situation and you want to build the business case for your own migration, that framework is developed in detail separately.


What Did Brave Browser Achieve by Targeting a Single Rust Component?

Brave targeted a single performance-critical component — the adblock engine — rather than adopting Rust across the browser.

The engineering effort, described publicly by Shivan Kaul Sahib, VP of Privacy and Security at Brave, was an architectural refactor within Rust: replacing heap-allocated data structures with FlatBuffers serialisation, a zero-copy library that eliminates parsing overhead via direct memory access to serialised binary data.

Starting from 162 MB in version 1.79.118 (May 2025), the measured memory footprint dropped to 104 MB by version 1.85.118 — a 58 MB reduction across Android, iOS, and desktop, driven by the adblock engine work.

The strategic logic: Rule of Two compliance — a Chromium security policy stating that code should never combine more than two of three risk conditions: processing untrustworthy inputs, implementation in a memory-unsafe language, and execution at high privilege. An adblock engine processes untrusted internet content at high volume in a privileged context, so memory safety is a requirement, not a preference. And the result speaks for itself: a substantial memory reduction delivered in months, with the rest of the browser codebase untouched.


What Pattern Do All Successful Production Rust Deployments Share?

Put all four cases side by side. Google Android: new-code-only policy, five years (2019–2024), 1,000x improvement in vulnerability density. AWS Firecracker: 100% greenfield Rust, 125 millisecond boot times from day one. ClickHouse: four Rust libraries over several years, 98% still C++, Delta Lake integration and sanitiser suite maintained. Brave: one component, 75% memory reduction, months not years.

The mechanism is consistent across all of them. Identify a component where memory safety risk is high, where a new dependency has no C++ alternative, or where performance gains are quantifiable. Introduce Rust for that component via FFI or greenfield. Measure results. Expand.

FFI (Foreign Function Interface) is the interoperability mechanism in every mixed-language deployment — the bridge that makes incremental adoption possible, and where integration complexity concentrates. Sanitiser compatibility, panic handling, and memory allocation ownership all require explicit verification at that boundary.
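As a minimal sketch of what that boundary looks like (names hypothetical, not taken from any of the case studies): a Rust function exported with a C-compatible ABI, which C++ would declare as `extern "C" uint64_t fnv1a_hash(const uint8_t* data, size_t len);` and call like any C function.

```rust
// A Rust function exported with an unmangled, C-compatible symbol
// and calling convention, callable from C++ via a header declaration.
#[no_mangle]
pub extern "C" fn fnv1a_hash(data: *const u8, len: usize) -> u64 {
    // SAFETY: the caller must supply a valid pointer/length pair —
    // exactly the contract the compiler cannot verify at an FFI boundary.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().fold(0xcbf29ce484222325u64, |h, &b| {
        (h ^ u64::from(b)).wrapping_mul(0x100000001b3) // FNV-1a step
    })
}

fn main() {
    let msg = b"hello";
    let h = fnv1a_hash(msg.as_ptr(), msg.len());
    // Same input, same hash: the exported function is deterministic.
    assert_eq!(h, fnv1a_hash(msg.as_ptr(), msg.len()));
    println!("{h:#x}");
}
```

Note the `unsafe` block: the FFI boundary is where Rust's guarantees are suspended and the verification burden described above concentrates.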

The ClickHouse adoption sequence — BLAKE3 → CLI tooling → PRQL → delta-kernel-rs — is a reproducible template for any large C++ codebase. Each step verified build system and sanitiser compatibility before moving to higher-stakes integrations.

The consistent friction points:

  1. Panic handling: a Rust panic terminates the thread or process, so every exported entry point needs explicit containment at the FFI boundary.
  2. Sanitiser compatibility: each Rust dependency must be verified in the same instrumented build environment as the surrounding C++ code.
  3. Build system integration: solvable with tools like Corrosion for CMake, but still a step that has to be proven out per dependency.

None of these are blockers. But “integrate a Rust library” is a verification process, not a one-afternoon task. For larger-scale migrations, AI-assisted translation as the scale mechanism is an emerging option worth watching.

The practical implication: the question is not “do we rewrite?” It’s “which component do we start with?”


What Does the Production Evidence Tell You About Evaluating Rust Adoption?

Five years of production Rust deployments across four organisations point to conclusions that hold regardless of scale.

On security outcomes: Google Android provides the most defensible board-level metric — 0.2 memory safety vulnerabilities per million lines of Rust versus approximately 1,000 per million lines in C/C++. For any organisation where a single CVE has board-level or regulatory consequences, that figure alone justifies evaluation.

On engineering quality: Rust changes in Android have a 4x lower rollback rate and 25% less code review time than C++ equivalents — a developer productivity investment, not merely a security expenditure. Budget for a 2–3 month onboarding dip.

On realistic timelines: ClickHouse’s 98% C++ sets the correct expectation for incremental adoption in a large codebase. Brave’s 75% memory reduction is the benchmark for targeted component work — measurable within a single release cycle. Platform-scale security improvements require multi-year commitment.

On risks and limits: Unsafe Rust blocks, panic handling in integrated libraries, sanitiser compatibility — all solvable with operational process. Known complexity of a mixed-language environment, not Rust-specific dealbreakers.

The recommended starting frame: Not “should we rewrite in Rust?” but “where does memory safety risk justify a targeted Rust component?” Apply the Rule of Two. Look for components where a Rust-native library already exists and no C++ alternative is actively maintained. Start there.

The formal business case — grounded in cost, timeline, and risk — is developed in the business case and migration plan for moving legacy C++ to Rust. For the strategic case for Rust migration and the industry shift Microsoft’s announcement represents, that context is covered there.


Frequently Asked Questions

Has any company done a full Rust rewrite rather than incremental adoption?

AWS Firecracker is the closest equivalent — written entirely in Rust from the ground up. But Firecracker is greenfield; there was no existing codebase to rewrite. No major production system has completed a full rewrite of a large existing C/C++ codebase into Rust. Google, ClickHouse, and Brave all chose incremental adoption.

What is the biggest technical challenge teams report when adopting Rust?

FFI boundary management and panic handling in integrated Rust libraries. ClickHouse encountered both — a panic-inducing crash in one integration, a memory sanitiser incompatibility in another two days before a major conference presentation. Build system integration is a secondary friction point; Corrosion solves it for CMake codebases.

How long does it typically take before Rust adoption delivers measurable results?

Depends on scope. Brave’s 75% memory reduction was measurable within a single release cycle — months. Google’s 1,000x vulnerability density improvement took five years of compounding new-code adoption (2019–2024). A targeted component can produce a measurable result within a quarter; platform-scale security impact requires multi-year commitment.

Did Google really get a 1000x security improvement from Rust?

The 1,000x figure refers to vulnerability density — vulnerabilities per million lines of code — not an overall security improvement. Rust code in Android produces approximately 0.2 per million lines; C/C++ produces approximately 1,000. Five years of production data (2019–2024). Google changed the density of the problem class that constitutes 70% of its CVE workload.

Is Rust safe from all memory safety bugs?

No. The CrabbyAVIF near-miss (CVE-2025-48530) demonstrates that unsafe Rust blocks — necessary for FFI and hardware interaction — can still produce memory safety bugs the compiler cannot catch. Approximately 4% of Android’s Rust code uses unsafe blocks. Sanitiser and fuzzing coverage is necessary for any unsafe code you write or depend on.

Why did ClickHouse only integrate four Rust libraries after several years of adoption?

Each integration had to pass the full sanitiser suite (AddressSanitizer, ThreadSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer) and extensive fuzzing infrastructure. Adding a Rust library means verifying it works correctly in the same instrumented build environment as the surrounding C++ code. Rigour, not reluctance.

How does Rust interoperate with C++ in a production mixed-language codebase?

Via FFI (Foreign Function Interface): Rust exposes C-compatible function signatures that C++ can call, and vice versa. In practice, most teams use Corrosion to manage build system integration, then write thin C-compatible wrapper interfaces around Rust library APIs. The FFI boundary is where sanitiser and panic-handling verification matters most.

What is the Rule of Two and how does it apply to Rust adoption?

The Rule of Two is a Chromium security policy: code must never combine more than two of three risk factors — handling untrustworthy inputs, implementation in a memory-unsafe language, and execution at high privilege. Use it as a component selection heuristic — it identifies where Rust adoption delivers the highest risk reduction per unit of engineering effort.

Is Rust worth adopting for a company that is not a hyperscaler?

Brave’s result — 75% memory reduction, measurable within months, no change to the rest of the browser — is the relevant data point. The question is not “are we big enough?” It’s “do we have a component where memory safety risk justifies the training investment?” For any product handling untrusted input or sensitive data, the answer is almost certainly yes.

What metrics should a CTO use to make the case for Rust adoption internally?

Three defensible board-level metrics: (1) Vulnerability density reduction — Google’s 0.2 versus 1,000 per million lines translates to expected CVE reduction for any C/C++ codebase; (2) Rollback rate — Google’s 4x lower rollback rate frames adoption as a developer reliability investment; (3) Component-level metrics (Brave’s 75%, Firecracker’s 125 ms boot time) provide before/after figures from a bounded scope that boards can evaluate.

Does adopting Rust require hiring Rust developers?

Not according to Google’s documented approach. Google trained existing systems engineers — primarily those with C++ backgrounds — rather than hiring externally. C++ experience is the strongest predictor of fast Rust adoption. Budget for a 2–3 month learning curve per developer.

What happened with the PRQL library panic bug in ClickHouse?

The fuzzer found a panic-inducing crash: a specific query string caused the server process to terminate. Rust’s panic mechanism — an unrecoverable error — behaves differently from C++ exceptions and must be explicitly handled at the FFI boundary in server applications. Fuzzing is necessary in mixed Rust/C++ codebases to catch panic paths the compiler does not prevent.
