The EU Data Act (Regulation 2023/2854) became fully applicable September 12, 2025. It mandates cloud switching procedures and dismantles vendor lock-in barriers. At the same time, the European Commission is investigating AWS and Microsoft Azure under the Digital Markets Act as potential gatekeepers, a designation that would impose additional interoperability obligations.
If you’re running tech infrastructure at an SMB—50 to 500 employees, SaaS, FinTech, HealthTech, whatever—you’re dealing with compliance requirements. You need functional equivalence for IaaS migrations and open interfaces for PaaS and SaaS. Egress fee elimination happens by January 12, 2027.
Here’s the fundamental challenge: the US CLOUD Act’s extraterritorial data access contradicts EU data sovereignty principles and GDPR Article 48 requirements. US cloud providers remain subject to American legal demands regardless of where your data sits.
This guide covers the technical implementation roadmap—switching procedures, proportionate charges, transitional period negotiations, contractual terms, and risk mitigation architectures. You need to avoid compliance penalties, enable customer switching rights, understand revenue recognition impacts, and navigate US-EU regulatory tensions. Understanding these requirements is essential for technology leaders navigating the European digital sovereignty movement. Let’s get into it.
The EU Data Act entered into force on January 11, 2024, and became fully applicable September 12, 2025. It establishes cloud switching and interoperability requirements for data processing services—IaaS, PaaS, SaaS—to reduce vendor lock-in effects.
There’s a timeline you need to track. Entry into force happened in January 2024. Full applicability hit September 2025. Egress fee elimination comes January 2027. These are three different dates with three different compliance checkpoints.
Data processing services are defined to include IaaS, PaaS, SaaS, Storage as a Service, and Database as a Service. Basically, if you’re providing cloud infrastructure or software, you’re covered.
The goal is eliminating lock-in effects. Technical requirements mean functional equivalence. Commercial requirements mean no egress fees. Contractual requirements mean notice periods. The European Commission enforces it through standard contractual clauses.
Companies are advised to consider implications for business models and contract design given the immediate September 2025 applicability. The European Commission was due to publish non-mandatory standard contractual clauses for data processing service contracts by September 12, 2025.
The Digital Markets Act designates large platforms as gatekeepers when they serve as important gateways between businesses and consumers with dominant market positions. On November 18, 2025, the European Commission initiated three separate market investigations concerning cloud computing services. AWS and Microsoft Azure are under investigation for potential gatekeeper designation.
If investigations confirm gatekeeper status, these cloud services would be added to existing core platform designations for both companies. Areas of examination include interoperability barriers between cloud services, data access restrictions for business users, service tying and bundling practices, and potentially imbalanced contract terms.
Gatekeeper obligations include interoperability requirements, data access provisions, bundling restrictions, and fair contract terms. This complements the Data Act by targeting market concentration and competitive barriers at the platform level.
Relying on potential gatekeepers means navigating regulatory requirements from both the DMA and the Data Act. Both target switching barriers, but the DMA focuses on dominant platform behaviour while the Data Act provides universal switching rights.
Functional equivalence requires IaaS providers to ensure minimum functionality levels are maintained in new environments following cloud switching. Your workloads must operate with equivalent capabilities after migration without significant degradation in performance, features, or reliability. This applies specifically to Infrastructure as a Service under Data Act Article 29. These regulatory standards support the broader European digital sovereignty movement by establishing technical guardrails for platform migration.
The technical definition is minimum functionality preservation, not feature-for-feature parity. For IaaS, compute, storage, and networking must function equivalently post-migration. Both incumbent and receiving providers share responsibility for verification.
It’s different from data portability for PaaS and SaaS, which requires open interfaces and machine-readable formats instead.
Here’s the gap: there are no standardised testing protocols yet defined. You need contractual specification of methodology and acceptance criteria.
Any loss of functionality during migration, such as inability to use certain analytics features after export, must be transparently documented and objectively justified. Silent degradation is not acceptable.
Migration planning must include equivalence verification and parallel running for testing.
The US CLOUD Act from 2018 grants American authorities power to compel US-based service providers to provide access to data stored abroad, regardless of where it sits globally. Even if that data belongs to non-US persons and resides in EU data centres. This directly contradicts GDPR Article 48, which requires international agreements for foreign authority data access. For a comprehensive analysis of how these CLOUD Act conflicts create broader geopolitical compliance tensions, see our detailed risk assessment.
All US cloud providers—AWS, Azure, Google Cloud—remain subject to the CLOUD Act even with EU data centre locations.
GDPR Article 48 states that court orders from third countries are only valid if based on international agreements such as Mutual Legal Assistance Treaties. The CLOUD Act bypasses MLATs altogether.
This creates an unavoidable compliance conflict. Meet US legal obligations or meet EU data protection requirements. You can’t do both.
As long as a cloud provider is headquartered in the US or controlled by a US parent company, it remains subject to the CLOUD Act regardless of where data is stored. Microsoft’s own chief legal officer in France admitted under oath before the French Senate that the company cannot guarantee EU data is safe from US access requests. That’s a problem.
The European Data Protection Board has made clear that service providers subject to EU law cannot legally base data transfers to the US solely on CLOUD Act requests. GDPR violations carry penalties up to €20 million or 4% of global annual revenue, whichever is higher.
Risk mitigation strategies include customer-managed encryption, contractual safeguards, hybrid architectures, and EU provider alternatives. This drives demand for EU-owned cloud alternatives like Exoscale and Gaia-X initiatives.
Data Act Chapter VI applies to all data processing services including Infrastructure as a Service, Platform as a Service, Software as a Service, Storage as a Service, and Database as a Service.
The technical and pricing requirements vary significantly. For IaaS requirements, you’re looking at functional equivalence testing, workload migration support, and infrastructure parity. PaaS requirements include open interfaces with documentation, development platform portability, and API specifications. SaaS requirements mean structured machine-readable data export formats, complete data retrieval capabilities, and metadata inclusion.
Universal requirements applying to all types: egress fee elimination timeline, notice period maximums of two months, proportionate early termination fees, and transitional period provisions.
Cloud providers must ensure users can retrieve all digital assets including structured and unstructured data, metadata, and configurations in structured, machine-readable format.
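As a rough sketch, a structured, machine-readable export might bundle records, metadata, and configuration in a single JSON document. The field names and layout below are illustrative assumptions, not a schema the Data Act mandates:

```python
import json

# Hypothetical export bundle: data, metadata, and configuration travel
# together in one machine-readable document (field names are illustrative).
export_bundle = {
    "data": [
        {"id": 1, "type": "invoice", "payload": {"amount": 1200, "currency": "EUR"}},
    ],
    "metadata": {
        "schema_version": "1.0",
        "exported_at": "2026-03-01T00:00:00Z",
        "record_count": 1,
    },
    "configuration": {
        "retention_days": 365,
        "region": "eu-central",
    },
}

# A receiving provider should be able to round-trip this without manual rework.
serialised = json.dumps(export_bundle, indent=2)
restored = json.loads(serialised)
assert restored["metadata"]["record_count"] == len(restored["data"])
```

The point is completeness: the records, the metadata describing them, and the configuration needed to reconstruct service state all leave the platform together.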
Egress fees—data transfer charges for exporting data from cloud platforms—are permitted as “cost-covering charges” until January 12, 2027. They’re completely prohibited from January 12, 2027 onwards for all data processing services. For a detailed analysis of how egress fee elimination affects switching cost calculations and economic planning, see our ROI modelling guide.
Premium service fees for migration acceleration or data format conversion remain permissible as optional paid services. Providers must adjust pricing models before the 2027 deadline to maintain revenue without egress charges.
Currently, egress fees are a barrier to switching. Multi-thousand dollar charges for large data volumes are common. The regulatory timeline has the Data Act effective September 2025, but the egress fee ban is delayed until January 2027 for provider adjustment.
What’s prohibited: mandatory charges for data extraction and transfer during the switching process. What’s permitted: optional premium services like faster transfer speeds or format conversion assistance if the customer chooses.
SaaS companies must redesign revenue recognition. Cloud providers must find alternative monetisation. Compliance preparation means updating contracts by September 2025 with an egress fee phase-out plan and implementing new pricing by January 2027.
Between September 2025 and January 2027, egress fees must be “cost-covering” not punitive.
According to research, egress fees account for an average of 6% of organisations’ cloud storage costs. Cloud providers charge up to $0.09 per GB transferred out of storage. That adds up fast.
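A quick back-of-the-envelope calculation shows why. This sketch just multiplies volume by the $0.09/GB upper bound quoted above; real pricing is tiered and varies by provider:

```python
def egress_cost(gb_transferred: float, rate_per_gb: float = 0.09) -> float:
    """Estimated egress charge at a flat per-GB rate (illustrative only)."""
    return gb_transferred * rate_per_gb

# Moving 50 TB off a platform at the $0.09/GB upper bound:
gb = 50 * 1024  # 51,200 GB
print(f"${egress_cost(gb):,.2f}")  # roughly $4,600 for a single migration
```

For a data-heavy SaaS business, charges on that scale are exactly the switching barrier the 2027 prohibition is aimed at.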
Define minimum functionality requirements covering compute performance, storage capabilities, networking features, and availability levels required for workloads. Document testing methodology with measurable criteria before migration—benchmark performance, feature verification, reliability thresholds. Execute parallel running during transitional period (30 days to 7 months) to verify equivalent operation. Both incumbent and receiving providers must cooperate in good faith to facilitate testing and resolve compatibility issues. For detailed migration planning frameworks and vendor lock-in assessment methodologies, consult our strategic migration guide.
The transitional period enables parallel running for equivalence verification. This allows side-by-side comparison, gradual traffic shifting, and rollback capability if equivalence is not achieved.
Documentation requirements fall on both providers. The receiving provider must document capabilities, the incumbent must provide workload specifications, and both share test results. There’s no EU-mandated testing protocol yet, requiring contractual specification.
Challenges include proprietary service dependencies like AWS Lambda or Azure Functions, networking architecture differences, and monitoring tool migrations. Success criteria: workloads operate with comparable performance and reliability, no feature loss, and acceptable operational effort.
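In the absence of an EU-mandated testing protocol, those success criteria end up in the contract. A minimal sketch of what a contractually agreed equivalence check might look like; the metric names and the 10% tolerance are pure assumptions to be negotiated:

```python
def meets_equivalence(baseline: dict, migrated: dict,
                      max_degradation: float = 0.10) -> bool:
    """True if every higher-is-better metric stays within the agreed
    tolerance of the pre-migration baseline (tolerance is illustrative)."""
    for metric, base_value in baseline.items():
        migrated_value = migrated.get(metric)
        if migrated_value is None:
            return False  # a missing metric counts as silent degradation
        if migrated_value < base_value * (1 - max_degradation):
            return False
    return True

baseline = {"requests_per_sec": 1200, "availability_pct": 99.9}
migrated = {"requests_per_sec": 1150, "availability_pct": 99.9}
print(meets_equivalence(baseline, migrated))  # True
```

Note that silent degradation shows up here as a missing metric, which fails the check rather than passing quietly.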
Interoperability is a mandatory regulatory obligation under the Data Act. If a customer moves from one cloud provider to another, the new provider must be able to ingest exported data without requiring extensive manual reformatting.
Many cloud applications are built using provider-specific APIs, proprietary services, or orchestration layers requiring rearchitecting components or introducing abstraction layers. That’s the technical debt you need to work through.
Conduct a compliance assessment determining which services fall under Data Act scope (IaaS, PaaS, SaaS) and run a gap analysis against the current state. Revise contracts to incorporate standard contractual clauses with the mandatory terms: notice periods (max 2 months), transitional periods, switching charge disclosures, and data portability specifications. Implement the technical capabilities: functional equivalence support for IaaS, open interfaces for PaaS, machine-readable exports for SaaS. Update pricing models to prepare for the egress fee elimination roadmap towards full prohibition in January 2027.
Providers must disclose available switching procedures and technical limitations. Details of data structures and formats for exportable data are required.
Designated national supervisory bodies bear responsibility for enforcement with penalties including warnings, reprimands, orders for compliance, and fines. The European Commission maintains a publicly available register of penalties.
Ongoing obligations include maintaining switching capabilities, honouring notice periods and transitional periods, and conducting annual compliance reviews.
Proportionate charges are reasonable, cost-based early termination fees permitted during contract periods under EU Data Act standards. They must reflect actual costs incurred by the provider—unused capacity, operational expenses—rather than punitive penalties for switching. Different from switching fees which cover migration process costs (data extraction, technical assistance) and are increasingly restricted. The regulatory language creates ambiguity requiring explicit contractual definition and good faith negotiation.
Proportionate charges apply to early contract termination, while egress fees cover data transfer costs; the Data Act bans egress fees from January 2027.
No guidance exists on what a proportionate early termination fee would be. Some providers insist the termination fee is simply the full payment due for the remainder of a fixed term, accelerated on switching. Customers may object that proportionality implies some reduction.
Significant uncertainty exists about how courts will interpret these provisions. For now it’s a matter for negotiation with companies filling in gaps left by vague drafting of legislation.
Contractual specification is necessary. Regulatory ambiguity requires an explicit formula or methodology in service agreements to prevent disputes. Negotiation considerations include customer size and contract value, notice period length reducing impact, and alternative commitment structures.
Compliance risk: excessive charges could be challenged as anti-competitive lock-in contrary to Data Act objectives. Best practices mean transparent disclosure of calculation methodology, good faith negotiation, and consideration of customer migration timeline.
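Given that ambiguity, one pro-rata approach a contract might specify looks like the sketch below. This is a negotiating position, not a legal standard, and the 50% recovery rate is an assumption:

```python
def termination_charge(monthly_fee: float, months_remaining: int,
                       recovery_rate: float = 0.5) -> float:
    """Charge a fraction of the remaining contract value, reflecting actual
    costs (unused capacity, operations) rather than full acceleration."""
    return monthly_fee * months_remaining * recovery_rate

# Six months left on a 2,000/month contract:
print(termination_charge(2000, 6))  # 6000.0, versus 12,000 under full acceleration
```

Whatever the formula, disclosing it and its inputs in the agreement is what prevents the dispute.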
A key concern for cloud providers is whether the switching right undermines ability to recognise revenue. US GAAP is particularly sensitive to customer termination rights. You’ll need to talk to your accountant about this.
The Data Act provides 30 days standard transitional period for parallel running during cloud switching, extendable up to seven months for exceptional complexity. Transitional periods must be specified in contracts with clear technical and operational parameters defining parallel running scope. Negotiate based on workload complexity, data volumes, integration dependencies, testing requirements, and risk tolerance. Both incumbent and receiving providers must cooperate in good faith.
30 days is the default for straightforward migrations; up to 7 months for complex multi-service environments requiring extensive testing. Contractual specification requirements: define parallel running capabilities, specify support obligations, document cutover procedures, and clarify cost allocation.
Factors justifying longer periods: large data volumes requiring gradual migration, complex integration dependencies, mission-critical workloads requiring extensive testing, multi-region deployments, and regulatory requirements in financial services or healthcare.
Parallel running scope includes read-only replication versus full bidirectional sync, production traffic percentage shifting, monitoring and alerting coverage, and rollback procedures.
Provider obligations during transition: the incumbent must maintain service levels and provide migration support, the receiving provider must demonstrate functional equivalence, and both must remove obstacles.
Cost considerations: who pays for parallel infrastructure, data transfer costs during transition, and premium services for acceleration.
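The production traffic shifting mentioned above can be expressed as a simple schedule. The step percentages and even spacing are assumptions for illustration; real cutover plans are driven by monitoring results at each checkpoint:

```python
def traffic_schedule(days: int, steps=(5, 25, 50, 75, 100)):
    """Return (day, percent of traffic routed to the receiving provider)
    pairs, spreading the shift steps evenly across the transitional period."""
    interval = max(1, days // len(steps))
    return [(i * interval, pct) for i, pct in enumerate(steps, start=1)]

# A 30-day default transitional period:
for day, pct in traffic_schedule(30):
    print(f"day {day}: {pct}% of traffic to the new provider")
```

Each step is a checkpoint: if equivalence verification fails at 25%, rolling back costs far less than discovering the problem at 100%.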
Three periods are mandated: a maximum notice period of two months, a transitional period during which switching happens, set at a maximum of 30 days unless the customer or provider needs it longer, and a data retrieval period of minimum 30 days.
There’s an overriding obligation of good faith on both source and destination providers to make the switching process effective. There’s a general positive obligation on providers to remove technical barriers to switching. Not just “don’t obstruct”, but actively help.
Yes. The Data Act applies to data processing service providers offering services to customers in the EU regardless of provider location. If you serve EU customers with IaaS, PaaS, or SaaS offerings, switching obligations apply. Consider EU market presence, customer contracts, and enforcement jurisdiction when assessing applicability.
Non-compliance risks include contractual disputes with customers exercising switching rights, potential European Commission enforcement action, competitive disadvantage versus compliant providers, customer churn to alternatives offering easier switching, and reputational damage. France’s SREN Law includes fines up to 3% of annual global turnover, increased to 5% for certain repeat breaches. Germany’s draft implementation act permits fines up to 4% of annual global turnover or €5 million, whichever is higher. Early preparation is recommended given technical implementation complexity.
Yes. While egress fees are banned from January 2027 and proportionate charges are restricted, optional premium services remain permissible. These include migration acceleration, data format conversion assistance, dedicated technical support, and consultation—as long as customers can choose the free standard switching process instead.
Evaluate on multiple criteria: technical capabilities and feature maturity (hyperscalers typically stronger), CLOUD Act exposure and sovereignty concerns (EU providers have the advantage), switching ease and Data Act compliance posture, pricing including total cost with egress fees, and risk tolerance for US-EU regulatory conflicts. Microsoft 365 EU Data Boundary, Amazon’s European Sovereign Cloud, and Google’s Sovereign Controls provide an illusion of control while remaining subject to US legal demands. Search activity shows European alternatives averaging 2,400 monthly searches with a 660% year-over-year increase. Hybrid architectures can balance trade-offs.
GDPR Article 20 provides limited portability for personal data in structured, machine-readable formats. Data Act Chapter VI is broader: covers all data types (not just personal data), includes functional equivalence for IaaS, mandates provider cooperation, eliminates egress fees, and specifies transitional periods. The Data Act significantly expands on the GDPR portability foundation.
No. Functional equivalence specifically applies to IaaS services under Data Act Article 29. PaaS services must provide open, cost-free interfaces with documentation. SaaS services must provide structured, machine-readable data exports with metadata. Requirements differ by service model—review relevant Article provisions for your service type.
Options include selecting EU-owned providers avoiding CLOUD Act jurisdiction entirely, implementing customer-managed encryption where the provider cannot access keys, using hybrid architectures with sensitive data on EU infrastructure, negotiating contractual protections and notification provisions, or accepting residual risk with hyperscaler capabilities. No perfect solution exists—balance technical and legal trade-offs.
Mandatory terms include notice period specifications (max 2 months for termination), transitional period provisions (30 days to 7 months based on complexity), data portability formats and procedures, switching charge disclosures and calculation methodologies, functional equivalence commitments for IaaS, good faith cooperation obligations, and egress fee elimination roadmap to January 2027.
No. Data Act Article 28 caps notice periods at “no more than two months” to prevent excessive contractual lock-in regardless of customer agreement. This is a mandatory maximum that cannot be extended through bilateral negotiation. Providers must adapt business models to this constraint rather than relying on long notice periods.
Providers must redesign revenue models by January 2027 to maintain profitability without egress charges. Options include increased base subscription prices incorporating data transfer costs, consumption-based pricing on compute and storage usage, premium tier structures, value-added service monetisation, and operational efficiency improvements. Some providers are dropping exit fees well in advance of the January 2027 date. Start transition planning now to smooth customer communication.
For IaaS services: detailed workload specifications including compute, storage, and networking requirements, current performance baselines and benchmarks, dependency mappings and integration points, monitoring and alerting configurations, and acceptance criteria defining equivalent functionality. Both incumbent and receiving providers share documentation obligations to enable verification.
While the Data Act doesn’t specify penalty amounts directly, Article 28 requires “good faith cooperation” and prohibits obstacles to switching. Providers creating artificial barriers risk contractual breach claims, European Commission enforcement action, reputational damage affecting customer retention, and potential DMA gatekeeper sanctions if designated. Compliance requires genuine facilitation, not technical obstruction.
What Digital Sovereignty Means and Why European Technology Independence Matters

You’re hosting your data in Frankfurt. You’ve set up everything in AWS’s European data centres. Your data never leaves the EU. Sorted, right?
Not quite. There’s a jurisdictional catch. The CLOUD Act gives US authorities the power to access your data regardless of where it physically sits. The jurisdiction follows the provider’s legal home, not where their servers are.
Digital sovereignty is the framework for dealing with this. It’s built on three pillars: technical sovereignty (you control the infrastructure), data sovereignty (you control legal jurisdiction over your data), and operational sovereignty (you have strategic autonomy in your technology choices). This guide is part of our comprehensive European digital sovereignty movement resource, where we explore the strategic, regulatory, and practical dimensions of achieving technology independence.
The EuroStack initiative provides a seven-layer technology stack architecture for European independence. Gaia-X implements a federated data model with over 180 sectoral data spaces already up and running. The geopolitical drivers are real: over 80% infrastructure import dependency, regulatory conflicts between CLOUD Act and GDPR Article 48 (addressed in our EU Data Act compliance requirements guide), vendor lock-in risks.
So you need to understand sovereignty principles to assess your current exposure, evaluate European cloud provider alternatives, and implement hybrid architecture patterns that align with compliance requirements.
Here’s the formal definition: digital sovereignty is “the ability of a nation, organisation or individual to control and govern their own digital assets, infrastructure and data independently, free from undue external influence or dependency”.
In practice, it covers three things: control over your data flows, control over your IT systems and software, and control over your operational decision-making processes. Strategically, it’s about independence from foreign technological, economic, and political influence.
This matters for European tech companies because the US CLOUD Act creates a compliance conflict with GDPR. If US authorities issue a disclosure order to your cloud provider, that provider faces a nasty choice: comply with US law or face GDPR penalties of up to €20 million or 4% of global revenue. You’re stuck in the middle of that conflict. For a comprehensive analysis of these CLOUD Act exposure risks and geopolitical threats, see our detailed risk assessment.
The dependency statistics paint the picture. Over 80% of Europe’s digital infrastructure is imported from US or Chinese providers. This creates system-wide vulnerability to foreign policy decisions and vendor leverage. When one provider controls your infrastructure, they have leverage over your pricing, your terms, and your operational flexibility.
The business risks break down into three categories:
Jurisdictional exposure – foreign governments can access your data through legal mechanisms that bypass your local data protection laws.
Vendor leverage – lock-in costs make switching providers prohibitively expensive, giving vendors power over pricing and terms.
Regulatory compliance costs – post-Schrems II, organisations must conduct Transfer Impact Assessments evaluating risks whenever they use US providers. That’s an ongoing compliance burden.
The sovereignty framework gives you an assessment methodology. You can quantify your exposure by workload type and work out which systems need migration priority. 84% of decision-makers now consider digital sovereignty a factor in vendor selection. That’s not a political preference—it’s a risk management calculation.
The three-pillar framework breaks sovereignty into bits you can actually act on.
Technical Sovereignty means control over your digital infrastructure and software stack without foreign proprietary restrictions. This is about open-source tools for transparency, customisation, and reduced vendor lock-in. When you run Kubernetes, you can audit the code and avoid single-provider dependencies. When you run AWS ECS, you’re locked into Amazon’s proprietary implementation.
Data Sovereignty is legal control over your data, governed by the jurisdiction where it resides. This goes beyond just physical location. It includes data residency (controlling where your data physically sits) plus jurisdictional authority over who can access it and how it’s processed. GDPR Article 48 requires international agreements (MLAT process) for foreign data access requests. That protection matters.
Operational Sovereignty is the freedom to make independent operational decisions. This includes vendor choice, deployment methods, and data processing locations. Vendor lock-in threatens this through proprietary APIs and architectures that make switching prohibitively expensive. When you build on vendor-specific services, you lose the ability to negotiate or leave. Interoperable standards and open-source ecosystems give you strategic autonomy.
The three pillars interconnect. Technical sovereignty enables data sovereignty—if you control the infrastructure, you control where data sits and who can access it. Both support operational sovereignty—when you’re not locked in technically or jurisdictionally, you maintain strategic flexibility.
You can use this framework to assess your current position across each pillar. Where do you have control? Where are you exposed? That analysis tells you where to prioritise improvements.
Here’s a practical example. If you’re running everything on AWS using Lambda, DynamoDB, and API Gateway, you have zero technical sovereignty (proprietary services), questionable data sovereignty (US jurisdiction via CLOUD Act), and limited operational sovereignty (high switching costs). If you’re running Kubernetes on a European provider with PostgreSQL and open-source message queues, you’ve got technical sovereignty (portable stack), data sovereignty (EU jurisdiction), and operational sovereignty (low switching costs).
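That comparison can be made slightly more systematic with a rough self-assessment score per pillar. The 0-10 scores and the equal weighting below are illustrative assumptions, not a standardised methodology:

```python
# The three pillars of the sovereignty framework.
PILLARS = ("technical", "data", "operational")

def sovereignty_score(assessment: dict) -> float:
    """Average of 0-10 self-assessed scores across the three pillars
    (equal weighting is an assumption; weight to taste)."""
    return sum(assessment[p] for p in PILLARS) / len(PILLARS)

# Scores are illustrative, echoing the example stacks above:
serverless_on_us_hyperscaler = {"technical": 1, "data": 2, "operational": 2}
k8s_on_eu_provider = {"technical": 8, "data": 9, "operational": 8}

print(sovereignty_score(serverless_on_us_hyperscaler))  # low exposure score
print(sovereignty_score(k8s_on_eu_provider))            # high exposure score
```

The absolute numbers matter less than the gap between workloads: that gap tells you which migrations to prioritise.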
Understanding how the CLOUD Act undermines data sovereignty helps you assess your current exposure.
The CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 2018) is US federal law giving extraterritorial power to demand data from US-based providers regardless of where it’s stored.
Here’s how it works: jurisdiction follows the provider, not the server location. Your customer data sitting in AWS Frankfurt data centres remains subject to US legal authority because Amazon is a US company.
The CLOUD Act bypasses the Mutual Legal Assistance Treaty (MLAT) process that GDPR Article 48 requires for foreign government data access. US authorities can issue disclosure orders directly to US providers without international agreements or notification to European governments. The provider gets the order, complies, and your data goes to US authorities. The comity challenge provision theoretically allows providers to contest orders conflicting with foreign laws, but it’s rarely successful, discretionary, and complex.
This creates a compliance dilemma for your organisation. If the US issues a disclosure order, your cloud provider complies with US law. But that transfer might violate GDPR, exposing you to penalties. You didn’t make the decision, but you face the regulatory consequences.
The data residency marketing from US hyperscalers doesn’t fix this. AWS EU Data Boundary and Microsoft EU sovereign cloud offerings provide data residency (storage in EU data centres) but don’t change jurisdictional authority. The provider remains a US company subject to US law.
European cloud providers—Exoscale, OVHcloud, Deutsche Telekom—reduce this exposure through EU jurisdiction. They’re European companies subject to European law. US authorities could still request data through MLAT diplomatic channels, but that process gives European governments oversight and refusal ability.
Post-Schrems II, organisations must conduct Transfer Impact Assessments evaluating CLOUD Act risks when using US providers. The European Data Protection Board made clear that service providers subject to EU law cannot legally base data transfers to the US solely on CLOUD Act requests.
The risk varies by data type. If you’re processing health data, financial information, or personal identifiable information, your exposure is higher. If you’re running development environments with synthetic data, your risk is lower. The assessment needs to be workload-specific.
Data residency refers to the geographic location where data physically resides—specific data centres. Data sovereignty focuses on legal jurisdiction and control authority—who can access that data under what legal framework.
Residency addresses “where”. Sovereignty addresses “who controls”.
US hyperscalers market “EU Data Boundary” offerings that emphasise residency without sorting out jurisdictional control. The data sits in European data centres, but the provider remains subject to US law.
European cloud providers offer both: residency (EU data centres) and sovereignty (EU legal jurisdiction).
When you’re evaluating providers, assess the provider’s legal domicile and parent company nationality, not just data centre locations. A US company operating EU data centres remains subject to US law. A European company operating EU data centres gives you both residency and sovereignty.
You need both for genuine protection. Residency handles latency and compliance checkboxes. Sovereignty handles jurisdictional independence.
EuroStack is an architectural framework for European technology independence. It’s a comprehensive seven-layer technology stack designed to achieve autonomy across the digital value chain. As we detail in our sovereignty landscape overview, EuroStack represents Europe’s most ambitious response to platform dependency.
The scope is ambitious: approximately €300 billion investment over one decade to reduce current dependency on US and Chinese technology. The initiative addresses Europe’s technological lag—70% of foundational AI models originate in the United States.
The seven layers define the complete stack:
Layer 1: Critical Resources – energy and raw materials for technology manufacturing
Layer 2: Chips – semiconductor manufacturing reducing dependency on Asian and US fabrication
Layer 3: Networks – pan-European connectivity infrastructure
Layer 4: IoT & Devices – trusted device systems
Layer 5: Cloud Infrastructure – secure platform services
Layer 6: Software – open-source application frameworks
Layer 7: Data and AI – AI models and federated data exchange
Gaia-X operates as a specific implementation of EuroStack’s data infrastructure layer. The relationship is hierarchical: EuroStack defines the overall architecture, Gaia-X implements the federated data model within that architecture.
Current adoption includes German state governments migrating infrastructure, the automotive sector implementing Catena-X, healthcare providers implementing GAIA-X Health, and the energy sector implementing EONA-X data spaces. These are production deployments, not pilot projects.
For technology leaders, EuroStack provides a roadmap showing which European capabilities will be production-ready when, helping you assess vendor lock-in risks and migration timing.
Gaia-X is a European federated secure data infrastructure project enabling data sharing while users retain control. The project operates with over 180 data spaces operational as of 2025.
The federated architecture is the key innovation. Rather than a centralised hyperscaler model where one provider controls everything, Gaia-X uses decentralised governance where multiple independent nodes interconnect via open standards.
Gaia-X is GDPR-compliant by design. Data sovereignty requirements are built into the architecture.
Sectoral data spaces provide industry-specific implementations:
Catena-X (automotive) enables secure data exchange between manufacturers and suppliers
GAIA-X Health (healthcare) provides patient data exchange while maintaining privacy
EONA-X (energy) coordinates grid data and renewable energy systems
Here’s how it works technically: participants maintain sovereignty over their data while enabling controlled sharing through federated access policies and encryption. You define who can access your data, under what conditions, for what purposes. The federation enforces those policies across nodes.
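To make the idea concrete, here is a minimal sketch of how such a federated access policy might be evaluated. The policy fields and participant names are hypothetical illustrations, not an actual Gaia-X schema or API:

```python
from dataclasses import dataclass

@dataclass
class AccessPolicy:
    # Hypothetical policy fields for illustration; not a real Gaia-X schema.
    allowed_participants: set
    allowed_purposes: set
    required_jurisdiction: str = "EU"

def may_access(policy: AccessPolicy, participant: str,
               purpose: str, jurisdiction: str) -> bool:
    """Grant access only when every policy condition holds."""
    return (
        participant in policy.allowed_participants
        and purpose in policy.allowed_purposes
        and jurisdiction == policy.required_jurisdiction
    )

policy = AccessPolicy(
    allowed_participants={"supplier-a", "oem-b"},
    allowed_purposes={"quality-analytics"},
)
print(may_access(policy, "supplier-a", "quality-analytics", "EU"))  # True
print(may_access(policy, "supplier-a", "marketing", "EU"))          # False
```

The point is that the data holder, not a central operator, defines the conditions; each federation node enforces the same policy before releasing data.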
Open-source foundations and interoperable standards enable vendor independence. You’re not locked into a single provider’s proprietary APIs. If you don’t like one node operator, you can move to another while maintaining access to the federation.
Sectoral data spaces provide industry-specific collaboration infrastructure without surrendering control. Catena-X gives automotive companies supply chain data exchange capabilities. GAIA-X Health provides healthcare patient data interoperability without centralised control.
There are 10 major European cloud providers: OVHcloud, STACKIT, Cyso Cloud, Open Telekom Cloud, IONOS, Scaleway, UpCloud, Exoscale, ELASTX, and Nine. All offer both residency and sovereignty, reducing CLOUD Act exposure. For a detailed comparison of European cloud alternatives and platform independence options, see our comprehensive evaluation guide.
Since 2025, customers have actively asked to use European cloud providers. That’s a shift from theoretical concern to procurement criteria.
The major providers:
OVHcloud (France) – mature provider with 43 data centres across 9 countries. Full IaaS, managed Kubernetes, databases, and PaaS services.
STACKIT (Germany) – officially launched in 2024, hosts SAP RISE. Strong enterprise workload focus.
Open Telekom Cloud (Germany) – operated by Deutsche Telekom AG on the company’s own infrastructure.
Exoscale (Austria) – specialises in DBaaS including Kafka and OpenSearch.
Scaleway (France) – offers managed AI and serverless services.
Interest in “European alternatives” has risen 660% year over year. The portal european-alternatives.eu tracked 384 alternatives across 58 categories and saw 1,100% traffic growth in 2025. That’s mainstream procurement activity, not fringe interest.
Key things to evaluate: EU jurisdiction (confirm European legal domicile), native GDPR compliance, technical capabilities matching workload requirements, relevant certifications (Gaia-X Labels, ISO 27001, SOC 2), transparent pricing, and industry-specific sectoral data space integration.
Common services across providers include IaaS, managed Kubernetes, managed databases, PaaS services, and AI and serverless capabilities. The capability gap with US hyperscalers has narrowed. You’re not sacrificing functionality for sovereignty.
Genuine sovereign solutions meet five standards: EU jurisdiction, open-source transparency, strong encryption, enterprise identity integration, and sustainable vendor ecosystem. Use those as evaluation filters.
Many organisations implement hybrid strategies: European providers for sensitive data (customer PII, financial records), US hyperscalers for non-sensitive workloads (development environments, content delivery). That risk-based approach lets you optimise for sovereignty where it matters.
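A hybrid placement rule of this kind can be expressed as a simple classifier. The sensitivity categories and provider pool names below are assumptions to adapt, not regulatory classifications:

```python
# Illustrative sensitivity tiers; adapt these sets to your own data inventory.
SENSITIVE_DATA = {"customer_pii", "financial_records", "health_data"}

def assign_provider_pool(workload: str, data_types: set) -> str:
    """Route workloads holding sensitive data to EU-sovereign providers."""
    if data_types & SENSITIVE_DATA:
        return "eu-sovereign"       # e.g. a European provider with EU domicile
    return "any-jurisdiction"       # a US hyperscaler may be acceptable

print(assign_provider_pool("billing", {"financial_records"}))    # eu-sovereign
print(assign_provider_pool("ci-pipeline", {"synthetic_test"}))   # any-jurisdiction
```

Encoding the rule makes the risk-based approach auditable: every workload’s placement decision can be traced back to its declared data types.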
The maturity has reached production-ready status. German state governments are migrating infrastructure, and the automotive sector is implementing Catena-X on Gaia-X.
Yes, if your provider is a US company. The CLOUD Act jurisdiction follows the provider’s legal domicile, not server location. AWS Frankfurt data centres remain subject to US legal authority because Amazon is a US company. European cloud providers reduce this exposure through EU jurisdiction.
The concern is legitimate but doesn’t require wholesale migration. High-risk workloads like PII and healthcare data have greater CLOUD Act exposure and require Transfer Impact Assessments. Many organisations implement hybrid strategies: European providers for sensitive data, US hyperscalers for non-sensitive workloads.
In practical terms, digital sovereignty means control over data flows, IT infrastructure, and operational decisions without foreign dependency. Focus on the operational aspects: technical sovereignty (infrastructure control), data sovereignty (jurisdictional authority), and operational sovereignty (vendor independence).
Azure complies with GDPR data protection requirements but remains subject to CLOUD Act extraterritorial jurisdiction as a US company. Microsoft’s EU Data Boundary provides data residency but not data sovereignty. Post-Schrems II, organisations must conduct Transfer Impact Assessments evaluating CLOUD Act risks for Azure usage.
Three drivers: CLOUD Act compliance conflict with GDPR creating regulatory exposure, vendor lock-in limiting operational sovereignty, and strategic dependency on foreign infrastructure. EuroStack and Gaia-X initiatives provide European alternatives through €300 billion investment in domestic capabilities.
EuroStack is a comprehensive seven-layer technology stack framework requiring €300 billion investment over a decade. Gaia-X is a specific federated data infrastructure initiative operating as one component within EuroStack’s data layer.
Major providers include Exoscale (Austria), OVHcloud (France), Deutsche Telekom (Germany), and Scaleway (France). All offer IaaS, managed Kubernetes, databases, and PaaS services comparable to hyperscaler offerings.
No. AWS Sovereign Cloud and Microsoft EU Data Boundary provide data residency but don’t resolve jurisdictional authority. The providers remain US companies subject to CLOUD Act extraterritorial reach.
Customer-managed encryption provides additional technical control but doesn’t override legal jurisdiction. CLOUD Act disclosure orders could compel the provider to deliver encrypted data to US authorities. Encryption is valuable but not a substitute for jurisdictional sovereignty.
The EuroStack roadmap spans approximately one decade (2025-2035) with phased delivery. Current maturity varies by layer: cloud infrastructure has production European providers today (OVHcloud, STACKIT, Exoscale); the chips layer requires multi-year investment; data and AI capabilities are operational through Gaia-X sectoral data spaces.
Understanding European Digital Sovereignty and the Movement Toward Independent Cloud Infrastructure

European digital sovereignty provides a strategic framework for maintaining operational control over data location, access rights, and platform independence within European legal jurisdictions. This guide provides comprehensive coverage of the sovereignty landscape, from foundational concepts through practical implementation.
Whether you’re responding to September 2025 Data Act switching mandates, reducing vendor lock-in vulnerabilities, or building long-term technology resilience, this hub connects you to detailed analysis and actionable frameworks across seven focused articles addressing each decision stage from awareness through execution.
You’ll learn how to navigate EU regulatory requirements like the Data Act and Digital Markets Act, assess geopolitical risks including CLOUD Act exposure, evaluate European platform alternatives, calculate migration costs and ROI, and execute phased sovereignty adoption. The movement toward European digital independence has grown from aspirational policy to operational necessity, driven by conflicts between US and EU data protection laws, vendor lock-in economics that trap organisations in proprietary ecosystems, and geopolitical tensions that expose platform dependencies.
This pillar page organises everything you need to understand, evaluate, and implement digital sovereignty strategies. Use the navigation throughout to access detailed guides on specific topics, from fundamental concepts through technical deployment procedures.
European digital sovereignty means maintaining operational control over where your data resides, who can legally access it, and which platforms your business depends on—within European legal jurisdictions free from US extraterritorial oversight. It matters because current dependency on AWS, Azure, and Google Cloud exposes European organisations to three distinct risks: CLOUD Act data access demands, vendor lock-in economics, and potential geopolitical service disruptions. Sovereignty strategies address these through regulatory protection and platform diversification.
Digital sovereignty provides three distinct forms of control. Data location governance means knowing where information physically resides—not just “somewhere in the cloud” but specific jurisdictions with defined legal protections. Access rights management determines who holds legal authority to demand data disclosure, preventing scenarios where complying with one government’s laws violates another’s requirements. Platform independence avoids proprietary API lock-in that prevents migration, ensuring you retain the technical ability to switch providers when business needs change.
The movement accelerated following CLOUD Act enforcement cases demonstrating that physical EU data centre location doesn’t prevent US government access to data stored by American companies. This creates GDPR compliance conflicts when US and EU legal requirements contradict each other.
Over 80% of Europe’s digital infrastructure and technologies are imported, creating systemic dependencies that undermine regional innovation capacity. European companies represent just 7% of global research spending on software and internet technologies. This dependency exposes organisations to geopolitical disruption scenarios including service withdrawal threats, pricing weaponisation, and selective enforcement used as diplomatic leverage.
For technology companies, sovereignty provides defensive insurance against these scenarios while enabling compliance with EU regulations mandating platform switching capabilities and interoperability standards. The alternative, as sovereignty advocates frame it, is digital colonialism—European industries hollowed out, citizens under foreign surveillance, critical infrastructure controlled by entities operating under foreign jurisdictional authority.
The EuroStack initiative proposes comprehensive European technology independence requiring seven infrastructure layers from physical data centres through AI sovereignty stacks, with substantial investment requirements over the next decade. While this macro-level investment goal exceeds most organisations’ direct scope, it provides context for the regulatory and technical developments reshaping cloud economics and platform choices.
Learn more: What Digital Sovereignty Means and Why European Technology Independence Matters provides comprehensive coverage of sovereignty principles, the EuroStack architecture, Gaia-X federated model, and digital colonialism framework positioning sovereignty as pragmatic risk management. For detailed analysis of the threats driving this movement, see Evaluating CLOUD Act Exposure and Geopolitical Risks in Technology Platform Dependencies.
The EU Data Act restructures cloud economics by mandating functional equivalence standards (ensuring workloads perform identically on destination platforms) for migrations, eliminating egress fees completely by January 2027, requiring proportionate switching charges, and establishing transitional periods for complex migrations. These provisions transform cloud relationships from lock-in-dependent economics to portability-based competition, directly targeting vendor barriers that previously prevented sovereignty adoption.
September 2025 marks the deadline when cloud providers must implement Data Act switching procedures including data export capabilities, migration support obligations, and compatibility testing protocols that make platform independence technically and economically feasible. The regulation entered force January 11, 2024, giving providers time to develop compliant processes while organisations assess current lock-in exposure and plan sovereignty strategies.
Functional equivalence requirements create legal standards for migration quality. Providers must validate that workloads perform identically on destination platforms before switching completion. This means performance benchmarks, feature completeness checks, and integration compatibility testing become mandatory, protecting customers from the degraded post-migration experiences that historically discouraged platform changes. The standard applies specifically to Infrastructure-as-a-Service, where technical switching feasibility is highest.
The egress fee elimination timeline changes ROI calculations for sovereignty investments. Currently, cloud providers charge substantial data export fees—often thousands to millions of euros for large-scale transfers—creating economic penalties that trap organisations even when alternative platforms offer better pricing or capabilities. Full enforcement on January 12, 2027, removes these switching costs entirely for data transfer, though proportionate charges reflecting actual technical migration effort remain permissible.
Proportionate charges mean providers can recover reasonable costs for migration assistance, format conversion, and extended support periods, but cannot charge strategic deterrence fees designed to prevent customer departure. The standard creates regulatory oversight for switching economics, allowing customers to challenge excessive charges and leverage provider non-compliance to negotiate better terms.
Transitional periods recognise that complex workloads require extended migration timelines. Standard 30-day transitions can extend to seven months in exceptional circumstances involving technical complexity, compliance requirements, or operational continuity needs. During these periods, providers maintain support obligations, preventing scenarios where customers face service withdrawal before migrations complete.
The Data Act complements broader regulatory pressure from the Digital Markets Act, under which AWS, Azure, and Google Cloud are being examined as potential gatekeepers subject to interoperability mandates and non-discriminatory access requirements. Combined enforcement reshapes cloud economics from lock-in dependency toward portability-based competition.
Learn more: Navigating EU Data Act and Digital Markets Act Cloud Compliance Requirements provides authoritative regulatory interpretation covering switching procedure deadlines, functional equivalence standards, proportionate charges calculation, and CLOUD Act conflict resolution. Once you understand your obligations, Assessing Cloud Vendor Lock-in and Planning Strategic Migration to European Platforms guides you through quantifying dependencies and planning migration approaches.
US platform dependency exposes European organisations to CLOUD Act extraterritorial data access, service disruption scenarios driven by geopolitical tensions, and compliance conflicts where European data protection laws contradict American jurisdictional claims. These risks manifest as legal exposure when complying with US demands violates GDPR, operational vulnerability from platform withdrawal threats, and strategic leverage loss when vendors control critical infrastructure.
The CLOUD Act enables US government access to any data controlled by American companies regardless of where servers physically operate. Enacted in 2018, the Clarifying Lawful Overseas Use of Data Act allows US authorities to demand data from US-based service providers with warrants, bypassing Mutual Legal Assistance Treaties that traditionally governed cross-border data access. Physical EU data location provides no protection—jurisdiction follows ownership, making European data centres operated by US companies subject to US law despite server location. This creates scenarios where complying with US demands violates GDPR Article 48, which requires foreign authorities to use international agreements for accessing EU data.
Companies face impossible compliance scenarios. Refusing US government data demands risks legal penalties in the United States, including contempt charges and operational restrictions. Complying with those demands violates GDPR requirements protecting European citizens’ privacy rights, exposing organisations to regulatory enforcement including substantial fines and reputational damage.
Recent geopolitical tensions demonstrate how technology platforms become weaponised during policy disputes. Historical precedents show service degradation, selective enforcement, and withdrawal threats used as diplomatic leverage. These risks are particularly acute for organisations in regulated industries like finance and healthcare, where operational continuity and data confidentiality represent existential requirements rather than convenience preferences.
Digital colonialism frameworks describe power imbalances where European digital infrastructure operates under foreign control, creating strategic autonomy costs beyond immediate economic dependency. This dependency creates strategic vulnerabilities including pricing leverage loss, innovation roadmap misalignment with European needs, and inflexible service level commitments. When trapped by proprietary dependencies, you lose ability to negotiate competitive pricing, accept unfavourable terms including price increases and service changes, and face strategic vulnerability when business-critical infrastructure sits within single-vendor control.
Risk assessment requires modelling probability-weighted scenarios. What’s the likelihood of US government data demands affecting your specific customer data? What would service disruption cost in operational continuity, customer trust, and regulatory compliance? How does vendor lock-in constrain strategic flexibility when technology needs evolve? These questions frame sovereignty not as aspirational policy commitment but as insurance premium against tail risks that prudent risk management addresses.
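These questions can be turned into a rough expected-loss calculation. All probabilities and impact figures below are illustrative assumptions, not benchmarks; substitute your own estimates:

```python
# Probability-weighted scenario modelling; every figure is an assumption.
scenarios = [
    # (scenario, annual probability, estimated impact in EUR)
    ("US data demand affecting customer data", 0.02, 2_000_000),
    ("Geopolitical service disruption",        0.01, 5_000_000),
    ("Lock-in blocks a needed platform change", 0.10,   500_000),
]

expected_annual_loss = sum(p * impact for _, p, impact in scenarios)
print(f"Expected annual loss: EUR {expected_annual_loss:,.0f}")
```

Comparing that figure against the annual cost of a sovereignty programme is the “insurance premium” framing in quantitative form.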
For some organisations, these risks justify immediate sovereignty investment. Government contractors, defence industry participants, and handlers of classified information face regulatory mandates requiring European jurisdiction. For others, gradual sovereignty adoption through incremental platform migration or multicloud risk distribution provides pragmatic middle ground between full dependency and complete independence.
Learn more: Evaluating CLOUD Act Exposure and Geopolitical Risks in Technology Platform Dependencies provides strategic risk analysis with scenario planning, risk modelling frameworks, and insurance value quantification helping you determine when sovereignty investment is justified. To understand which alternatives exist, explore Comparing European Cloud Providers and Open Source Alternatives to US Platforms.
European IaaS/PaaS alternatives include OVHcloud (French), STACKIT (German), Deutsche Telekom Cloud (enterprise-focused), and IONOS (SMB-oriented), offering reduced CLOUD Act exposure through European legal jurisdiction despite smaller service portfolios compared to AWS/Azure. Open-source sovereignty options include NextCloud (file sharing), Mattermost (team collaboration), and LibreOffice (productivity), providing transparency-based trust through auditability plus European hosting control. Gaia-X federated implementations like Catena-X (automotive) and EONA-X (energy) demonstrate sector-specific data spaces operational in 2025.
European cloud providers emphasise compliance capabilities and jurisdictional protection over feature completeness. Native GDPR compliance, Data Act switching readiness, and immunity to CLOUD Act demands make them suitable for sovereignty-prioritising workloads even when service portfolios lag AWS maturity levels. OVHcloud operates 43 data centres across nine countries, providing mature infrastructure with substantial European footprint. STACKIT launched officially in 2024 with high momentum, hosting SAP RISE and demonstrating enterprise-scale capability.
Common services across European providers include Infrastructure-as-a-Service covering virtual machines, storage, and networking; managed Kubernetes and containers for modern application deployment; managed databases supporting PostgreSQL, MySQL, and MongoDB; higher-level Platform-as-a-Service offerings; and emerging AI and serverless capabilities. The breadth doesn’t match AWS’s service catalogue, but covers core requirements for most workloads.
Open-source platforms enable sovereignty through transparency and hosting flexibility. NextCloud provides file sync and sharing replacing Google Drive and Dropbox. Mattermost offers team collaboration alternative to Slack and Microsoft Teams, supporting on-premises hosting with integration capabilities through webhooks and APIs. LibreOffice delivers productivity suite functionality, though workflow adaptation and macro compatibility require migration planning.
Open-source platforms offer transparency advantages through auditability. When you can audit source code, you verify the absence of backdoors, understand data handling practices, and ensure compliance with sovereignty requirements through technical validation rather than vendor assurances.
Real-world implementations prove European alternatives viable at scale. Gaia-X federated data spaces operational across automotive supply chains through Catena-X and energy networks via EONA-X provide concrete evidence that sovereignty strategies deliver functional infrastructure, not just policy aspirations. These sector-specific implementations maintain distributed sovereignty control while enabling cross-organisation data flows, showing federated architecture works beyond theoretical frameworks.
European AI alternatives including Mistral AI and OpenEuroLLM address sovereignty concerns in artificial intelligence layers. As foundational AI models originate predominantly in the United States, European alternatives provide jurisdictional control over training data, model weights, and inference infrastructure, though capabilities lag OpenAI and Anthropic offerings in some areas.
Feature gaps exist. European platforms offer smaller service portfolios, fewer managed services, less mature ecosystems, and smaller developer communities compared to AWS, Azure, and Google Cloud. These trade-offs matter—you gain sovereignty and compliance advantages while accepting reduced convenience and potentially higher operational burden. The calculus depends on whether your workloads prioritise sovereignty over feature breadth.
Learn more: Comparing European Cloud Providers and Open Source Alternatives to US Platforms delivers evidence-based comparative analysis with performance benchmarks, technical specifications, real-world migration case studies, and Gaia-X implementation details. Before selecting platforms, use Assessing Cloud Vendor Lock-in and Planning Strategic Migration to European Platforms to evaluate your current dependencies and determine which migration approach suits your situation.
Vendor lock-in assessment requires inventorying proprietary APIs lacking standard alternatives, calculating data export restrictions in formats and tooling, quantifying egress fees at current data volumes, identifying compatibility gaps between source and destination platforms, and estimating switching timeline complexity. This analysis reveals migration barriers, cost drivers, and dependency depth, informing whether incremental sovereignty adoption, complete platform replacement, or multicloud risk distribution best fits your situation.
Proprietary API inventory identifies technical dependencies creating lock-in. AWS Lambda, Azure Functions, and Google Cloud-specific services without standardised equivalents require replacement during migration, not simple workload transfer. Integration patterns including API Gateway configurations, IAM policies, and managed database specifics need reconfiguration. The depth of proprietary service usage directly correlates with switching difficulty—shallow dependencies migrate cleanly, deep integration demands substantial architecture changes.
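One cheap way to gauge that depth is to count provider-specific resources in your infrastructure-as-code. The Terraform snippet below is a hypothetical example; in practice you would read your real `.tf` files:

```python
import re
from collections import Counter

# Hypothetical Terraform fragment standing in for a real configuration.
terraform = """
resource "aws_lambda_function" "etl" {}
resource "aws_s3_bucket" "data" {}
resource "aws_dynamodb_table" "sessions" {}
resource "aws_s3_bucket" "logs" {}
"""

# Count provider-specific resource types as a rough lock-in signal.
counts = Counter(re.findall(r'resource "(aws_[a-z0-9_]+)"', terraform))
print(counts.most_common())
```

Standard compute and storage resources migrate cleanly; a long tail of serverless and managed-service resource types signals architecture work ahead.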
Current egress fee calculation at realistic data volumes demonstrates economic lock-in severity. A typical migration involving terabytes of data often costs thousands to tens of thousands in data transfer charges at current AWS, Azure, and Google Cloud pricing. These fees intentionally discourage switching by making migration economically punitive even when destination platforms offer better ongoing costs. Data Act elimination January 2027 changes this calculation for future switching windows, but planning migrations before that deadline requires including substantial egress costs in ROI models.
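Tiered egress pricing makes this straightforward to model. The tier boundaries and per-GB prices below are illustrative assumptions, not any provider’s current list prices:

```python
def egress_cost(total_gb: float, tiers) -> float:
    """Tiered egress pricing: tiers is a list of (tier_size_gb, price_per_gb)."""
    cost, remaining = 0.0, total_gb
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Illustrative tiers: first 10 TiB, next 40 TiB, then everything above.
tiers = [(10_240, 0.09), (40_960, 0.085), (float("inf"), 0.07)]
print(f"50 TB egress: ${egress_cost(51_200, tiers):,.2f}")
```

Running the same model with the post-2027 assumption (all prices zero) shows how much the Data Act shifts the switching calculus.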
Compatibility gap analysis compares feature completeness between European alternatives and current US platforms. Which workloads migrate cleanly because they use standard compute and storage? Which require architecture changes to replace proprietary service dependencies? This assessment guides phased migration sequencing that prioritises low-risk transitions first, building expertise and organisational confidence before committing critical systems to European platforms.
Data export restrictions beyond egress fees create technical barriers. Proprietary formats, incomplete metadata export, and missing relationship information prevent functional equivalence on destination platforms. CRM systems may export basic contact details but not full relationship histories or automation rules. Understanding what data portability actually means for your specific service configuration determines migration feasibility regardless of cost considerations.
Process and user experience lock-in emerges from organisational familiarity with tool interfaces. Users become deeply familiar with specific platforms, creating productivity drops during switches that economic analyses overlook. Training requirements, workflow adaptation, and temporary efficiency losses during transition periods represent hidden costs requiring attention in migration planning.
Risk assessment frameworks help quantify lock-in exposure. On a scale measuring technical dependency depth, economic switching costs, and operational continuity requirements, where does your current platform dependency sit? High lock-in risk with high sovereignty requirements argues for immediate migration planning. Low lock-in risk suggests maintaining current arrangements while monitoring regulatory developments. Mixed scenarios benefit from incremental approaches starting with easiest workloads.
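A weighted score makes that triage repeatable. The scores, weights, and threshold below are hypothetical defaults to calibrate against your own environment:

```python
# Hypothetical 1-5 scores and weights; calibrate both to your organisation.
scores  = {"technical_dependency": 4, "economic_switching_cost": 3,
           "operational_continuity": 5}
weights = {"technical_dependency": 0.4, "economic_switching_cost": 0.3,
           "operational_continuity": 0.3}

risk = sum(scores[k] * weights[k] for k in scores)
action = "plan migration now" if risk >= 3.5 else "monitor and revisit"
print(f"Lock-in risk {risk:.1f}/5 -> {action}")
```

Scoring each workload separately, rather than the organisation as a whole, is what enables the incremental easiest-first sequencing described above.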
Learn more: Assessing Cloud Vendor Lock-in and Planning Strategic Migration to European Platforms provides actionable methodology for quantifying dependencies, phased migration frameworks, and functional equivalence testing protocols. For financial analysis of your migration decision, see Calculating Cloud Migration Costs and Modelling Return on Investment for Sovereignty.
Migration requires five implementation phases:
Phase 1: Sovereignty risk assessment – quantify current lock-in and CLOUD Act exposure
Phase 2: Platform selection and compatibility analysis – evaluate European alternatives against technical requirements
Phase 3: Functional equivalence testing – validate performance benchmarks and feature completeness
Phase 4: Phased workload migration – start with low-risk file sharing and collaboration before critical infrastructure
Phase 5: Post-migration validation – confirm identical functionality and document compliance
This methodology balances sovereignty goals with operational stability through incremental adoption allowing skill development and risk mitigation.
Functional equivalence testing creates acceptance criteria ensuring migrated workloads perform identically on destination platforms. This includes performance benchmarking measuring latency, throughput, and reliability metrics; feature completeness verification ensuring all capabilities present; integration testing validating API compatibility and data flow continuity; and rollback preparation for scenarios where equivalence standards aren’t met. The Data Act makes functional equivalence a legal requirement for IaaS switching, but adopting it as internal standard for all migrations protects operational continuity.
Incremental migration sequences workloads by risk and complexity. Starting with file storage (Google Drive to NextCloud) and team collaboration (Slack to Mattermost) builds expertise before addressing databases, compute infrastructure, and proprietary service replacements. This approach allows learning from early phases, developing team capabilities with European platform tooling, and demonstrating success that justifies broader organisational commitment.
Early workload selection prioritises visibility over criticality. Choose systems where successful migration proves sovereignty viability to stakeholders without risking business continuity. Board communications, research environments, and development infrastructure provide high-impact demonstrations while maintaining production system stability. These pilot implementations test migration procedures, validate European platform capabilities, and identify unexpected challenges before committing critical workloads.
Data export procedures differ by provider and must comply with Data Act portability requirements. AWS S3 transfer tools, Azure Blob Storage interfaces, and Google Cloud transfer services each implement proprietary export mechanisms. Extracting not just data but metadata, configurations, and relationship information needed for functional equivalence requires provider-specific procedures, documented through audit trails for compliance verification. Post-January 2027, providers must standardise these processes, but current migrations face vendor-specific complexity.
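Whatever the provider-specific export tooling, the audit trail itself can be generated uniformly. This sketch checksums an export directory into a manifest; the demo directory merely stands in for a real provider export:

```python
import hashlib
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def export_manifest(export_dir: Path) -> dict:
    """Checksum every exported file so the migration leaves an audit trail."""
    files = [
        {
            "path": str(f.relative_to(export_dir)),
            "sha256": hashlib.sha256(f.read_bytes()).hexdigest(),
            "bytes": f.stat().st_size,
        }
        for f in sorted(export_dir.rglob("*")) if f.is_file()
    ]
    return {"exported_at": datetime.now(timezone.utc).isoformat(),
            "files": files}

# Demo against a throwaway directory standing in for a provider export.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "customers.csv").write_text("id,name\n1,Acme\n")
    manifest = export_manifest(Path(d))
    print(json.dumps(manifest, indent=2))
```

Re-running the same manifest on the destination side and comparing checksums gives the compliance verification evidence the Data Act’s documentation expectations point toward.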
Skills requirements vary by implementation approach. Managed European cloud services like OVHcloud and STACKIT need capabilities similar to AWS/Azure operations—infrastructure management, monitoring, incident response—though platform-specific tooling differs. Open-source self-hosting demands server administration, backup management, security patching, and troubleshooting expertise that SaaS alternatives historically provided. Federated architecture through Gaia-X introduces distributed system complexity requiring integration pattern knowledge.
Organisations lacking internal expertise can pursue consulting partnerships, managed sovereignty services, or hybrid approaches combining professional implementation support with internal operational ownership. The skills gap represents a genuine migration barrier, justifying incremental adoption over compressed timelines that demand immediate competency across unfamiliar technology stacks.
Learn more: Implementing Data Act Switching Procedures and Deploying European Infrastructure provides technical implementation blueprint with step-by-step procedures, NextCloud and Mattermost deployment guides, and build-versus-buy decision frameworks. To understand the regulatory requirements driving these procedures, review Navigating EU Data Act and Digital Markets Act Cloud Compliance Requirements.
Sovereignty ROI modelling compares switching costs (data export charges, egress fees, application reconfiguration, testing, transitional support, and training) against the costs of continued US platform dependency: vendor lock-in economics, lost pricing leverage, geopolitical risk exposure, and regulatory compliance penalties. Over 3-5 year horizons, Data Act egress fee elimination and proportionate charges standards improve migration economics, and quantifying the insurance value of reduced CLOUD Act exposure and vendor lock-in escape optionality reveals sovereignty benefits extending beyond direct cost comparisons.
Switching cost components include one-time migration expenses, currently including egress fees that will be eliminated post-January 2027. Application reconfiguration labour for proprietary API replacement and integration rewiring often exceeds data transfer costs, particularly for workloads with deep platform dependencies. Testing and validation effort ensures functional equivalence through performance benchmarking and compatibility verification. Hidden costs including productivity impacts during transition, learning curves for European platform tooling, and troubleshooting expertise development require comprehensive total cost of ownership analysis beyond simple platform pricing comparisons.
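A rough way to see how these components add up: the sketch below totals one-time migration expenses and applies an uplift for the hidden costs the paragraph mentions. All figures and the 20% uplift are illustrative assumptions, not benchmarks.

```python
def total_switching_cost(components: dict, hidden_multiplier: float = 1.2) -> float:
    """Sum one-time migration expenses, then apply an uplift for hidden
    costs (productivity dips, learning curves, troubleshooting)."""
    return sum(components.values()) * hidden_multiplier

costs = {                            # illustrative figures, not benchmarks
    "egress_fees": 15_000,           # drops to zero post-January 2027
    "reconfiguration_labour": 60_000,
    "testing_and_validation": 20_000,
    "training": 10_000,
}
total = total_switching_cost(costs)  # 105_000 * 1.2 = 126_000.0
```

Note how reconfiguration labour dominates the total, consistent with the observation that it often exceeds data transfer costs for deeply platform-dependent workloads.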
Research shows that while 96% of businesses use public cloud services, 42% of companies have already repatriated at least part of their workloads or plan to do so. Primary drivers are cost (43% cite higher-than-expected bills) and security concerns (33%). This suggests sovereignty decisions sit within broader cloud strategy reassessment, not isolated compliance exercises.
Cost models by company size reveal break-even thresholds. For startups with 25 employees, a cloud 5-year TCO of $800K versus $1.025M on-premise favours cloud retention. Mid-size companies with 300 employees see a cloud 5-year TCO of $6.155M versus $7.985M on-premise, saving roughly $1.8M by staying cloud-hosted. At enterprise scale with 2,000+ employees, a cloud 5-year TCO of $33.4M versus $30.5M on-premise makes repatriation roughly $2.9M cheaper. These models suggest sovereignty economics improve at scale, once operational expertise justifies infrastructure ownership.
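Using the 5-year TCO figures quoted above, identifying the cheaper option per segment is a one-line comparison; the segment labels and tuple layout here are just illustrative scaffolding.

```python
def cheaper_option(cloud_tco: int, onprem_tco: int) -> tuple:
    """Return the lower-cost option and the absolute 5-year saving."""
    if cloud_tco <= onprem_tco:
        return ("cloud", onprem_tco - cloud_tco)
    return ("on-premise", cloud_tco - onprem_tco)

# 5-year TCO figures from the models above, as (cloud, on-premise) in USD:
segments = {
    "startup_25":      (800_000, 1_025_000),
    "midsize_300":     (6_155_000, 7_985_000),
    "enterprise_2000": (33_400_000, 30_500_000),
}
results = {name: cheaper_option(c, o) for name, (c, o) in segments.items()}
```

The crossover shows up clearly: only at enterprise scale does owning infrastructure beat cloud hosting on raw 5-year cost.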
Vendor lock-in opportunity cost quantifies lost flexibility and leverage beyond switching fees. When trapped by proprietary dependencies, you lose the ability to negotiate competitive pricing, must accept unfavourable terms including price increases and service changes, and face strategic vulnerability when business-critical infrastructure sits within single-vendor control. Positioning sovereignty investment as purchasing escape optionality reframes the analysis: you’re not just paying migration costs, you’re buying future flexibility.
Insurance value calculation models geopolitical risk reduction benefits through probability-weighted scenarios. What’s the likelihood of CLOUD Act data demands creating legal exposure? What would service disruption cost in operational continuity? What regulatory penalties might GDPR conflicts trigger? Sovereignty provides protection premium against tail risks that pure cost analysis overlooks but prudent risk management addresses. Quantifying this insurance value requires sector-specific risk assessment—regulated industries face higher exposure than general business applications.
Break-even timelines help decision-making. If migration costs recover through reduced vendor lock-in expenses, improved pricing leverage, and geopolitical risk reduction within 18-24 months, investment economics justify sovereignty adoption. Longer break-even periods require stronger conviction about future risk materialisation or regulatory mandate likelihood. Most organisations targeting 3-5 year planning horizons find sovereignty investment economically defensible when including lock-in opportunity costs and insurance value.
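The break-even calculation itself is a single division; the inputs below are illustrative assumptions about migration cost and monthly benefit, not benchmarks.

```python
def break_even_months(migration_cost: float, monthly_benefit: float) -> float:
    """Months until cumulative benefits (lock-in savings, pricing
    leverage, risk reduction) recover the migration investment."""
    return migration_cost / monthly_benefit

# Illustrative: a 126k migration recovered at 7k/month of combined benefit
months = break_even_months(126_000, 7_000)  # 18.0 months, inside the 18-24 month window
```

The hard part is not the division but defending the monthly-benefit estimate, which must bundle soft factors like pricing leverage and risk reduction into a single figure.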
Learn more: Calculating Cloud Migration Costs and Modelling Return on Investment for Sovereignty provides financial analysis frameworks with ROI calculators, TCO comparisons, and insurance value quantification methodology. Once you’ve determined the economic case, Implementing Data Act Switching Procedures and Deploying European Infrastructure guides you through technical execution.
Gaia-X represents European federated cloud architecture where control, governance, and infrastructure distribute across multiple independent providers rather than centralising within single vendors like AWS or Azure. With 180+ data spaces operational in 2025, Gaia-X enables sector-specific implementations including Catena-X automotive supply chains and EONA-X healthcare networks using standardised protocols for interoperability while maintaining distributed sovereignty control, though operational complexity increases compared to centralised alternatives.
Federated architecture addresses sovereignty through distributed governance. No single entity controls data access, infrastructure operations, or policy enforcement. Instead, agreed protocols and standards enable cross-provider data flows while maintaining organisational control over access rights, residency requirements, and platform selection. This contrasts with AWS and Azure models concentrating power in vendor-controlled ecosystems where unilateral policy changes, pricing adjustments, and service deprecation decisions affect all customers simultaneously.
The Gaia-X initiative started in 2019 as a partnership between German Economy Minister Peter Altmaier and his French counterpart Bruno Le Maire, presented at the Digital Summit in Dortmund. Operating as a non-profit association based in Belgium with European-dominated governing bodies, Gaia-X functions as an ecosystem of nodes interconnected via open standards rather than a single cloud platform. This design prevents the power concentration that centralised models enable.
Real-world Gaia-X deployments demonstrate viability at scale. Catena-X connects automotive manufacturers and suppliers through federated data sharing maintaining sovereignty—each participant controls access to their data while enabling supply chain coordination. EONA-X provides healthcare data spaces enabling research collaboration without centralised patient information repositories, addressing privacy requirements while facilitating medical innovation. These sector implementations prove federated architecture delivers operational capability beyond theoretical frameworks.
Lighthouse Data Spaces recognised by Gaia-X AISBL showcase how Gaia-X concepts foster European data sovereignty and value creation. Data spaces span diverse sectors including automotive, aeronautics and space, manufacturing, and cloud services, using the Gaia-X Trust Framework to establish trust mechanisms for data exchanges and data services.
Operational trade-offs include increased technical complexity. Managing multi-provider integrations, standardised protocols, and federation governance requires sophisticated orchestration that centralised platforms handle internally. Less mature tooling compared to AWS and Azure ecosystem development means organisations accepting federated architecture sacrifice convenience for sovereignty benefits. The calculus favours federation when distributed control, vendor lock-in prevention, and European jurisdiction justify operational overhead.
Implementation challenges include managing relationships between players whose interests don’t always align. Governance tensions, insufficient technical maturity in some areas, and misaligned European stakeholder priorities create delays and uncertainty about long-term viability for some implementations. Success requires commitment to open standards, collaborative governance, and accepting evolution timelines longer than purchasing established centralised alternatives.
Learn more: What Digital Sovereignty Means and Why European Technology Independence Matters covers Gaia-X architecture principles and federated models, while Comparing European Cloud Providers and Open Source Alternatives to US Platforms details Catena-X and EONA-X sector implementations with technical specifications and case studies.
Incremental migration suits organisations with limited European platform expertise, complex technical dependencies requiring phased learning, risk-averse operational cultures preferring progressive validation, or multicloud strategies distributing workloads across US and European providers for balanced risk mitigation. Complete replacement makes sense when regulatory mandates demand full sovereignty for government contractors or defence participants, CLOUD Act exposure risk exceeds transition disruption costs, or vendor relationship deterioration from pricing disputes or service quality creates urgency justifying compressed migration timelines despite execution risks.
Incremental approaches sequence workloads by migration complexity and organisational risk tolerance. Beginning with file sharing transitions from Google Drive to NextCloud and team collaboration shifts from Slack to Mattermost builds technical expertise and organisational confidence before addressing databases, application infrastructure, and proprietary service replacements requiring deeper architecture changes and higher failure risks. This progression allows demonstration of European platform capabilities to stakeholders while maintaining production system stability.
Multicloud strategies maintain workload distribution across US and European providers, accepting hybrid architecture complexity to preserve access to AWS and Azure mature service ecosystems while reducing lock-in vulnerability and CLOUD Act exposure for sovereignty-sensitive workloads including customer data, proprietary algorithms, and regulated information. This offers pragmatic middle ground between full dependency and complete independence for organisations wanting gradual sovereignty adoption without wholesale platform replacement.
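A multicloud split like this needs an explicit placement rule deciding which workloads stay on US hyperscalers and which move to European providers. The sketch below is one hedged way to encode such a rule; the categories and labels are assumptions, not a standard.

```python
def placement(data_sensitivity: str, regulated: bool) -> str:
    """Route sovereignty-sensitive workloads (customer data, proprietary
    algorithms, regulated information) to a European provider; everything
    else may remain on a US hyperscaler."""
    if regulated or data_sensitivity == "high":
        return "eu_provider"
    return "us_hyperscaler"

tier = placement("high", regulated=False)  # "eu_provider"
```

Keeping the rule this explicit also helps later audits: every workload's location can be traced back to a documented sensitivity and regulatory classification.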
Skills development timelines favour incremental adoption. Teams need experience with European platform tooling, open-source operations, and federated architecture patterns before reliably operating production systems. Phased migration allows capability development through lower-risk implementations before committing critical systems requiring immediate competency across unfamiliar technology stacks under production service level pressures. Training parallel to migration reduces risk compared to compressed timelines demanding instant expertise.
Complete platform replacement becomes necessary when regulatory mandates require full sovereignty implementation. Government contractors, defence industry participants, and handlers of classified information face compliance requirements that gradual adoption doesn’t satisfy. In these cases, compressed timelines with greater execution risk become acceptable because operational alternatives don’t exist.
Vendor relationship deterioration also argues for complete replacement. When pricing disputes create untenable cost structures, service quality degradation threatens operational continuity, or compliance conflicts create regulatory exposure, remaining on current platforms carries greater risk than migration disruption. These scenarios justify aggressive switching despite technical complexity and organisational change management challenges.
Decision frameworks weigh current risk exposure against migration capability. High sovereignty requirements with high European platform expertise suggest complete migration. Low sovereignty urgency with limited expertise favours incremental approaches. Mixed scenarios—high urgency but limited expertise, or low urgency with strong expertise—require contextual judgment balancing competing factors.
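This two-axis framework can be expressed as a small function. The two clear quadrants follow the text directly; the labels for the mixed quadrants are assumptions, since the text notes those cases require contextual judgment.

```python
def sovereignty_strategy(high_urgency: bool, high_expertise: bool) -> str:
    """Map the framework's two axes to a migration posture. The mixed
    quadrants carry assumed default labels; in practice they require
    contextual judgment balancing competing factors."""
    if high_urgency and high_expertise:
        return "complete migration"
    if not high_urgency and not high_expertise:
        return "incremental adoption"
    if high_urgency:  # urgent, but limited European platform expertise
        return "consulting-supported accelerated migration"
    return "opportunistic incremental migration"  # strong expertise, low urgency
```

Encoding the matrix this way forces each workload review to state its urgency and expertise assessment explicitly rather than leaving the judgment implicit.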
Learn more: Assessing Cloud Vendor Lock-in and Planning Strategic Migration to European Platforms provides migration planning frameworks with phased approaches and decision criteria, while Comparing European Cloud Providers and Open Source Alternatives to US Platforms details platform options informing strategy selection.
The Digital Markets Act targets AWS, Azure, and Google Cloud for gatekeeper designation, which would impose interoperability mandates, data portability obligations, and non-discriminatory access requirements; cloud service investigations launched November 2025 are examining anticompetitive practices. DMA enforcement complements Data Act switching rights by imposing obligations on dominant platforms rather than creating customer entitlements, targeting the market power concentration that blocks European alternatives through interoperability restrictions and proprietary lock-in mechanisms.
Gatekeeper obligations under DMA require interoperability with competing services, preventing AWS, Azure, and Google Cloud from leveraging proprietary APIs and integration patterns to maintain market dominance. This creates standardisation pressure reducing switching barriers and enabling European alternatives to compete on compliance positioning, pricing, and sovereignty benefits rather than purely matching feature completeness against mature hyperscaler service catalogues.
November 2025 cloud investigations examine whether AWS, Azure, and Google Cloud engage in anticompetitive practices. The European Commission is specifically investigating interoperability barriers between cloud services, data access restrictions for business users, service tying and bundling practices, and potentially imbalanced contract terms. These investigations could require structural remedies beyond Data Act switching procedures if anticompetitive behaviour is confirmed.
Enforcement could designate Microsoft Azure and Amazon Web Services as gatekeepers for cloud services even though neither meets the standard DMA quantitative thresholds. Recent analyses suggest both companies occupy entrenched positions relative to other market participants, justifying gatekeeper obligations even when quantitative metrics fall below regulatory triggers. If the investigations confirm gatekeeper status, cloud services would be added to both companies’ existing core platform designations, expanding compliance requirements.
Combined regulatory pressure from Data Act customer switching rights and DMA gatekeeper competition obligations reshapes cloud economics from lock-in dependency toward portability-based competition. This improves European alternative viability by mandating technical standards US platforms must support regardless of strategic preference for proprietary approaches maintaining lock-in. Interoperability requirements reduce competitive advantage from ecosystem depth when customers can migrate workloads to alternative providers maintaining functionality.
The EU opened a Silicon Valley office headed by Gerard de Graaf to establish closer contact with Apple, Google, and Meta and to ensure American tech companies comply with European rules. This signals enforcement seriousness beyond policy announcements. Mandated interoperability may add security complexity from managing multiple provider integrations, but it makes it harder for companies to secure market share through network-driven dominance that relies on customer lock-in.
Learn more: Navigating EU Data Act and Digital Markets Act Cloud Compliance Requirements provides comprehensive regulatory coverage including DMA gatekeeper obligations and November 2025 investigation details. For risk analysis of why these regulations matter, see Evaluating CLOUD Act Exposure and Geopolitical Risks in Technology Platform Dependencies.
Implementation support spans European cloud managed services from OVHcloud, STACKIT, and Deutsche Telekom offering migration assistance; open-source consulting firms like Adfinis (Swiss-based) and Code Enigma (UK-focused) providing NextCloud and Mattermost deployment expertise; regulatory compliance advisors including Deloitte and IAPP analysing Data Act requirements; and Gaia-X consortium resources offering federated architecture guidance and sector-specific data space implementations. This ecosystem enables organisations lacking internal expertise to execute sovereignty strategies through managed services, consulting partnerships, or hybrid approaches balancing control with professional implementation support.
Managed sovereignty services from European cloud providers include migration planning assistance covering lock-in assessment and compatibility analysis, technical execution support for data export and workload transfer, testing validation ensuring functional equivalence, and post-migration operations including monitoring, optimisation, and incident response. This enables sovereignty achievement without building complete in-house expertise for European platform operations, particularly valuable for organisations prioritising speed over internal capability development.
Nordcloud offers vendor-agnostic guidance across hyperscalers (AWS, Microsoft Azure, Google Cloud) and EU-native solutions with structured five-step approach: Sovereignty Workshop aligning teams on goals, Risk Assessment evaluating threats and exposure, Solution Blueprinting matching requirements to sovereignty models, Architecture & Migration implementing with minimal disruption, and Compliance Monitoring maintaining oversight. The vendor-agnostic positioning helps organisations evaluate trade-offs between European cloud, hybrid approaches, and US platform retention with sovereignty controls.
Open-source consulting specialisation addresses NextCloud enterprise deployment including scalability architecture, backup strategies, and federation configuration; Mattermost integration covering single sign-on, webhooks, and bot frameworks; and LibreOffice migration handling macro compatibility, workflow adaptation, and user training. This provides expertise that organisations using SaaS alternatives historically outsourced to platform vendors but must now develop or procure when pursuing self-hosted sovereignty.
Build-versus-buy decision frameworks weigh European cloud managed services, which offer faster deployment and lower expertise requirements but ongoing vendor relationships, against self-hosted open source, which provides greater control, transparency verification, and lower long-term costs but a higher operational burden. Consulting partnerships offer a middle ground, providing implementation expertise while maintaining organisational operational ownership, suitable for organisations wanting sovereignty without permanent external dependencies.
Adoption best practices across consulting providers emphasise starting with high-risk workflows like board communications and research and development, piloting sovereign tools in parallel with existing systems before full commitment, investing in integration and staff training as migration components, updating procurement policies to include sovereignty criteria in vendor evaluation, and securing executive sponsorship by framing sovereignty as risk reduction rather than technology replacement.
Learn more: Implementing Data Act Switching Procedures and Deploying European Infrastructure provides implementation blueprints including consulting service evaluation criteria and build-versus-buy frameworks. To understand the platform options these services deploy, review Comparing European Cloud Providers and Open Source Alternatives to US Platforms.
What Digital Sovereignty Means and Why European Technology Independence Matters: Comprehensive overview establishing operational definitions including data location control, access rights management, and platform independence. Explains EuroStack seven-layer initiative architecture, details Gaia-X federated model, and positions sovereignty as pragmatic risk management rather than aspirational policy goal.
Evaluating CLOUD Act Exposure and Geopolitical Risks in Technology Platform Dependencies: Strategic risk analysis explaining CLOUD Act extraterritorial data access mechanics, modelling geopolitical disruption scenarios including service withdrawal, compliance weaponisation, and data demands. Introduces digital colonialism framework and provides risk assessment methodology helping determine when sovereignty investment is justified as insurance strategy.
Navigating EU Data Act and Digital Markets Act Cloud Compliance Requirements: Authoritative regulatory guide covering Data Act switching procedure deadlines, egress fee elimination enforcement, functional equivalence standards interpretation, proportionate charges calculation, transitional period negotiation, DMA gatekeeper obligations, and CLOUD Act conflict resolution.
Calculating Cloud Migration Costs and Modelling Return on Investment for Sovereignty: Financial analysis framework comparing sovereignty investment costs including switching expenses, reconfiguration labour, testing effort, and training against continued US platform dependency costs covering vendor lock-in economics, geopolitical risk exposure, and compliance penalties. Includes 3-5 year ROI modelling and insurance value quantification.
Assessing Cloud Vendor Lock-in and Planning Strategic Migration to European Platforms: Actionable methodology for quantifying current platform dependencies through proprietary API inventory, data export restriction analysis, egress fee calculation, and compatibility gap identification. Provides phased migration framework sequencing low-risk workloads before critical infrastructure plus functional equivalence testing protocols.
Implementing Data Act Switching Procedures and Deploying European Infrastructure: Technical implementation blueprint providing step-by-step Data Act compliance procedures, functional equivalence testing acceptance criteria, platform-specific data export guidance for AWS, Azure, and GCP, NextCloud enterprise deployment documentation, Mattermost setup instructions, five-layer AI sovereignty stack architecture, and build-versus-buy decision frameworks.
Comparing European Cloud Providers and Open Source Alternatives to US Platforms: Evidence-based comparative analysis of European IaaS/PaaS providers including OVHcloud, STACKIT, Deutsche Telekom Cloud, and 1&1 Ionos versus AWS, Azure, and GCP across performance, features, compliance, sovereignty, and cost dimensions. Covers open-source platforms including NextCloud, Mattermost, and LibreOffice; Gaia-X sector implementations through Catena-X automotive and EONA-X healthcare; European AI alternatives like Mistral AI and OpenEuroLLM; plus real-world migration case studies.
Whether sovereignty investment is worth it depends on your CLOUD Act exposure risk, regulatory compliance requirements, vendor lock-in vulnerability, and risk tolerance for geopolitical disruption scenarios. Organisations in regulated industries including finance and healthcare, handling sensitive customer data, or facing GDPR conflicts with US jurisdiction typically justify sovereignty investment as defensive insurance. Data Act egress fee elimination in January 2027 improves migration economics, reducing data transfer switching costs to near zero, while functional equivalence mandates protect against degraded post-migration performance. Calculate ROI over 3-5 years including vendor lock-in opportunity costs and geopolitical risk insurance value, not just direct platform pricing differences.
Egress fees are being eliminated on a specific timeline. September 2025 brought switching procedure mandates including data portability, migration support, and functional equivalence testing. January 12, 2027, marks the full egress fee prohibition, after which providers cannot charge any fees for data export during cloud switching. Current transparency requirements mandate egress fee disclosure but still allow charges; post-2027 enforcement completely eliminates this economic lock-in mechanism. The proportionate charges standard still permits reasonable fees reflecting actual technical migration effort rather than strategic deterrence, subject to regulatory oversight and customer challenge. This shifts cloud economics from lock-in dependency toward portability-based competition.
Physical data location meaning servers residing within EU geographic boundaries provides necessary but insufficient sovereignty component. European data centres operated by US companies remain subject to CLOUD Act jurisdiction because American corporate domicile enables US government extraterritorial access regardless of physical server location. True sovereignty requires European legal jurisdiction through providers incorporated under EU law, operational control meaning access rights management independent of US authority, and platform independence avoiding proprietary lock-in preventing migration. Organisations pursuing sovereignty need European hosting plus European platform control.
EuroStack represents a comprehensive approach to European technology independence spanning seven infrastructure layers: physical data centre infrastructure including servers, networks, and facilities within the EU; a cloud IaaS layer providing compute, storage, and networking via OVHcloud and STACKIT alternatives; platform services covering databases, middleware, and managed offerings; application frameworks including development tools, APIs, and integration platforms; collaboration tools like NextCloud file sharing and Mattermost team communication; an AI sovereignty stack with European models, local training, and sovereign inference; and federated governance through Gaia-X architecture and interoperability standards. Implementing all seven layers requires a €300 billion investment over 10 years according to a Bertelsmann Foundation analysis, though organisations typically pursue incremental adoption focusing on highest-risk dependencies first.
Multicloud is a workable middle path: distributing workloads across US and European providers maintains access to mature AWS and Azure service ecosystems while reducing lock-in vulnerability and CLOUD Act exposure for sovereignty-sensitive workloads including customer data, proprietary algorithms, and regulated information. This hybrid approach accepts increased operational complexity from managing multiple platforms, integration patterns, and cost structures to balance sovereignty risk mitigation with pragmatic access to US platform capabilities European alternatives haven’t yet matched. It suits organisations wanting gradual sovereignty adoption without complete platform replacement, though it requires sophisticated workload orchestration and clear criteria for which systems deploy where based on data sensitivity, regulatory requirements, and risk tolerance.
Migration timelines vary by technical complexity and organisational approach. Simple workloads including file sharing and collaboration tools often migrate within weeks using NextCloud and Mattermost deployments. Comprehensive platform migrations covering databases, applications, and infrastructure typically span 6-18 months for phased approaches starting with low-risk systems before critical dependencies. Data Act transitional periods recognise this, allowing extended timelines for complex workloads with continued provider support obligations. Incremental strategies enable faster initial deployment achieving partial sovereignty quickly versus complete platform replacement requiring all-or-nothing migration before operational cutover. Key variables include current lock-in depth from proprietary API dependencies, destination platform maturity affecting feature completeness gaps, internal expertise with European platform skills, and risk tolerance determining aggressive versus conservative validation approaches.
European sovereignty requires capabilities varying by implementation approach. Managed European cloud services from OVHcloud and STACKIT need skills similar to AWS and Azure operations including infrastructure management, monitoring, and incident response, though platform-specific tooling differs. Open-source self-hosting with NextCloud and Mattermost demands server administration, backup management, security patching, and troubleshooting expertise that SaaS alternatives historically provided. Federated architecture through Gaia-X introduces distributed system complexity requiring integration pattern knowledge and standardised protocol implementation. Organisations lacking internal expertise can pursue consulting partnerships through firms like Adfinis and Code Enigma, managed sovereignty services, or hybrid approaches combining professional implementation support with internal operational ownership. The skills gap represents a genuine migration barrier, justifying incremental adoption that allows team capability development before committing critical systems.
German state implementations demonstrate open-source sovereignty viability for file sharing workloads, proving NextCloud handles enterprise scale with thousands of users, provides Microsoft OneDrive and Google Drive functional equivalence, and operates under European jurisdiction with full data control. However, these deployments address specific use cases covering document collaboration and file storage rather than comprehensive platform replacement, representing incremental sovereignty for well-defined scope. Barcelona’s broader open-source adoption using multiple platforms across city operations provides more comprehensive validation but still represents public sector implementation with different risk tolerance, regulatory requirements, and operational constraints than commercial technology companies face. Case studies prove sovereignty achieves operational viability though implementation complexity, skills requirements, and feature completeness gaps versus US platforms remain genuine considerations requiring organisation-specific evaluation.
European digital sovereignty has evolved from policy aspiration to operational necessity. The combination of CLOUD Act jurisdictional conflicts, Data Act switching mandates, and Digital Markets Act gatekeeper obligations creates a regulatory environment favouring platform independence. The vendor lock-in economics that historically trapped organisations in proprietary ecosystems face elimination through egress fee prohibition and functional equivalence standards. European alternatives from OVHcloud and STACKIT to NextCloud and Mattermost now provide viable migration targets, particularly for workloads prioritising compliance and jurisdictional control over feature breadth.
Your specific sovereignty strategy depends on current platform dependencies, CLOUD Act exposure risk, regulatory compliance requirements, and risk tolerance for geopolitical disruption scenarios. Start by assessing vendor lock-in through proprietary API inventory and egress fee calculation. Evaluate which workloads face highest sovereignty requirements from data sensitivity or regulatory mandates. Model migration costs against continued dependency costs over 3-5 year horizons, including lock-in opportunity costs and insurance value from risk reduction.
For most organisations, incremental adoption starting with file sharing and collaboration tools builds expertise before committing critical infrastructure. Pilot European alternatives in parallel with existing systems, demonstrating viability to stakeholders while maintaining operational stability. Update procurement policies to include sovereignty criteria in vendor evaluation. Secure executive sponsorship by framing sovereignty as risk reduction rather than technology replacement.
The seven detailed guides linked throughout this hub provide comprehensive coverage from fundamental concepts through technical implementation. Whether you’re responding to immediate regulatory deadlines or building long-term resilience against platform dependency vulnerabilities, these resources support each stage of your sovereignty journey from awareness through execution.
Why AI-Native Startups Win Against Legacy Companies and What It Means Now

Legacy companies are facing a problem. AI-native competitors are scaling 2-3x faster than top-quartile SaaS benchmarks. AI startups reach $1M revenue in 11.5 months versus 15 months for traditional SaaS.
The competitive gap is structural. Data-centric architecture, organisational agility, and self-reinforcing data flywheels create advantages that compound over time. It’s not a feature gap. It’s a foundations gap.
Companies like Airtable, Handshake, and Opendoor are announcing “refounding” initiatives. They get it. The competitive crisis is here.
So here’s the question: Can your company actually compete, or is the architectural and cultural gap permanent?
This article is part of our comprehensive guide to understanding startup refounding and AI-driven business model transformation, where we examine three structural advantages with velocity metrics and competitive implications. It provides a framework for assessing your competitive position and the urgency of response. Let’s get into it.
ICONIQ Capital’s “State of Software 2025” documents AI-native companies growing 2-3x faster than top-quartile traditional SaaS benchmarks. Some AI-native startups achieved $30M ARR in just 20 months—about 5x faster than conventional SaaS trajectories.
The velocity advantage comes from three compounding factors: data flywheel effects, organisational agility, and data-centric architecture.
Traditional SaaS companies built around static workflows can’t adapt fast enough. They’re competing against companies designed for continuous AI-driven iteration. The speed gap compounds over time—early data advantages enable better models, which attract more users, which generate more proprietary data. It’s a loop.
Bain research shows 90% faster implementation times for AI-native solutions versus legacy systems requiring integration work.
The reason buyers prefer AI-native vendors is straightforward—they deliver better products with faster innovation rates. Companies built around AI from the ground up ship improvements that incumbents retrofitting AI simply can’t match.
One CTO at a high-growth SaaS company reported nearly 90% of their code is now AI-generated through Cursor and Claude Code, up from 10-15% twelve months ago with GitHub Copilot. That’s not incremental change. That’s a fundamental shift in how software gets built.
A data flywheel is a self-improving loop where user interactions generate proprietary data, which improves AI models, which attracts more users, which generates more data. Round and round.
Unlike traditional competitive moats—brand, network effects, switching costs—data flywheels compound exponentially over time.
Every customer interaction becomes training data that improves product quality. Legacy companies adding AI features can’t replicate the flywheel because their workflow-centric architecture doesn’t capture interaction data systematically. They weren’t designed for it. You can’t bolt a flywheel onto a static workflow system. Our technical deep dive on agentic AI architecture and the semantic gap challenge in data-centric systems explains the data flywheel technical implementation and AI-native architecture advantages in detail.
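The loop can be shown with a toy simulation. All coefficients here are invented purely to illustrate the compounding shape, not taken from any real product:

```python
# Toy data flywheel: users generate data, data lifts model quality,
# quality attracts more users. Invented coefficients throughout.
users, data, quality = 1_000.0, 0.0, 0.5

for quarter in range(8):
    data += users * 10                       # interactions become training data
    quality = min(0.99, 0.5 + data * 4e-7)   # more data improves the model
    users *= 1 + quality * 0.1               # a better product attracts users

print(round(users), round(quality, 3))
```

Each pass through the loop feeds the next one, which is what distinguishes a flywheel from a one-off feature improvement: growth in any variable accelerates growth in the others.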
Your data is your moat. Proprietary datasets—transaction histories, usage patterns, domain-specific content—cannot be purchased or licensed by competitors. First-mover advantages in data collection create structural barriers to entry. If you have unique data, you have an advantage competitors can’t buy.
Here’s the problem: 42% of business leaders worry they don’t have enough proprietary data to effectively train or customise AI models. If you’re in that 42%, you’re already behind.
Integrating that proprietary data into model feedback loops enables organisations to transform static models into continuously improving systems, reducing iteration cycles from weeks to hours. Legacy companies can’t compete with that velocity.
Data agreements and governance become strategic competitive protection mechanisms. If you’re not treating data as a strategic asset, you’re handing the advantage to competitors who do.
Organisational agility is the capacity for rapid decision-making, cross-functional collaboration, and pivoting without bureaucratic friction. It’s not just moving fast. It’s moving fast repeatedly without breaking things.
AI-native startups are built with flat hierarchies, generalist teams, and distributed decision authority. This enables fast iteration cycles. McKinsey research shows flat hierarchies enable a five- to ten-fold increase in the speed of decision-making and change.
Legacy companies are constrained by hierarchical structures, approval chains, and functional silos that slow response time to competitive threats. Every decision requires four approval layers and three committee meetings. By the time you’ve decided, the AI-native competitor has shipped. Learn more about managing organisational transformation during startup refounding and cultural change, where we examine the organisational agility requirements and cultural speed advantages in depth.
The cultural differences matter. AI-native companies embrace risk-taking, normalise failure as learning, and reward experimentation over error-free execution.
Handshake’s refounding example shows what this looks like in practice. They reintroduced five-day office weeks and “startup culture” pace to compete in the AI-transformed recruiting market. Their AI division expanded from 15 to 150 employees within months and generated $100M in annualised revenue in just eight months. For more concrete examples, see our startup refounding case studies from Airtable, Handshake, Opendoor, and MoneyGram.
CEO Garrett Lord stated: “Winners and losers are being defined right now.” Without aggressive AI investment, companies become merely “okay,” trapped in incremental improvements generating modest quarterly gains—a pattern leading to corporate deceleration. Harsh, but accurate.
Leading companies deploy cross-functional teams blending business domain experts, data scientists, data engineers, and IT developers. This ensures solutions are technically sound and business-relevant. Airbus formed “AI squads” for manufacturing AI projects, each including factory engineers plus data experts, which accelerated development and adoption.
The measurement metrics matter: decision cycle time, time from idea to production, cross-functional project velocity, approval layer count. BCG research shows 74% of legacy companies struggle to derive AI value, largely due to organisational rigidity not technology gaps.
If your approval chains are slowing you down, you’re losing to competitors who don’t have those chains. It’s that simple.
Data-centric architecture is organisational design where proprietary data is the central product and all systems are built modularly around data capture and utilisation.
Workflow-centric architecture is traditional design structured around static processes, rigid workflows, and function-specific systems.
There’s an incompatibility. Workflow-centric systems are optimised for process efficiency, not data collection or AI model feedback loops. You built them to do a thing. AI needs them to learn from doing the thing. These are different objectives.
AI-native companies design the entire stack around data: outcome interfaces (natural language), agent operating systems (orchestration), and systems of record (data). This three-layer approach enables agent-to-agent communication and continuous improvement through learning.
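The three-layer stack can be sketched as follows. The layer names come from the text; the class and method names are ours and purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class SystemOfRecord:
    """Data layer: systematically captures every interaction for learning."""
    events: list = field(default_factory=list)

    def record(self, event: dict) -> None:
        self.events.append(event)

@dataclass
class AgentOS:
    """Orchestration layer: routes work to agents and logs each interaction."""
    store: SystemOfRecord

    def run(self, task: str) -> str:
        result = f"handled: {task}"  # stand-in for real agent execution
        self.store.record({"task": task, "result": result})
        return result

@dataclass
class OutcomeInterface:
    """Natural-language layer: users state outcomes, not procedures."""
    agent_os: AgentOS

    def request(self, outcome: str) -> str:
        return self.agent_os.run(outcome)

ui = OutcomeInterface(AgentOS(SystemOfRecord()))
print(ui.request("summarise the Q3 pipeline"))
```

The detail that matters is the `record` call in the middle layer: every request leaves interaction data behind for later learning, which is exactly what a bolt-on atop a workflow system lacks.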
Legacy bolt-on AI fails because it sits atop workflow systems not designed to capture interaction data or enable agent-to-agent communication. Retrofitting data flywheel capabilities onto workflow systems requires architectural reimagination, not feature additions.
Migration from workflow-centric to data-centric requires architectural reimagination. Technical debt and integration complexity make transformation exponentially harder for established legacy systems. The longer you wait, the worse it gets.
77% of organisations expect their AI agents to continuously improve performance through learning. This requires real-time data availability, embedded governance in access controls, and closed-loop feedback systems.
Data quality directly impacts which use cases can be implemented successfully. Garbage in, garbage out—but at AI scale.
Honest assessment: the architectural and cultural gaps are substantial but not insurmountable. It requires treating transformation as a “refounding moment.”
Airtable’s June 2025 announcement said AI integration “feels like refounding the company.” CEO Howie Liu emphasised this is not a pivot because it’s not about changing direction after getting something wrong. They chose “the language of founding because the stakes feel the same.”
That’s the attitude you need. Half-measures won’t cut it.
Bain analysis identifies four competitive response options: defend core (low AI exposure), selective transformation (hybrid approach), platform pivot (adjacent markets), and complete refounding (existential AI threat). Our guide on how to decide whether your company should refound or add AI features incrementally provides frameworks for competitive response and strategic decision urgency.
The decision depends on competitive exposure, customer expectations, and data moat potential. Not every company needs to refound. But you need to honestly assess where you sit.
Leadership recognition is the first step. Incremental AI features won’t bridge the competitive gap. It requires reinvention.
Timeline reality: legacy transformation takes longer than AI-native greenfield development, creating ongoing competitive pressure. The question is: Can you transform faster than AI-native competitors compound their data flywheel advantages?
There are strategic opportunities where legacy advantages—customer relationships, domain data, regulatory position—create defensible positions. Semantic layer standardisation, industry-specific data moats, and regulatory constraints favouring established players all create windows of opportunity. Our refounding as competitive response overview examines these strategic considerations in detail.
But windows don’t stay open forever.
You can’t manage what you don’t measure. Here are the metrics that matter:
Velocity metrics: implementation speed should match the 90% faster AI-native baseline discussed earlier. Track decision cycle time and time-to-production for new features.
Lead Time for Changes measures speed from code commit to production deployment. Deployment Frequency tracks how often teams release successfully.
Change Failure Rate quantifies problematic deployments: (Failed Deployments / Total Deployments) × 100. Target example: 2 failures in 50 deployments = 4%.
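The formula above is simple enough to encode directly; a minimal sketch (the function name is ours):

```python
def change_failure_rate(failed: int, total: int) -> float:
    """Percentage of deployments that caused a failure in production."""
    if total == 0:
        raise ValueError("no deployments recorded")
    return failed / total * 100

# The worked example from the text: 2 failures in 50 deployments.
print(change_failure_rate(2, 50))  # 4.0
```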
Data flywheel health: proprietary data capture rate, model improvement velocity, and user value increase from AI enhancements.
Organisational agility indicators: approval layer count, cross-functional project velocity, experiment success rate, and failure normalisation culture.
Architecture assessment: percentage of systems data-centric versus workflow-centric, agent-readiness score, and semantic standardisation progress.
Business model transformation: shift from seat-based to outcome-based pricing, tasks completed metrics, and customer value realisation measures.
Measurement should focus on outcomes—whether developers deliver working software faster—rather than volume metrics like lines of code generated.
Task completion velocity shows AI users spend 3-15% less time in the IDE per task—small gains that compound across hundreds of tasks.
Technology ROI measures financial returns: (Financial Gain – Cost) / Cost. Example: $50K software investment yielding $150K savings = 200% ROI.
Track innovation metrics alongside stability indicators. Feature Adoption Rate validates development aligns with customer needs: (Monthly Active Users of Feature / Total Monthly Active Users) × 100.
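Both formulas can be sketched the same way (function names are ours; the ROI figures are the worked example from the text, the adoption numbers invented):

```python
def technology_roi(gain: float, cost: float) -> float:
    """ROI as a percentage: (gain - cost) / cost * 100."""
    return (gain - cost) / cost * 100

def feature_adoption_rate(feature_mau: int, total_mau: int) -> float:
    """Share of monthly active users who touch a given feature, in percent."""
    return feature_mau / total_mau * 100

print(technology_roi(150_000, 50_000))     # 200.0
print(feature_adoption_rate(300, 1_200))   # 25.0
```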
The DORA framework provides validated metrics: deployment frequency, lead time, change failure rate, and mean time to recovery.
If you’re not tracking these metrics, you’re flying blind in a competitive race where your competitors have full instrumentation.
The semantic layer gap represents the divide between current data infrastructure and what agentic AI requires. Gartner research predicts that by 2028, 60% of existing dashboards will be replaced by GenAI-powered narrative and visualisation.
The semantic layer gap is the absence of industry-specific vocabulary standards and data definition protocols enabling agent-to-agent communication. Your agents can’t talk to each other because they don’t speak the same language.
This creates a strategic opportunity. Companies establishing semantic standards can reshape competitive landscapes and create network effects.
Anthropic’s Model Context Protocol represents an industry standardisation effort. It creates a universal, open standard enabling developers to build secure, two-way connections between data sources and AI-powered tools. Early adopters like Block and Apollo have integrated MCP.
Industry-specific vocabularies become competitive assets: healthcare terminology, financial transaction schemas, logistics data formats.
First-mover advantage means organisations defining semantic standards gain disproportionate influence over ecosystem development. Data governance becomes a strategic differentiator. Internal standardisation enables faster AI integration and external API value.
The risk: waiting for standards to emerge allows competitors to establish protocols that may disadvantage your architecture. Invest in standards-setting now or wait for industry convergence—but waiting means losing influence.
Refounding is formal strategic reinvention of an established company involving business model transformation, usually tied to AI integration. Unlike pivots—which are course corrections—refounding means the stakes feel the same as founding originally. It’s a complete organisational and cultural reset. You’re not tweaking the business. You’re rebuilding it.
This is a permanent structural shift. Data from ICONIQ, Stripe, and Bessemer shows consistent 2-3x growth advantages and 23% faster revenue milestones. The data flywheel mechanism creates compounding advantages that widen over time, not temporary benefits. This isn’t going away.
Expect a multi-year transformation, typically 18-36 months for meaningful architectural migration. The timeline depends on technical debt, organisational complexity, and cultural readiness. The factor that matters: your transformation timeline competes against AI-native competitors compounding data advantages quarterly. They’re not waiting for you.
It is possible with a “startup within a startup” approach: a dedicated team freed from legacy constraints, empowered to deliver MVPs fast, operating at startup velocity inside the established organisation. This requires executive sponsorship and protection from bureaucratic friction. And you need to really mean it. Half-committed “innovation teams” fail.
A common mistake is treating AI as a feature addition rather than as a transformation requiring architectural, organisational, and cultural reinvention. Bolt-on AI solutions cannot create data flywheels or competitive velocity without underlying data-centric architecture. You can’t add a feature and call it AI transformation. It doesn’t work that way.
Outcome-based pricing means revenue models that charge for results delivered (tasks completed, outcomes achieved) rather than for seats or subscriptions. Example: Intercom charging for “conversations resolved” instead of agent seats. It is enabled by AI automation, where value delivery decouples from human headcount. You’re paying for what got done, not how many people could theoretically do it.
Organisational agility is the capacity for rapid decision-making through flat hierarchies, distributed authority, and cross-functional teams. “Moving fast” without structural enablers creates burnout and errors. Agility equals speed plus adaptability plus sustainability. Moving fast without agility is just chaos with momentum.
Refounding is not universally required. Bain identifies four strategic responses: defend core (low AI exposure), selective transformation (hybrid approach), platform pivot (adjacent markets), and complete refounding (existential AI threat). The decision depends on competitive exposure, customer expectations, and data moat potential. Assess honestly where you sit.
Assess proprietary data uniqueness: Can competitors license or purchase equivalent data? Does your data capture domain-specific patterns unavailable elsewhere? Is data capture systematic and improving models? If answers are “no,” your data moat is vulnerable. If you can buy your “proprietary” data from a vendor, it’s not proprietary.
The key cultural shift is risk tolerance and failure normalisation. Legacy cultures often reward error-free execution; AI-native cultures reward rapid experimentation and learning from failures. Cultural transformation requires leadership modelling and sustained commitment. You can’t tell people “fail fast” while firing everyone who ships a bug.
Organisation-wide AI understanding democratises innovation. Product, sales, and customer success teams identify AI applications faster than centralised AI teams alone. Executive modelling—leaders using AI tools visibly—drives cultural adoption faster than training programmes. If your CEO isn’t using AI daily, why should anyone else?
Architectural standard: Systems of Record (data layer), Agent Operating Systems (orchestration layer), Outcome Interfaces (natural language layer). Matters because AI-native companies design the entire stack around agent interoperability. Legacy bolt-ons lack the middle orchestration layer, limiting AI capabilities. Without that middle layer, your agents can’t coordinate. They’re just isolated tools.
Startup Refounding Case Studies from Airtable, Handshake, Opendoor and MoneyGram

You’re probably watching established startups face pressure from AI disruption and wondering how they’re actually responding. In 2025, four major companies publicly announced “refounding” strategies. We’ve got real implementation data to work with. This case study guide is part of our comprehensive understanding of startup refounding and AI-driven business model transformation, where we examine the strategic frameworks and practical implementation approaches behind this emerging trend.
Airtable, Handshake, Opendoor, and MoneyGram each took different paths. Product-first transformation. Adjacency-driven growth. Software-first pivots. Network-asset leverage. Each case study includes specific financial metrics, leadership decisions, organisational changes, and technical approaches.
What makes these worth examining is they’re navigating the same transformation challenges you might face. They’ve documented timelines, workforce impacts, and measurable outcomes. Let’s dig into what they actually did.
Refounding is when established startups publicly declare they’re rebuilding their company from first principles. It’s a formal strategic transformation. Understanding what refounding means and the difference from pivots is essential—it emphasises “founding moment” gravity rather than course correction.
Airtable CEO Howie Liu told the NYT the company considered calling it a relaunch or transformation, but ultimately chose “the language of founding because the stakes feel the same”.
Yale research on institutional drift provides the theoretical foundation. Organisations gradually lose their founding character over time through accumulated decisions. Success itself accelerates this drift through expansion complexity and acquisition-related cultural shifts.
Understanding why companies choose “refounding” over other terms reveals their strategic intent. Companies explicitly select this language to signal high stakes and commitment to fundamental change. Handshake’s chief marketing officer Katherine Kelly said the company is trying to bring startup culture “back into an existing business.”
It’s a comprehensive reassessment of goals, culture, and operational frameworks. Usually in response to market shifts. All four case study companies tie refounding to AI integration as the primary catalyst.
Refounding involves simultaneous changes across business model, product architecture, organisational culture, market positioning, and stakeholder communication. That’s a lot at once.
Mature startups possess competitive advantages established companies lack, but they also carry organisational complexity that can impede rapid innovation. Refounding addresses this paradox.
In June 2025, Airtable launched this movement by treating AI adoption as a foundational company reset rather than incremental feature development. CEO Howie Liu announced “every software product must be refounded for AI” rather than just adding AI capabilities to existing platforms.
Airtable concluded that AI-native design requires architectural decisions, not just feature additions. Companies that merely bolt AI features onto existing products will lose to AI-native competitors built from the ground up.
The company shifted from a simple project management tool to a comprehensive platform for collaboration and creativity. But the real change was architectural.
Airtable reorganised into two groups following Daniel Kahneman’s thinking model. The fast-thinking team focuses on rapid AI feature experimentation and weekly releases to customers. The slow-thinking team handles foundational infrastructure decisions with longer planning horizons: database architecture, security frameworks, and platform scalability investments that require months of deliberate planning. These technical architecture patterns demonstrate how companies rebuild data-centric systems for agentic AI.
This dual-speed structure lets them rebuild the collaborative work management platform with AI-native design while maintaining velocity. They’re integrating AI agents into workflow automation. Shifting from user-configured automations to AI-suggested intelligent workflows.
Howie Liu uses AI hourly and is Airtable’s number one inference-cost user globally. That’s leadership by example.
The company restructured its entire engineering organisation around AI-first methodologies. Product design methodology shifted. The cultural change supports a “refounding moment” mentality.
Liu urges PMs, engineers, and designers to play with AI products daily, not just read about them. He calls this becoming an “IC CEO”—leaders who roll up their sleeves and engage directly with building, coding, and experimenting.
The financial results are there. The company achieved over $100 million in free cash flow following its transformation.
The lesson here is when product architecture matters more than business model flexibility, you commit to the rebuild.
While Airtable rebuilt product architecture, Handshake took a different path. They leveraged an unexpected business adjacency.
The company, valued at $3.5 billion, announced its refounding in October 2025, built on validated success: it had grown an AI business from $0 to $100 million in annualised revenue in eight months.
CEO Garrett Lord’s adjacency-driven approach leveraged existing assets. The company was built on a simple belief: “talent is everywhere, opportunity is not.” They serve 20 million job seekers, 1,600+ universities, and 1 million employers. But they also had a network of 500,000 PhDs and 3 million Master’s degree holders.
When AI labs needed human experts to validate and improve models during post-training, Handshake filled that demand.
The financial validation came first. Current combined ARR hit $200 million. Projected year-end combined ARR: $300 million. The 2026 forecast reaches into the “high hundreds of millions.” The AI business is expected to surpass core recruiting operations by year-end.
That $100 million ARR in a new business line demonstrated product-market fit sufficient to justify company-wide refounding. Lower risk than exploring unproven AI capabilities.
Scale AI pioneered the expert network model for post-training AI models. When Meta partially acquired Scale AI in June 2025, it created market uncertainty.
Handshake entered a competitive landscape that included Mercor, Surge AI, and Turing. They rapidly scaled by leveraging their existing university partnerships and Fortune 100 relationships; all 100 of the Fortune 100 companies already worked with Handshake, including American Express, McDonald’s, and Nike.
The timing was perfect. As Lord stated, “There are times in your life when you’re like, ‘Oh gosh we could not be more well-positioned'”. On competitive urgency, he emphasised: “Winners and losers are being defined right now.”
The organisational transformation was substantial. Handshake implemented a 15% workforce reduction, affecting approximately 100 employees from a U.S. staff of 650.
They mandated five-day office weeks with expectations for employees to operate “with a pace and number of hours that is meaningful and will help us hit goals.” The company’s chief marketing officer Katherine Kelly said Handshake is trying to bring startup culture “back into an existing business.” These cultural transformation approaches illustrate the people-side challenges that accompany strategic refounding.
The AI division scaled rapidly. From 15 to 150 employees within months.
Lord acknowledged the difficulty. On employee departures, he said “it really, really sucks”. But he argues complacency risks decline. Without aggressive AI investment, Handshake becomes merely “okay.” Trapped in incremental improvements generating modest quarterly gains.
Board member Mamoon Hamid from Kleiner Perkins initially responded with surprise, noting this “was not on my bingo card”. But the board ultimately supported the direction when they saw the financial validation.
The lesson from Handshake is clear. Leverage existing network assets. Time market gaps. Validate the adjacency before full refounding.
Opendoor took a third approach. In Q3 2025, new CEO Kaz Nejatian announced, “We are refounding Opendoor as a software and AI company.”
The timing matters. Leadership transition coincided with strategic refounding. Nejatian brought a software-first philosophy focused on profitability over growth-at-all-costs. He stated: “In my first month as CEO, we’ve made a decisive break from the past—returning to the office, eliminating reliance on consultants, and launching over a dozen AI-powered products and features that demonstrate our renewed velocity.”
This represents a business model transformation. Previously, Opendoor’s model required purchasing homes with company capital. Millions of dollars tied up per transaction. The software-first refounding prioritises technology platform and AI capabilities over real estate transaction volume.
The strategic goal centres on a path to profitability through margin improvement and operational efficiency rather than market share expansion.
The shift moves from transaction-based to platform-based revenue. AI integration handles property valuation and market analysis. Capital intensity reduces through partner networks. Technology infrastructure becomes the primary business asset.
Nejatian continued: “Our business will succeed by building technology that makes selling, buying, and owning a home easier and more joyful—not from charging high spreads and hoping the macro saves us”.
The company set three management objectives. First, transact with more sellers. More volume means more revenue from transactions and ancillary services, plus better leverage of cost base. Second, improve unit economics and resale velocity. Speed and profitability per transaction determine whether they build a sustainable business or remain vulnerable to macro swings. Third, build operating leverage. Scale transactions faster than fixed costs so each additional home adds profit. These margin improvement strategies align with the broader shift toward sustainable unit economics in AI-first companies.
By the end of next year, Opendoor aims to drive to breakeven on a 12-month go-forward basis.
New leadership brings fresh strategic perspective. Nejatian avoided founder attachment to the original business model. Board alignment on profitability priorities created space for transformation.
The cultural changes under new leadership were immediate. Return to office. Cutting consultants. Rapid product launches. These signal a different operating mode.
Q3 2025 financials showed the current state: revenue $915 million, gross profit $66 million (7.2% margin), homes sold 2,568, homes purchased 1,169.
The lesson from Opendoor is combining leadership changes with strategic refounding can work when capital efficiency drives the transformation.
The first three cases examined startup-to-startup refounding. MoneyGram demonstrates how companies with decades of legacy operations can apply the same framework.
Founded in 1940, the company has undergone transformation since going private in 2023, evolving from a traditional remittances-focused player into a fintech company built around its global payments network.
The network-led transformation strategy uses MoneyGram’s global payment infrastructure as foundation for fintech services rather than building from scratch.
CEO Anthony Soohoo articulates the strategic pivot: “Remittances was the old way to think about the business, now the network is our business. You have really limitless possibilities for how you think about it”.
The financial validation is there. Digital transactions now represent one-third of total volume, up from less than half that share in 2022. Cross-border volume increased approximately 8% since privatisation.
Stablecoin integration represents core technical innovation. Blockchain-based digital payments enable low-cost cross-border transactions using existing network relationships.
The company navigated regulatory requirements for cryptocurrency payments while integrating blockchain with legacy systems. Bringing blockchain technology into an 80-plus-year-old infrastructure required navigating decades of accumulated technical decisions, compliance frameworks, and partner integrations.
The transformation represents a fundamental reconceptualisation of MoneyGram’s business model, positioning its global network as the foundation for diverse financial services opportunities.
Network-led refounding means leveraging existing infrastructure as transformation foundation. For MoneyGram, this meant identifying the global payment network accumulated over decades as their core asset. They converted established relationships with banks, regulators, and partners from remittance liabilities into fintech advantages. The balancing act involved maintaining legacy operations while gradually building new digital capabilities on top of that foundation.
MoneyGram’s refounding initiative emphasises digital-first capabilities. Network expansion and leverage. Stablecoin integration strategies. Artificial intelligence implementation. Market repositioning beyond traditional money transfer services.
The lesson from MoneyGram is legacy companies can refound successfully. Network assets provide transformation advantages. A phased digitisation approach works. Established trust and compliance matter.
Each company’s AI approach varies—from product features to operational efficiency—but the catalyst remains consistent. Airtable focuses on product features. Handshake saw business opportunities. Opendoor and MoneyGram emphasise operational efficiency.
Each case study demonstrates strategic asset leverage. Airtable had its product platform. Handshake and MoneyGram had network relationships. Opendoor had market position.
Public announcement strategy proves important. Formal declarations to stakeholders signal commitment, attract talent, reposition market perception, and create accountability pressure. TechCrunch and The New York Times reported on this phenomenon as a notable and growing startup trend in December 2025.
Organisational culture transformation appears universal. Whether return-to-office mandates at Handshake, leadership changes at Opendoor, or institutional renewal at Airtable, cultural reset accompanies strategic shifts.
Financial validation either precedes refounding or confirms it along the way. Handshake’s $100 million ARR proof. MoneyGram’s 33% digital transactions. Opendoor’s path-to-profitability focus. Airtable’s $100 million in free cash flow. These demonstrate measurable traction.
Investors appear to view these announcements as necessary adaptations to technological disruption rather than indicators of organisational distress. Refounding demonstrates adaptability, which investors increasingly favour.
What can you take from these patterns? First, refounding works best when built on existing strengths rather than completely abandoning your foundation. Second, financial metrics matter. Validation either precedes the announcement (Handshake) or guides the transformation (Opendoor’s profitability focus). Third, expect cultural resistance and plan for it. The decision frameworks and criteria can help you systematically evaluate which approach suits your circumstances.
Product-first refounding at Airtable treats architecture as competitive advantage. Adjacency-driven at Handshake uses validated new business to justify refounding. Leadership-transition at Opendoor enables fresh perspective for strategic shift. Network-asset at MoneyGram treats legacy infrastructure as digital foundation.
Each approach fits different circumstances. Product-first works when architectural decisions matter most. Adjacency-driven reduces risk by validating the new business first. Leadership-transition creates space for change without founder attachment. Network-asset leverages what you already have.
Workforce disruption creates potential talent loss. Handshake’s layoffs and return-to-office mandate illustrate this. Customer confusion during positioning changes is real. Execution risk matters because committing publicly creates accountability pressure. Investor sentiment needs management during transformation uncertainty.
The timing consideration matters too. Why did 2025 become the refounding year? AI reached a point where established companies felt competitive pressure. The technology matured enough to enable real transformation. Market conditions created space for bold moves.
Workforce resistance and cultural friction emerged as universal challenges across all four companies. Handshake’s layoffs and return-to-office mandate illustrate the difficult people decisions these companies made.
Research shows return-to-office mandates create significant retention risks. Companies implementing RTO policies experience 13% higher annual turnover, with 46% of hybrid workers indicating they would quit if forced back full-time. Stanford economist Nick Bloom notes: “You’re going to get negative selection. The ones who leave are the ones that can pull an outside offer, who are the better employees”.
Technical execution complexity varied by approach. Airtable’s architectural rebuilding differs from MoneyGram’s legacy system integration. These require different engineering capabilities and timelines.
Timeline ambiguity persists across case studies. Most refoundings were announced in 2025, with outcomes still developing. This creates planning challenges for companies considering similar transformations. How long does transformation take? Six months? Two years? Five years? We don’t have complete answers yet.
Market validation timing creates a chicken-egg problem. Handshake validated its adjacency before refounding with $100 million ARR. But Airtable and Opendoor committed to refounding before full validation. Which approach is right depends on your risk tolerance and competitive situation.
Communication complexity spans multiple stakeholder groups. Employees need different messaging than investors. Customers have different concerns than media. Each group requires tailored messages about transformation rationale and expected outcomes.
Employee morale during transformation announcements takes a hit. Talent retention versus intentional workforce restructuring creates tension. Do you want to keep everyone or is the reduction intentional?
Research shows companies that issued return-to-office mandates saw decreases in employee satisfaction scores. 80% of organisations globally admit they’ve lost talent due to return-to-office mandates.
Maintaining productivity during uncertainty requires clear communication and short-term milestones. Leadership credibility and trust maintenance matters.
Architectural rebuilding versus incremental enhancement was Airtable’s key decision. Legacy system integration challenges mark MoneyGram’s experience. Maintaining service continuity during transformation is non-negotiable.
Resource allocation creates tension between new capabilities and technical debt. Build versus buy decisions come up during refounding. Do you build the new AI capabilities or acquire them?
Handshake’s experience with market timing demonstrates the importance of validating business opportunities before full organisational commitment. Opendoor’s leadership transition shows that fresh perspective can overcome founder attachment to original models. MoneyGram’s legacy system integration reveals that established infrastructure can be an advantage rather than a liability when approached strategically.
These transferable implementation practices apply across different refounding types and company circumstances.
The common thread across all four case studies is that these challenges are real but manageable with proper planning and stakeholder communication. Each company accepted these risks as necessary for transformation rather than as showstoppers.
For a comprehensive overview of the strategic landscape and the frameworks that underpin these transformation decisions, see our refounding overview and frameworks resource.
These case studies naturally raise questions about applying refounding frameworks to your own situation. Here are the most common questions and what the evidence shows.
Airtable (June 2025), Handshake (October 2025), Opendoor (Q3 2025) and MoneyGram (ongoing 2020s transformation) represent publicly documented refounding cases. New York Times and TechCrunch coverage in December 2025 established refounding as a recognised industry trend.
Current case studies provide limited timeline data since most refoundings announced in 2025. Handshake demonstrated rapid AI business growth—eight months to $100 million ARR—before refounding announcement, but full transformation timelines remain in progress across all four companies.
Handshake’s $100 million ARR in new business line and MoneyGram’s 33% digital transaction rate provide concrete validation metrics. Opendoor focuses on path-to-profitability indicators. Airtable metrics remain private but achieved over $100 million in free cash flow.
Airtable CEO Howie Liu discussed the company’s refounding philosophy in June 2025 public statements and media interviews, emphasising that “every software product must be refounded for AI” rather than adding incremental AI features.
Handshake CEO Garrett Lord announced the refounding in October 2025 through company blog post and media interviews, detailing the $0 to $100 million ARR AI business growth and organisational transformation.
Handshake pursued adjacency-driven refounding (validated $100 million AI business before company-wide transformation). Airtable chose product-first refounding (architectural rebuilding before revenue validation). Different risk profiles and strategic rationales.
Scale AI’s acquisition by Meta in June 2025 created market gap and uncertainty in data labelling sector. This provided competitive opening that Handshake exploited by rapidly entering expert network market for AI post-training services.
MoneyGram’s transformation follows the refounding framework by emphasising high stakes and “founding moment” gravity rather than incremental digitalisation, positioning stablecoin integration and fintech services as a fundamental shift in company identity.
Case studies reveal workforce restructuring (Handshake’s layoffs), return-to-office mandates (Handshake’s 5-day policy), leadership transitions (Opendoor’s CEO change), and cultural resets emphasising “startup intensity” as frequent organisational transformation elements.
Kleiner Perkins’ Mamoon Hamid initially responded with surprise to Handshake’s refounding, noting it “was not on my bingo card”, but the board ultimately supported the direction. Investors appear to view these announcements as necessary adaptations to technological disruption rather than indicators of organisational distress.
AI disruption pressure, institutional drift (the Yale research framework), validated new business opportunities (Handshake), competitive positioning requirements (Airtable), capital efficiency needs (Opendoor), and digital transformation imperatives (MoneyGram) emerge as the primary refounding drivers.
MoneyGram’s network-led fintech transformation demonstrates legacy companies (founded 1940) can successfully apply refounding framework by leveraging existing infrastructure assets and relationships as foundation for digital business models.
Managing Organisational Transformation During Startup Refounding and Cultural Change

AI-native startups are deciding in days what takes your team quarters. They hit $1 million in revenue in 11.5 months. Your traditional SaaS company? 15 months. This isn’t because their product is better. It’s because their organisational structure is built for speed.
Your company is 8, 10, maybe 11 years old. Valued in the billions. But you’ve piled on the approval chains, the risk committees, the functional silos. Institutional drift has eaten away at why you started the company in the first place—buried under years of accumulated decisions and the complexity that comes with success. Understanding institutional drift and refounding is essential context for why cultural change is needed during organisational transformation.
This is what refounding looks like: a transformation where the stakes are as high as when you founded the company. Your team expects stability. You need experimentation. They’re set up for predictable execution. You need people who act first and ask permission later.
Which creates some real problems: How do you tell people they’re being laid off without tanking morale? How do you structure severance so you don’t burn every bridge? How do you keep your technical teams feeling safe enough to take risks when the whole org is being turned upside down?
Yale’s three-part refounding framework—purpose, culture, strategy—gives you a systematic way to tackle this.
Refounding resets your culture by putting the stakes back at founding-level importance. You’re asking people to try new things, accept that some experiments will fail, and operate at a completely different pace.
Culture is like an intuitive, silent language that shapes how people behave, what they consider normal, and what they actually do every day. It decides which projects get money, how fast decisions happen, whether teams wait for permission or just do the thing and apologise if it goes wrong.
Success speeds up drift. Your company grew. You added layers. Maybe you bought other businesses. Only 20% of people actually doing the work feel connected to company culture. Compare that to 43% of leaders. That gap is a big problem when you’re trying to transform.
AI-native startups work differently. Flat hierarchies. Small teams that run themselves. Engineers talking directly to customers. Shipping code multiple times a day. Your organisation? Probably has approval chains, risk committees, functional silos, and annual planning cycles that take months. These things made sense during growth. Now they’re slowing you down when you need to move fast.
Employee expectations have to shift from “things stay the same” to “things keep changing.” From predictable to experimental. From being a specialist in one function to working across multiple areas. Edgar Schein stated: “The only thing of real importance that leaders do is to create and manage culture. If you do not manage culture, it manages you.”
Cultural transformation means embedding new ways of working into your governance, your compensation, who you hire, and how you evaluate performance. Not aspirational posters on the wall. Actual operational systems that make the new patterns stick.
Use stakeholder segmentation. Different people need different messages. Technical teams want to know what’s happening with the tech stack, how fast they’re expected to move, and whether their jobs are safe. Leadership wants the strategic thinking and competitive context.
Structure your communication using why-how-what. Start with purpose: explain why you’re refounding. Then describe how you’ll operate differently—the cultural shift. Finally, outline what you’re building—the strategic changes.
Garrett Lord at Handshake shows how transparency works. He acknowledged that letting people go “really, really sucks” while also explaining why it was necessary. AI-native startups were deciding who wins and who loses.
For technical teams, talk about the tech stack directly. You’re adopting AI-native development. New tools. Different frameworks. Faster iteration.
Velocity expectations are changing. You’re shortening the time from decision to deployment. Cutting approval layers. Giving product teams more power. Moving to continuous deployment.
On job security, be straight. Refounding targets organisational patterns, not individual performance. If someone’s role is affected, you’ll tell them directly and give them proper severance.
Make your announcement before rumours start but after you’ve got leadership aligned. Never let gossip get ahead of your official message.
The sequence: get leadership aligned → get board approval → tell affected individuals → company-wide announcement → team-specific sessions within 48 hours.
Don’t use corporate-speak that hides what you actually mean. Don’t be prematurely optimistic in a way that feels fake. And don’t send one message to everyone that addresses nobody’s actual concerns.
People resist operational changes less when they own them. Create ways for people to talk back to you, not just sit and listen.
When refounding means reducing headcount, how you structure severance becomes part of your message about who you are as a company.
You’re balancing legal obligations, financial constraints, and how the people staying feel about what you’re doing. The standard approach: 2-4 months cash severance, accelerated equity vesting for time served, and extended healthcare.
Cash severance is typically 1-2 weeks per year of service. Senior people might need better packages.
For equity vesting acceleration, there are three common approaches:
Full acceleration for time served is the fairest. Someone who worked three years of a four-year vesting schedule gets their three years vested right away.
Extended exercise window is another option. Instead of the usual 90 days to exercise options, extend it to 2-3 years. Costs you nothing upfront but provides real value.
No acceleration with the standard 90-day window is technically legal but kills your reputation.
Healthcare continuation means COBRA minimum plus company-subsidised extensions. Three to six months of subsidised coverage is common.
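The package components above can be sketched as a back-of-envelope calculation. All figures here—2 weeks of pay per year of service, full acceleration on a 4-year vesting schedule, 4 months of subsidised COBRA—are illustrative assumptions drawn from the ranges mentioned, not legal or financial advice.

```python
def severance_package(years_of_service, monthly_salary,
                      total_option_shares, vesting_years=4,
                      weeks_per_year=2, cobra_months=4):
    """Back-of-envelope severance sketch using the ranges above.

    All defaults are illustrative assumptions:
    - cash: 1-2 weeks of pay per year of service (2 here)
    - equity: full acceleration for time served on a 4-year schedule
    - healthcare: 3-6 months of subsidised COBRA (4 here)
    """
    weekly_salary = monthly_salary * 12 / 52
    cash = weekly_salary * weeks_per_year * years_of_service
    # Full acceleration for time served: vest the pro-rata share.
    vested_fraction = min(years_of_service / vesting_years, 1.0)
    vested_shares = int(total_option_shares * vested_fraction)
    return {
        "cash_severance": round(cash, 2),
        "vested_shares": vested_shares,
        "subsidised_cobra_months": cobra_months,
    }

# Hypothetical employee: 3 of 4 vesting years served.
pkg = severance_package(years_of_service=3, monthly_salary=10_000,
                        total_option_shares=8_000)
print(pkg)  # vested_shares: 6000 -- three-quarters of the grant
```

The point of modelling it at all: the numbers are small relative to the reputational cost of getting them wrong, and everyone staying will do this arithmetic for themselves.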
Legal framework stuff you need to watch: WARN Act 60-day notice requirements, age discrimination protections, and documented selection criteria.
Outplacement services—help with resumes, career coaching, job search—represent goodwill investment.
How you treat people leaving affects how people staying feel about you. Severance generosity signals company values to everyone watching.
Handshake’s 15% workforce reduction shows how this works. Affected employees got extended equity exercise windows, removed one-year equity cliffs, and better severance packages.
The three-part framework from Yale research structures refounding around purpose, culture, and strategy. This framework is central to our comprehensive refounding guide, providing a systematic approach to organisational transformation. Purpose is why you exist. Culture is how your values show up in what people do every day. Strategy is what keeps evolving while staying grounded in why you’re here.
Start with purpose rediscovery. Work out what organisational need you uniquely serve that goes beyond your current products. Not what you build. Why you exist.
Airtable’s CEO Howie Liu said they picked “the language of founding because the stakes feel the same.”
Cultural transformation comes next—embedding purpose-driven values into how things actually work. Governance structures. Compensation frameworks. Management processes. Hiring criteria. Leaders embed character through operational systems that make identity stick.
Strategic evolution becomes continuous adaptation grounded in unchanging foundational purpose. What you do evolves. Why you exist stays stable.
How to sequence it: Start with leadership alignment on purpose. Then cascade cultural expectations through your managers. Finally, validate strategy alignment every quarter.
Airtable’s organisational restructuring shows how implementation works. They created fast-thinking and slow-thinking team structures. Fast-thinking teams focus on rapid experimentation. Slow-thinking teams maintain platform infrastructure and reliability.
Common mistakes: treating the strategic decision-making frameworks as a one-time thing, skipping cultural embedding to jump straight to strategic pivots, and leadership not modelling what they’re asking for. Culture change doesn’t happen on paper. A critical mass of people have to actually change how they think, what they value, and how they work with each other.
Track cultural transformation metrics, employee engagement scores, and how fast decisions get made.
Organisational agility is your capacity to quickly sense and respond to market changes through flexible structures and empowered teams. AI-native startups hit $1 million revenue in 11.5 months versus 15 months for traditional SaaS companies. That speed gap represents an organisational structure advantage, not a product advantage.
Legacy companies pile on decision-making layers, approval requirements, and risk-averse patterns that slow everything down. Refounding aims to tear them out.
They use generalist employees who adapt quickly across functions while traditional siloed specialist teams move too slowly. When technical people talk directly to customers, you get faster feedback loops.
Cultural enablers of agility: psychological safety for experimentation, accepting that smart failures are part of learning, and incorporating feedback quickly. Structural changes: fewer approval layers, product teams with real power, continuous deployment.
How to measure it: decision-to-deployment cycle time, how fast you’re running experiments, how quickly you respond to customer feedback. Speed defines success.
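One of these measures can be made concrete. Decision-to-deployment cycle time is just the gap between when a change was agreed and when it shipped; the sketch below computes a median baseline from timestamp pairs. The sample data is invented for illustration.

```python
from datetime import datetime
from statistics import median

# Invented sample data: (decision timestamp, deployment timestamp) pairs.
changes = [
    (datetime(2025, 6, 2, 9, 0), datetime(2025, 6, 4, 16, 0)),
    (datetime(2025, 6, 5, 10, 0), datetime(2025, 6, 5, 15, 30)),
    (datetime(2025, 6, 9, 11, 0), datetime(2025, 6, 13, 9, 0)),
]

def median_cycle_time_hours(pairs):
    """Median hours from decision to deployment -- the agility
    baseline to establish before refounding and track quarterly."""
    return median((deploy - decide).total_seconds() / 3600
                  for decide, deploy in pairs)

print(f"{median_cycle_time_hours(changes):.1f} hours")  # → 55.0 hours
```

Median rather than mean keeps one stuck change from masking improvement across the rest; track the same statistic each quarter against the pre-refounding baseline.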
Maintain psychological safety by explicitly admitting what you don’t know, acknowledging concerns without dismissing them, and creating structured ways for people to give you feedback.
Psychological safety is the belief that taking interpersonal risks feels safe. Rapid change threatens it because people worry about losing their jobs, their skills becoming obsolete, and making political mistakes when power structures shift.
The IMF projects AI will affect 30% of jobs in advanced economies. Your engineers know this. Pretending the threat doesn’t exist destroys your credibility.
Show vulnerability by admitting what you don’t know. “Here’s what we know. Here’s what we’re figuring out. Here’s what’s still uncertain.” Leaders can create psychological safety by sharing their own learning moments.
Celebrate smart failures during experimentation. Experiments that tested reasonable ideas, had proper controls, and generated actual learning.
Treat resistance as useful feedback rather than a problem you need to eliminate. Someone pushing back might see risks you missed.
Proactive communication means regular town halls, small group listening sessions, and anonymous feedback channels. Not one announcement. Ongoing conversation.
Structural reinforcement matters more than words. Separate experimentation failure from performance reviews. Publicly acknowledge contributions with things like “Thanks for catching that; let’s dig into it.”
Technical leaders often struggle with ambiguity. Give examples of acceptable risk. Define boundaries for experimentation. Clarify who makes which decisions.
Handshake is an 11-year-old recruiting platform valued at $3.5 billion. CEO Garrett Lord cut the workforce by 15%—about 100 people—while launching an AI division that hit $100 million in annualised revenue in eight months. This case study is one of several organisational transformation examples that demonstrate the practical challenges of cultural change during refounding.
Lord spotted a market gap in data labelling for AI model training, using Handshake’s network of 500,000 PhDs and 3 million Master’s degree holders. The AI division went from 15 to 150 employees in months.
Combined annual recurring revenue is $200 million. Projected year-end: $300 million. 2026 forecast: “high hundreds of millions.”
Lord communicated the shift as competitive necessity. Winners and losers are being defined right now. Without aggressive AI investment, Handshake becomes just “okay,” stuck making incremental improvements.
The refounding included cultural mandates. Mandatory five-day office week to increase collaboration intensity. Explicit expectations for experimentation.
Board member Mamoon Hamid from Kleiner Perkins was initially surprised but ultimately backed the direction.
What this means for your transformation: Being transparent about how hard this is matters. Generous severance signals your values. Cultural mandates require consistent enforcement.
Cultural transformation typically takes 12-18 months for new patterns to become embedded habits. You’ll see initial behaviour changes within 3-6 months of consistent leadership modelling.
Resistance usually peaks at 2-3 months when the novelty wears off but the new patterns aren’t comfortable yet.
Initial phase (0-3 months): Announcement shock, active resistance, leadership modelling intensively. This phase feels chaotic. Some people leave.
Adaptation phase (3-6 months): Tentative adoption of new behaviours, early wins get celebrated. Teams start experimenting. Momentum builds.
Integration phase (6-12 months): New patterns become comfortable, metrics show measurable improvement. The transformation feels less forced.
Embedding phase (12-18 months): Cultural self-reinforcement emerges, practices feel “normal.” Teams correct each other without management intervention.
What speeds it up: Consistent leadership modelling, quick wins celebrated publicly, replacing resistant managers, reward systems aligned with new culture.
What slows it down: Inconsistent enforcement, leadership contradictions, reward systems misaligned with the culture you say you want.
How to measure it: experimentation velocity, decision-making speed, employee engagement, voluntary turnover patterns. Establish baseline measurements. Track quarterly. Expect measurable improvement by 6-9 months.
Tell people upfront this takes multiple quarters. Celebrate incremental progress. Don’t declare victory too early.
Refounding elevates stakes to founding-level importance by addressing institutional drift at its foundations. Refounding restores lost foundational purpose while pursuing new strategies; pivoting corrects a wrong strategic direction. Pivots are directional corrections. Refoundings are identity restorations that need cultural transformation, not just strategic adjustment.
Airtable’s refounding focused on restructuring without immediate workforce reduction. But layoffs commonly happen during refounding when your existing workforce lacks skills for the new direction or leadership uses workforce change to signal how serious the transformation is. Handshake’s 15% reduction shows workforce alignment with AI-focused strategy.
Use Yale’s diagnostic questions: Does leadership struggle explaining enduring purpose beyond current products? Do employees describe culture as “lost” compared to the founding era? Are decision-making processes increasingly bureaucratic? Has success complexity distanced the organisation from core identity? Multiple “yes” answers suggest you need to refound.
Acknowledge legitimate concerns while explaining the strategic thinking: increased collaboration intensity, cultural transformation visibility, competitive urgency. Offer a transition period (1-2 months). Commit to reassessing based on measurable outcomes after 6 months. Replace resistant senior engineers unwilling to adapt. Inconsistent enforcement kills transformation credibility.
Track leading indicators: experimentation velocity, decision-making cycle time, employee engagement scores, voluntary turnover patterns, cross-functional collaboration frequency, customer feedback incorporation speed. Establish baseline measurements. Track quarterly. Employees strongly connected to culture are 3.7 times as likely to be engaged. Expect measurable improvement by 6-9 months.
Communicate strategic refounding rationale before layoff decisions when possible. But legal constraints sometimes require simultaneous announcement. Never let rumours get ahead of official communication. Sequence: leadership alignment → board approval → affected individual notifications → company-wide announcement → team-specific follow-up within 48 hours.
Standard approaches: accelerated vesting for time served (fairest approach), extended exercise window (2-3 years increasingly common), or no acceleration with standard 90-day period (damages reputation). Handshake provided extended equity exercise windows and removed one-year equity cliffs. Generous severance maintains relationships and signals company values.
AI-native startups design for speed: flat hierarchies (3-4 layers maximum), small autonomous teams (two-pizza rule), product-engineering integration, continuous deployment (multiple daily releases), customer-facing technical roles, bias for action over consensus. Legacy companies accumulate decision layers and approval requirements that AI-native organisations intentionally avoid.
Successful refounding needs CEO-level ownership because cultural transformation demands consistent modelling from visible leadership, resource reallocation authority only CEOs have, and board communication establishing transformation as strategic priority. You can advocate for refounding and lead technical organisation transformation, but company-wide cultural change needs CEO championship. Handshake’s Garrett Lord and Airtable’s Howie Liu personally drove their refounding initiatives.
Diagnose failure causes: inconsistent leadership modelling, misaligned incentive structures, insufficient structural changes, declaring victory too early. Fix root causes through leadership changes, incentive realignment, structural adjustments. Consider whether your initial refounding premise was correct—institutional drift diagnosis may have been wrong, suggesting incremental improvement is enough.
Airtable’s model separates teams: Fast-thinking teams focus on rapid experimentation with higher failure tolerance and weekly sprints. Slow-thinking teams maintain infrastructure, reliability, security with quarterly roadmaps and lower failure tolerance. Teams have clear swim lanes and distinct success metrics—both necessary for sustainable innovation.
Balanced approach: Hire strategic external talent bringing AI-native startup experience (typically 20-30% new hires in key roles). Simultaneously invest in existing team development through training and role expansions—preserving institutional knowledge and signalling commitment. Replacing entire teams signals distrust and triggers talent exodus. Keeping everyone unchanged risks insufficient transformation momentum.
Agentic AI Architecture and the Semantic Gap Challenge in Data-Centric Systems

You’ve deployed AI across your enterprise. Sales has it. Support has it. Product has it. Ask the same question from each department and you get three completely different answers. Welcome to the semantic gap—where your AI burns through tokens trying to figure out which of your conflicting data definitions to trust.
The numbers don’t lie. Research shows only 67% semantic accuracy in typical enterprise AI deployments. Your AI system is chewing through 13,281 tokens per decision, running in loops trying to resolve the ambiguity you’ve baked into your data infrastructure. That’s not just an accuracy problem—that’s money leaking out of your infrastructure budget with every query.
So what’s the fix? Data-centric architecture. That means semantic layers, real-time data flow, and hierarchical multi-agent systems that actually address the semantic gap instead of pretending it doesn’t exist. This article covers the technical challenges, the architectural solutions, the implementation frameworks, and the build-versus-buy decisions—all within the broader context of AI-driven business transformation reshaping competitive landscapes.
Let’s get into it.
The semantic gap is the translation problem between what people mean when they ask questions and what your systems think they mean. It shows up as inconsistent AI responses when different departments use conflicting definitions for the same terms. And it undermines trust in your autonomous AI systems, which limits how much of your enterprise will actually adopt them.
Here’s how it plays out in real life. Your sales team defines “customer value” by revenue potential. Your support team defines it by satisfaction scores. Your product team defines it by engagement metrics. So your AI agent gives three different answers to the same question depending on which context it’s running in—not because the AI is broken, but because you’ve got three conflicting definitions living in your systems.
The technical definition is straightforward—it’s the disconnect between human intent expressed in natural language and the precise technical configurations your AI needs for consistent execution. But the implications ripple through your entire AI deployment.
Traditional approaches don’t work. They either require structured policy languages that exclude non-technical operators, or they rely on rule-based systems brittle to linguistic variation. Most organisations discover there’s a gap between their current data infrastructure and what Agentic AI requires—and that their existing semantic layers were not designed for this transition.
The root causes? Data fragmentation, inconsistent ontologies, and workflow-centric architectures that prioritise process over data consistency. When each application maintains its own data definitions and business logic, semantic conflicts are inevitable. Check out startup refounding case studies for technical implementation examples addressing these challenges.
The semantic gap forces your AI into ReAct iteration loops that consume 13,281 tokens per decision as it tries to self-correct. Department-specific inconsistencies destroy user trust. Infrastructure costs balloon through repeated API calls. And adoption of agentic AI gets blocked because no one believes the answers.
This accuracy degradation comes from enterprise semantic inconsistency in your data infrastructure, not from limitations in your AI models. Controlled environments achieve 95%+ accuracy, but throw that AI into your enterprise semantic complexity and it drops to 67%. Your AI is burning tokens trying to resolve ambiguity that shouldn’t exist in the first place.
Token consumption economics show up in your infrastructure budget as line items you can’t ignore. At current API pricing, this translates to measurable cost increases that compound as usage scales. And your teams stop using the system because they can’t rely on it. Inconsistent answers across departments lead to abandonment of AI tools—you’ve seen it happen.
Scaling limitations follow close behind—human oversight requirements prevent autonomous operation at scale. AI-native competitors with semantic consistency gained their advantage by building data-centric architecture from day one. When you’re reviewing infrastructure costs and margins, remember that API pricing impacts extend well beyond the obvious per-token charges.
Workflow-centric architectures prioritise application processes with data fragmented across systems. Data-centric architectures prioritise unified data accessibility with applications pulling from a shared semantic layer.
Workflow-centric causes semantic gaps through inconsistent definitions per application. Data-centric establishes a single source of truth enabling consistent AI interpretations across your entire enterprise. The migration timeline? 18+ months for a comprehensive transformation.
AI data architecture is an integrated framework that governs how data is ingested, processed, stored, and managed to support AI applications. Unlike traditional data systems designed mainly for historical reporting, AI data architecture needs to support real-time and batch data processing.
Workflow-centric characteristics include application silos, data duplicated per workflow, definitions embedded in application logic, and batch data updates. Data-centric characteristics include a unified semantic layer, real-time data flow, definitions in a centralised ontology, and event-driven updates.
Workflow architectures create definition conflicts by design. Data architectures enforce consistency through centralised semantic layers. Organisations that try to deploy advanced AI agents without first cleaning up their data and processes are taking a shortcut that won’t get them to the desired state. Have a look at data flywheel case studies for migration examples.
Semantic layers provide unified data definitions that prevent conflicting AI interpretations. They act as a single source of truth for business logic and metrics across all your systems. Your AI agents access consistent data regardless of department or context. Vendor solutions can be implemented in 3-6 months; DIY approaches require 18+ months.
The technical function is an abstraction layer translating a unified ontology to your underlying heterogeneous data sources. Maturity Stage 1 (Chaos) has no standardisation—each application defines its own terms and AI semantic gaps are inevitable. Stage 2 (Islands) has department-specific consistency but cross-department conflicts persist, limiting agentic AI capability. Stage 3 (Intelligence) has enterprise-wide semantic consistency, making agentic AI deployable with confidence.
You evaluate your current maturity using definition consistency metrics, cross-department data reconciliation needs, and governance enforcement capability. Test your AI systems with identical questions from different department contexts—inconsistent answers indicate you’re at Stage 1 or 2.
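The consistency test described above can be sketched as a small probe. This is an illustrative sketch, not a real product API: `ask_ai` is a hypothetical stand-in for your enterprise AI, and the toy definitions mirror the sales/support/product conflict from earlier in this guide.

```python
# Hypothetical maturity probe: ask the same question under different
# department contexts and flag semantic inconsistency (a Stage 1/2 signal).

def probe_consistency(ask_ai, question, contexts):
    """Return (consistent?, answers-per-context) for one question."""
    answers = {ctx: ask_ai(question, ctx) for ctx in contexts}
    consistent = len(set(answers.values())) == 1
    return consistent, answers

# Toy stand-in for an enterprise AI with conflicting definitions:
def toy_ai(question, context):
    definitions = {
        "sales": "revenue potential",
        "support": "satisfaction score",
        "product": "engagement metrics",
    }
    return f"high-value customer = {definitions[context]}"

consistent, answers = probe_consistency(
    toy_ai, "What defines a high-value customer?",
    ["sales", "support", "product"],
)
# Three different answers from one question: Stage 1 or 2, not Stage 3.
```

At Stage 3, the same probe returns a single distinct answer across every context.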
Implementation approaches split between vendor and DIY. Vendor platform solutions like AtScale deliver faster timelines with higher cost. DIY approaches have longer timelines, lower ongoing cost, and maximum control. Your build-versus-buy decisions connect to the economics of custom versus API models.
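The abstraction-layer role of a semantic layer can be sketched in a few lines. Everything here is an illustrative assumption, not a specific vendor’s API: one canonical definition per business term, bound to each underlying source’s native query.

```python
# Minimal semantic-layer sketch: canonical terms with per-source bindings.
# Source names, fields, and thresholds are illustrative assumptions.

ONTOLOGY = {
    "high_value_customer": {
        "definition": "trailing-12-month revenue >= 50000",
        "bindings": {
            "crm": "SELECT id FROM accounts WHERE ttm_revenue >= 50000",
            "warehouse": "SELECT customer_id FROM revenue_t12m WHERE amount >= 50000",
        },
    },
}

def resolve(term, source):
    """Translate a canonical business term into the query for one data source."""
    return ONTOLOGY[term]["bindings"][source]

crm_query = resolve("high_value_customer", "crm")
```

Every application and agent asks the layer rather than the raw source, so “high_value_customer” means the same thing everywhere.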
A data flywheel is a continuous feedback loop capturing AI outputs to retrain your models. It creates compounding improvement—more usage generates more training data, which generates better models. NVIDIA NeMo and Arize platforms provide enterprise implementation infrastructure.
The five-stage process works like this: (1) Capture AI outputs and user feedback, (2) Analyse performance and identify improvement opportunities, (3) Retrain models with new data, (4) Deploy improved models, (5) Repeat continuously. Production data refines your models, better models generate better outputs, and better outputs create more valuable training data.
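The five stages above can be expressed as a loop skeleton. This is a structural sketch under stated assumptions: the stage functions are placeholders, and the toy “model” is just a version counter that improves each pass, not a real training pipeline.

```python
# Skeleton of the five-stage flywheel; stage functions are placeholders.

def run_flywheel_cycle(model, capture, analyse, retrain, deploy):
    outputs = capture(model)              # 1. capture AI outputs + user feedback
    findings = analyse(outputs)           # 2. identify improvement opportunities
    candidate = retrain(model, findings)  # 3. retrain with new data
    return deploy(candidate)              # 4. deploy improved model

# Toy cycle: each pass bumps the version and nudges accuracy upward.
model = {"version": 1, "accuracy": 0.67}
for _ in range(3):  # 5. repeat continuously
    model = run_flywheel_cycle(
        model,
        capture=lambda m: {"feedback": "production examples"},
        analyse=lambda o: {"gap": 0.05},
        retrain=lambda m, f: {"version": m["version"] + 1,
                              "accuracy": min(0.95, m["accuracy"] + f["gap"])},
        deploy=lambda m: m,
    )
```

The compounding effect lives in the loop: each cycle’s outputs become the next cycle’s training data.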
The economic rationale is compelling. API pricing grows linearly with usage. Custom models have upfront investment but declining per-use cost. Traditional B2B SaaS enjoys margins of 80-90%, but AI-first companies typically operate at 50-65% gross margin due to inference and infrastructure costs.
NVIDIA NeMo provides modular microservices including Curator, Customiser, Evaluator, Guardrails, and Retriever components. Arize AX provides trace collection, online evaluations, human annotation workflows, monitoring, and experimentation features. Integration enables your organisation to transform static models into continuously improving systems, reducing iteration cycles from weeks to hours. Infrastructure requirements link to data flywheel case studies.
Migration involves semantic layer implementation, real-time data infrastructure, and organisational change management. Four stages: (1) Assess current state, (2) Implement semantic layer, (3) Establish real-time data flow, (4) Migrate applications iteratively.
Timeline is 18+ months for the DIY approach, 3-6 months for a vendor semantic layer foundation. Prerequisites include a skills inventory (data engineering, MLOps, AI expertise), executive sponsorship, and a governance framework.
Assessment phases evaluate semantic layer maturity, inventory data sources, identify definition conflicts, and assess organisational readiness. Your organisation must first ensure comprehensive digital representation of operations extending beyond isolated connectivity to creating an up-to-date digital model of the entire enterprise.
Semantic layer implementation involves build-or-buy decisions, ontology design, definition governance establishment, and initial system integration. Your choice between vendor and DIY approaches depends on available engineering resources, timeline constraints, and budget allocation.
Real-time data flow establishment involves event-driven architecture implementation, MQTT or similar protocol deployment, and Unified Namespace establishment. MQTT brokers serve as the central nervous system of data infrastructure. HiveMQ platform is one option for implementation.
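The Unified Namespace idea can be illustrated with an in-memory stand-in: producers publish current state to hierarchical MQTT-style topics, and consumers subscribe by prefix. A real deployment would use an MQTT broker (HiveMQ, for example) and a client library; the topic names and prefix-matching shortcut here are illustrative assumptions, not MQTT wildcard semantics.

```python
# Illustrative in-memory Unified Namespace with retained "current state".

class MiniNamespace:
    def __init__(self):
        self.subs = []        # (topic_prefix, callback)
        self.retained = {}    # topic -> last published value

    def subscribe(self, prefix, callback):
        self.subs.append((prefix, callback))

    def publish(self, topic, payload):
        self.retained[topic] = payload  # retained value = current state
        for prefix, cb in self.subs:
            if topic.startswith(prefix):
                cb(topic, payload)

uns = MiniNamespace()
seen = []
uns.subscribe("acme/billing/", lambda t, p: seen.append((t, p)))
uns.publish("acme/billing/mrr", 120_000)
uns.publish("acme/support/queue_depth", 7)  # not delivered to the billing sub
```

Any consumer joining later can read `uns.retained` for the current value of every topic, which is the property that makes a UNS a real-time semantic data fabric rather than a message log.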
Application migration commonly uses prioritisation frameworks—high-value AI use cases first. Parallel operation of old and new systems maintains business continuity. Iterative validation and adjustment catches issues early.
Organisational readiness assessment evaluates skills inventory, cultural prerequisites, governance capacity, and change management timeline. Fast track organisations complete in 18-24 months with strong existing data infrastructure. Standard implementation takes 24-30 months with moderate data maturity. Complex transformation requires 30-36+ months with legacy system integration challenges. These implementation frameworks connect to the broader strategic context of AI-driven business transformation.
Real-time data enables your AI agents to access current information for autonomous decisions. Batch data creates staleness that causes inaccurate autonomous actions. Event-driven architecture with MQTT protocols provides millisecond-latency data updates. The Unified Namespace concept from industrial IoT establishes a real-time semantic data fabric. This is critical for agentic AI deployment in production environments.
Batch data is hours or days stale. Real-time data is millisecond-current. Current data enables confident autonomous action. Stale data undermines autonomous decision quality, forcing fallback to human oversight that defeats the entire purpose of automation.
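The freshness argument can be made concrete with a staleness guard: act autonomously only when the observation is fresher than a tolerance, otherwise fall back to a human. The 500 ms budget and function names are illustrative assumptions.

```python
# Sketch of a staleness guard for autonomous action.
import time

STALENESS_BUDGET_S = 0.5  # illustrative tolerance for autonomous action

def decide(observation_ts, now=None):
    """Act only on fresh data; escalate stale observations to a human."""
    now = time.time() if now is None else now
    age = now - observation_ts
    return "act_autonomously" if age <= STALENESS_BUDGET_S else "escalate_to_human"

# Millisecond-current, event-driven data -> confident autonomous action:
fresh = decide(observation_ts=1000.0, now=1000.1)
# Batch data an hour stale -> forced human fallback:
stale = decide(observation_ts=1000.0, now=1000.0 + 3600)
```

The fallback branch is exactly the “human oversight that defeats the purpose of automation” described above, made explicit in code.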
With agentic AI, data’s role shifts from learning patterns and feeding predictions to becoming a continuous fuel stream that powers autonomous, goal-driven action in dynamic environments.
Use case examples include autonomous customer service agents requiring current account state, supply chain AI needing real-time inventory, and pricing AI using current market conditions. For a SaaS platform, real-time data flow enables usage analytics agents to monitor subscription metrics, detect anomalies in user behaviour patterns, and trigger retention workflows based on engagement signals. The sooner an AI agent can observe change and act on it, the greater the impact.
Infrastructure implementation includes MQTT broker deployment, event schema design, latency optimisation, and semantic layer integration for consistent event interpretation. See the technical implementation examples for worked detail.
Horizontal platforms serve multiple industries with generalised capabilities and higher customer acquisition costs. Vertical agents specialise in specific industries with deep domain optimisation and lower CAC through targeted positioning.
Vertical agents achieve better margins through fine-tuned models reducing API costs, domain-specific semantic layers, and focused go-to-market efficiency. Horizontal platforms benefit from broader addressable market, platform network effects, and cross-industry data flywheel.
Horizontal platform characteristics include multi-industry applicability, generic semantic layer, API-dependent models, broad marketing required, and platform economies of scale. Vertical agent characteristics include industry-specific ontologies, fine-tuned domain models, narrow but deep market positioning, and specialised semantic layers.
Traditional B2B SaaS enjoys margins of 80-90%, but AI-first companies typically operate at 50-65% gross margin due to inference and infrastructure costs. Early-stage AI startups have reported margins as low as 25%, sometimes even negative initially. Four primary cost drivers include model development, inference costs, infrastructure expenses, and third-party dependencies.
Margin analysis for vertical advantages includes infrastructure costs—custom models are cheaper long-term than APIs. Customer acquisition benefits from targeted positioning. Pricing power increases from domain expertise premium. Vertical AI is changing startup physics in the enterprise software landscape.
Margin analysis for horizontal advantages includes broader market size, development efficiency (one platform serving many industries), and network effects from cross-industry insights.
Data flywheel differences matter. Vertical flywheel improves domain-specific accuracy faster. Horizontal flywheel benefits from diverse data but slower domain optimisation. Infrastructure cost models show custom fine-tuned models have upfront investment but declining per-use cost versus API linear cost growth. See economics comparison for detailed analysis.
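The API-versus-custom cost curves can be compared with a back-of-envelope crossover calculation. All the dollar figures in the example are illustrative assumptions.

```python
# Crossover between linear API cost and custom-model upfront-plus-lower-marginal cost.

def crossover_month(api_cost_per_month, custom_upfront, custom_cost_per_month):
    """First month in which cumulative custom cost drops below cumulative API cost."""
    monthly_saving = api_cost_per_month - custom_cost_per_month
    if monthly_saving <= 0:
        return None  # custom model never pays back
    month = 1
    while month * api_cost_per_month <= custom_upfront + month * custom_cost_per_month:
        month += 1
    return month

# e.g. $60K/month on APIs vs $500K upfront + $20K/month to run a custom model:
breakeven = crossover_month(60_000, 500_000, 20_000)   # payback just past one year
never = crossover_month(10_000, 500_000, 20_000)       # custom is never cheaper
```

Low-volume usage keeps the API side of the curve cheaper indefinitely, which is the quantitative version of the “vertical versus horizontal” flywheel trade-off above.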
Strategic decision factors include market positioning, available resources, domain expertise depth, and time-to-market urgency. 92% of AI software companies now employ mixed pricing models combining subscriptions with consumption fees. Economic analysis connects to outcomes-based pricing and margin economics.
Semantic gap is the disconnect between how humans describe what they want in natural language and the precise technical configurations your AI systems need to deliver it. For example, when sales defines “high-value customer” by revenue potential but support defines it by satisfaction scores, your AI agents give inconsistent answers depending on context. This translation problem undermines enterprise AI reliability.
Assess using the three-stage maturity model. Stage 1 (Chaos) has no standardisation—agentic AI isn’t viable. Stage 2 (Islands) has department-specific consistency but cross-department conflicts—limited agentic capability. Stage 3 (Intelligence) has enterprise-wide semantic consistency—agentic AI is deployable. Evaluate by testing whether your AI queries return consistent answers across all departments and contexts.
DIY approach using open-source tools requires 18+ months and deep MLOps expertise, but offers maximum control and lower long-term costs. Vendor platforms like NVIDIA NeMo with Arize can be implemented in 3-6 months with proven patterns but higher ongoing costs. Choose based on available ML engineering talent, time-to-market urgency, and budget for vendor licences versus infrastructure investment.
Inconsistency stems from semantic gaps where each department maintains conflicting data definitions. Sales, support, and product teams define the same terms differently, causing your AI to provide department-specific answers. The solution requires a unified semantic layer providing consistent definitions across all systems, implemented through data-centric architecture transformation.
Start with APIs for rapid prototyping and low initial cost. Transition to custom fine-tuned models when usage volume makes API costs exceed custom infrastructure investment (typically 12-24 months), domain-specific accuracy requirements exceed general-purpose models, or your data flywheel generates sufficient training data for meaningful improvement. Breakeven analysis depends on your usage volume and accuracy requirements.
Three parallel streams are required. (1) Technical—implement semantic layer and real-time data infrastructure (12-18 months). (2) Organisational—change management for data-driven culture adoption (12-18 months). (3) Governance—establish semantic consistency enforcement mechanisms (6-12 months). Prerequisites include data engineering talent, executive sponsorship, and phased migration strategy maintaining business continuity.
Vendor platforms (NVIDIA NeMo + Arize) implement foundational infrastructure in 3-6 months. DIY approach requires 12-18 months for feedback capture systems, model retraining pipelines, evaluation analytics, and deployment automation. Add 6-12 months for organisational learning cycles before flywheel momentum becomes self-sustaining. Timeline depends on existing MLOps maturity, available ML engineering resources, and domain complexity.
Required for autonomous decision-making where agents act independently without human approval. Batch data (hours or days stale) creates significant accuracy risk for autonomous actions. Real-time streaming (millisecond latency) enables confident autonomous operation. Assess by asking three questions: do your agents make autonomous decisions, what do stale-data errors cost, and can your business tolerate batch-update delays? If the answers point to autonomous operation, real-time infrastructure is mandatory.
Stage 1 (Chaos) has each application defining its own terms, no standardisation, and inconsistent AI answers guaranteed. Stage 2 (Islands) has department-level consistency but cross-department conflicts persist, limiting AI reliability. Stage 3 (Intelligence) has enterprise-wide unified definitions, consistent AI answers, and agentic deployment is viable. Assess by querying your AI systems with identical questions from different department contexts—inconsistent answers indicate you’re at Stage 1 or 2.
Hierarchical architectures use specialised agents for specific domains coordinated by master agents. Each specialist agent operates within a consistent domain-specific semantic layer, reducing ambiguity. ArXiv research shows 67% accuracy improvement versus monolithic models. Master agents handle cross-domain coordination using a unified semantic layer. Orchestration platforms like LangChain, CrewAI, and LangGraph implement these patterns.
Vendor semantic layer (AtScale) has 3-6 month implementation. ROI depends on reduced AI error costs and autonomous operation value—typically 12-24 months to positive ROI. DIY semantic layer has 18+ month implementation, lower ongoing cost, ROI 24-36 months but better long-term economics. ROI accelerates with higher AI deployment volume, more autonomous operations enabled, and reduced manual oversight requirements.
GraphRAG maintains semantic relationships between entities, providing better accuracy for complex enterprise contexts where relationships matter (org charts, process flows, regulatory connections). Vector similarity search is faster and simpler but loses relationship context. Choose GraphRAG when accuracy requirements exceed 90%, relationship context is necessary for correct answers, and Fluree or similar platform investment is justified. Use vector search for rapid prototyping, lower accuracy tolerance, and simpler implementation requirements.
Outcomes-Based Pricing and AI-First SaaS Gross Margin Economics Explained

You’re probably thinking about adding AI to your product. Maybe you’ve already shipped something. And you’re looking at your financials wondering why your margins look terrible compared to the rest of the SaaS world.
This article is part of our comprehensive guide to startup refounding and AI-driven business model transformation, where we explore the economic realities companies face when transitioning to AI-first business models.
Here’s what’s happening. Traditional SaaS enjoys gross margins of 80-90%. Your AI-first product? You’re running at 25-60% if you’re lucky. Some companies started negative.
The difference comes down to one thing: every AI model invocation costs real money. Unlike traditional software where serving another thousand users barely moves the needle on infrastructure costs, AI burns cash with every request.
GitHub Copilot learnt this the hard way. They were losing $20-80 per user monthly while charging $10/month flat rate. That’s the kind of margin compression that kills companies. You can read the full GitHub Copilot margin case study for specific details on how they addressed this.
But there are three ways out. Infrastructure optimisation gets you routing simple requests to cheaper models. Pricing model evolution moves you from flat rates to hybrid and outcomes-based approaches. Product bundling strategies let you capture more value per customer.
This guide walks through the economics, compares the pricing approaches, and gives you frameworks for making the transition. We’ll look at what companies like Replit, Cursor, and Intercom actually did to fix their margin problems.
Traditional SaaS scales beautifully. Once you’ve built the software, serving another hundred thousand users costs almost nothing. Maybe some more hosting capacity. Some customer support. That’s why mature SaaS companies run at 80-90% gross margins.
AI products work differently. Each user action triggers computationally intensive operations with direct variable costs. Every model call costs money.
Think about a coding assistant. One request might trigger dozens of model calls. Understanding intent, searching documentation, generating code, checking syntax, writing tests. The features users love most burn through margins fastest.
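A rough cost model makes the fan-out visible. The token counts per sub-step and the per-million rates below are illustrative assumptions, not any provider’s actual price list.

```python
# Rough cost of one "single" coding-assistant request that fans out
# into several model calls. All figures are illustrative assumptions.

RATE_IN_PER_M = 3.00    # $ per million input tokens (assumed)
RATE_OUT_PER_M = 6.00   # $ per million output tokens (assumed)

calls = [  # (step, input_tokens, output_tokens)
    ("understand intent", 1_200, 150),
    ("search docs",       4_000, 300),
    ("generate code",     2_500, 900),
    ("check syntax",      1_000, 100),
    ("write tests",       2_000, 700),
]

cost = sum(i * RATE_IN_PER_M / 1e6 + o * RATE_OUT_PER_M / 1e6
           for _, i, o in calls)
# A few cents per request -- then multiply by millions of requests per month.
```

A few cents per request sounds harmless until it multiplies across every user action, which is exactly how the favourite features burn through margin fastest.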
Your COGS looks completely different. Traditional SaaS: hosting, support, minor compute. AI-first: inference costs, GPU clusters, vector databases, API fees. The list is longer and every line item is bigger.
Bessemer’s “State of AI 2025” splits AI companies into two categories. Supernovas run at roughly 25% margins with unoptimised infrastructure and experimental pricing. Shooting Stars hit 60% margins after custom models and refined pricing. The gap? Infrastructure maturity and pricing sophistication.
If you’re using OpenAI or Anthropic APIs, you’re paying per token. Every request. 84% of companies see 6%+ gross margin erosion from AI infrastructure costs.
Ben Murray from SaaS CFO puts it well: “If SaaS is about margin efficiency, AI is about value density”. You’re optimising for how much output, productivity, or labour you replace per dollar of compute.
Outcomes-based pricing in practice relies on what industry practitioners call “pricing on proxies”—near-outcome metrics that are easier to measure than actual business results. Companies pick measurable metrics that correlate with value, even if they don’t perfectly capture it.
Intercom demonstrates this. Their CEO claimed their Fin AI agent grew to 8-figure ARR with 393% annualised Q1 growth by tying revenue to ticket resolutions. That’s outcome-based pricing working. But notice they’re charging for resolutions, not for “improved customer satisfaction” or “reduced churn”. Resolutions are measurable. Satisfaction is squishy. To give customers flexibility, Intercom offers both credit buckets and outcome-based options, letting different segments choose what works. For more detailed examples of how companies implement outcomes-based pricing, see our case studies from companies like GitHub Copilot and Sierra.
Zapier’s Agent illustrates the measurement challenge. The same tool generates entirely different outcomes depending on use case. Customer support deflects tickets. Sales books meetings. Marketing creates content. What’s the outcome? Which one do you price on?
Attribution gets messy. External variables influence results and business outcomes become impossible to measure cleanly.
Decagon wrestles with this tension in their pricing strategy. Bihan Jiang, their Director of Product, explains: “By focusing on conversation volume rather than parsing ‘outcome,’ incentives stay clean”. Conversations are easy to count. Resolutions are harder. Most companies start with the simpler metric and migrate as measurement systems mature.
Most 2025 enterprise AI deals rely on usage-based or hybrid pricing. Pure outcome-based pricing remains rare because customers need predictability.
Joey Quirk from Chargebee calls outcomes-based pricing “usage pricing with a marketing degree”. You’re still charging for something the product does. You’ve just picked a metric closer to value.
Industry practitioners recommend a parallel approach: “Measure outcomes even when you don’t price on them”. Build dashboards, establish baselines, create feedback loops. This builds trust that sustains pricing power.
Traditional SaaS: 80-90% gross margins. AI-first early stage: roughly 25%. AI-first mature: roughly 60%.
Bessemer’s data shows LLM-native companies maintain around 65% gross margin while growing roughly 400% year-over-year. These are the companies that figured it out.
The difference? API dependency versus custom models. Flat-rate pricing versus usage-based. No caching versus intelligent routing. Poor cost visibility versus dashboards showing engineers what their code costs.
Understanding these pricing models matters because the margin differences are substantial. 92% of AI software companies now use mixed pricing models combining subscriptions with consumption fees. That shift happened fast. 41% of leading SaaS teams use hybrid models, up from 27%. This pricing evolution is a key component of successful AI transformation strategies.
Replit experienced gross margins below 10% before adopting usage-based pricing. They eventually improved to the 20-30% range. That’s real money at scale.
Companies using hybrid models report the highest median growth rate at 21%, outperforming pure subscription and pure usage-based models. The market has spoken. Hybrid wins.
44% of SaaS companies now charge for AI-powered features, unlocking new revenue streams. This is the “value bundling” play. Your traditional SaaS product runs at 80% margins. Your AI features run at 40% margins. Bundle them together, charge more, land somewhere profitable.
The trajectory is clear. You start at 25% margins with API dependency and flat pricing. You optimise infrastructure and shift to hybrid pricing. You hit 40% margins. You develop custom models and refine your pricing further. You reach 60% margins. AI-driven SaaS will likely mature towards 60-70% gross margins. Lower than legacy software but sustainable with proper cost management.
These margin gaps aren’t permanent. Companies follow predictable paths to close them.
The path from 25% to 60% margins follows three phases. Immediate pricing adjustments, medium-term infrastructure optimisation, long-term custom model development.
Phase 1 runs 0-6 months. Companies achieving higher margins shift from flat-rate to hybrid pricing with usage components. This is the quick win. They’re capturing some of the variable costs from customers instead of eating all of it.
Successful companies implement cost transparency dashboards for engineering teams. Developers need to see what their code costs. Most teams have no visibility. Finance sees a number in Looker once a month. No one can explain how it got there.
Phase 2 runs 6-12 months. The winning companies deploy intelligent routing. They direct 80% of simple requests to cheaper models, reserving expensive ones for complex tasks. This is the middle ground between full API dependency and custom models.
Caching strategies help. How many requests are near-duplicates? Can you cache responses for common queries? Every cache hit is a free inference.
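A minimal cache sketch shows how near-duplicates become free inferences. The normalisation rule (lowercase plus collapsed whitespace) and the class names are illustrative assumptions; production systems often use semantic similarity rather than exact-match keys.

```python
# Sketch of response caching for near-duplicate requests.
import hashlib

class ResponseCache:
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prompt):
        # Assumed normalisation: lowercase and collapse whitespace.
        norm = " ".join(prompt.lower().split())
        return hashlib.sha256(norm.encode()).hexdigest()

    def get_or_call(self, prompt, call_model):
        k = self._key(prompt)
        if k in self.store:
            self.hits += 1          # cache hit: a free inference
            return self.store[k]
        self.misses += 1
        self.store[k] = call_model(prompt)
        return self.store[k]

cache = ResponseCache()
fake_model = lambda p: f"answer:{len(p)}"
cache.get_or_call("Reset my password", fake_model)
cache.get_or_call("reset   my password", fake_model)  # near-duplicate: hit
```

Every hit skips a paid model call entirely, which is why cache hit rate shows up directly in gross margin.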
Phase 3 runs 12-24 months. Market leaders develop custom fine-tuned models for high-volume use cases. They reduce third-party API dependency. This requires real investment but delivers 50-70% cost reduction at scale. For technical details on moving from API dependency to custom models, see our guide on infrastructure optimisation strategies.
67% of companies are actively planning to repatriate AI workloads from cloud to reduce costs. Another 19% are evaluating. 61% already run hybrid AI infrastructure mixing public and private.
Complementary tactics help. Product bundling improves mix. Value-based tier structuring ensures the best customers pay more. Credit system experimentation provides temporary scaffolding while companies refine measurement.
The improvement roadmap isn’t linear. Quick wins come from pricing. Deep margin improvement requires infrastructure investment.
One of the most impactful changes in that improvement roadmap is the pricing model transition.
The successful companies don’t rip the bandaid off. They start with a hybrid model. They maintain base platform fees while adding usage or outcome components. This provides a transition path without creating a revenue cliff.
Credit systems serve as temporary scaffolding. Companies give customers predictable pools while they refine outcome measurement.
The pattern that works: pilot with new customers first. A/B test with cohorts. Extend to renewals gradually. Never force existing customers onto new pricing without grandfather clauses.
Companies build metering infrastructure. They instrument products to track proxy metrics. Conversations, resolutions, API calls. They integrate with billing platforms like Metronome, Chargebee, or Stripe Billing. The smart ones don’t build this from scratch. They leverage existing infrastructure.
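The metering layer itself is simple; the value is in feeding it reliably. Here is a minimal counter sketch under stated assumptions: metric names are illustrative, and in a real deployment these events would be pushed to a billing platform such as Metronome, Chargebee, or Stripe Billing rather than held in memory.

```python
# Minimal metering sketch: count proxy-metric events per customer.
from collections import defaultdict

class Meter:
    def __init__(self):
        self.counts = defaultdict(int)  # (customer_id, metric) -> count

    def record(self, customer_id, metric, quantity=1):
        self.counts[(customer_id, metric)] += quantity

    def usage(self, customer_id, metric):
        return self.counts[(customer_id, metric)]

meter = Meter()
meter.record("acct_42", "resolutions")
meter.record("acct_42", "resolutions")
meter.record("acct_42", "conversations", 5)
```

The billing platform then rates these counts into invoices; the product team’s only job is to emit accurate events.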
The migration path looks like this: pure subscription → hybrid → usage-heavy hybrid → outcomes-based. Most companies get stuck at “usage-heavy hybrid” because it works well enough. 59% expect usage-based pricing to grow revenue share, up from 18% in 2023.
Customer communication strategy matters more than you think. One head of self-serve monetisation at a product-led SaaS company found usage stopped not because of price, but because admins didn’t trust they’d stay in budget.
A pricing strategist at an enterprise DevOps vendor puts it well: “It’s not about the unit economics. It’s about buyer confidence in total exposure.”
80% of customers report that consumption-based pricing better aligns with value they receive. But they need guardrails.
Fireflies.ai and Synthesia price by output units like meeting minutes or video minutes. This makes value tangible without exposing model complexity. Customers understand “minutes”. They don’t understand “tokens”.
Companies mitigate risk through grandfather clauses for legacy customers, spending limits to prevent bill shock, transparent usage dashboards, and phased rollouts starting with new customers.
The infrastructure question sits at the heart of the margin problem.
The build-versus-buy decision for AI models comes down to scale and specialisation.
Companies stay with API until reaching $50K-100K monthly inference spend, then evaluate custom development. Below that threshold, API dependency makes sense. Low upfront cost, fast implementation. You’re paying for convenience.
But that convenience has a price. Direct cost-per-use creates immediate margin compression. GPT-3.5-turbo costs $3 per million input tokens and $6 per million output tokens. Small numbers until you multiply by millions of users.
Custom model development requires investment. $100K-$500K+ for team, infrastructure, training. But you get 50-70% cost reduction at scale. That’s the payoff.
Intelligent routing sits in the middle. Companies use API for complex queries (20% of requests), fine-tuned models for simple queries (80%). Immediate cost reduction without full custom build.
Successful companies evolved this way. They started with pure API dependency, shifted to hybrid approaches with multiple model tiers. Small models for simple tasks, bigger models for complex generation.
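The 80/20 routing idea can be sketched as a classifier in front of two model tiers. The complexity heuristic and model names are illustrative assumptions; real routers typically use a small classifier model or learned scoring rather than word counts.

```python
# Sketch of intelligent routing: send simple requests to a cheap model,
# complex ones to the expensive tier. Heuristic and names are assumptions.

CHEAP_MODEL = "small-finetuned"
EXPENSIVE_MODEL = "frontier-api"

def route(request: str) -> str:
    # Toy heuristic: long or multi-question requests go to the big model.
    is_complex = len(request.split()) > 50 or request.count("?") > 1
    return EXPENSIVE_MODEL if is_complex else CHEAP_MODEL

simple = route("Reset my password?")
complex_ = route("Why is the deploy failing? And how do I roll it back?")
```

If the heuristic keeps roughly 80% of traffic on the cheap tier, the blended cost per request drops sharply without any custom model build.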
The TCO analysis needs to factor everything. Team costs, infrastructure expenses, training compute, maintenance overhead versus ongoing API fees.
The break-even analysis changes based on usage volume. Low volume: API wins. Medium volume: intelligent routing wins. High volume: custom models win.
Technical capability requirements matter. API dependency needs minimal team. Intelligent routing needs infrastructure engineering. Custom model development needs ML expertise, GPU cluster management, training pipelines.
Decision framework: If you’re below $50K monthly inference spend, stay with API. If you’re between $50K-$200K monthly, implement intelligent routing. If you’re above $200K monthly, evaluate custom model development.
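That framework fits in a single helper. The dollar thresholds come straight from the text; the function name is an assumption.

```python
# The spend-threshold decision framework from the text as a function.

def infrastructure_strategy(monthly_inference_spend_usd: float) -> str:
    if monthly_inference_spend_usd < 50_000:
        return "stay with API"
    if monthly_inference_spend_usd <= 200_000:
        return "implement intelligent routing"
    return "evaluate custom model development"
```
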
Three common patterns emerge. Pure usage-based charging per API call. Hybrid model with base fee plus usage. Credit pools with pre-purchased consumption buckets.
Customers prefer predictable spending over exact value alignment. Hybrid models balance predictability with cost management.
Metric selection determines customer perception. Choose units that align with value. Conversations beat tokens. Resolutions beat compute time. Minutes beat API calls.
Cursor crossed $1B in ARR less than 24 months from launch with this pricing model. The hybrid approach works.
GitHub Copilot charges $19 USD per user per month (Business) or $39 USD per user per month (Enterprise). Simple per-seat pricing with usage included.
But the economics get interesting at scale. A 500-developer team using GitHub Copilot Business faces $114k in annual costs. Same team on Cursor’s business tier would pay $192k.
Tiering strategy combines base platform fees with included usage allowances, charging overages at declining rates. GitHub Copilot’s Pro+ tier offers 1,500 premium requests with $0.04 per additional request.
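The overage mechanics are easy to express as code. The included allowance (1,500 requests) and the $0.04 overage rate are the Pro+ figures quoted above; the base fee value is an illustrative assumption.

```python
# Tiered overage pricing: base fee + included allowance + per-unit overage.
# Base fee is an assumed value; allowance and overage rate are from the text.

def monthly_bill(requests, base_fee=39.0, included=1_500, overage_rate=0.04):
    overage = max(0, requests - included)
    return base_fee + overage * overage_rate

# 2,000 premium requests -> 500 over the allowance -> $20 of overage.
bill = monthly_bill(2_000)
```

Declining overage rates at higher tiers would just replace the flat `overage_rate` with a schedule, but the structure stays the same.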
Pricing psychology matters. Cursor shifted to compute credit pools and triggered customer backlash due to unpredictability. Communication is everything.
Hybrid usage-based pricing breaks existing billing systems. Most companies run separate PLG and SLG stacks, neither supporting clean usage pricing.
Billing meters can be integrated with Stripe, Recurly, and Chargebee. Automated emails are useful when users approach the next usage tier, get close to rate limits, or run out of credits. This prevents bill shock and builds trust.
Usage-based pricing charges for consumption metrics like API calls, tokens, or compute time. Outcomes-based pricing charges for results delivered like resolutions, conversions, or value created. In practice, most “outcomes-based” models use “pricing on proxies” rather than true business outcomes. Near-outcome metrics like conversations completed are easier to measure than customer satisfaction.
AI introduces variable costs per user interaction through inference expenses. Unlike traditional software where marginal costs approach zero, each model invocation requires compute. If you’re using third-party APIs like OpenAI, you incur direct per-use charges that immediately compress margins.
Stay with API until reaching $50K-100K monthly inference spend, then evaluate custom development. For immediate margin improvement without full custom build, implement intelligent routing: use APIs for complex queries (20%), fine-tuned cheaper models for simple queries (80%).
Companies achieving better margins implement cost transparency dashboards for engineering teams, set spending caps on API usage, use intelligent routing to cheaper models for simple requests, and instrument products to track inference costs per customer. Many adopt hybrid pricing capturing usage costs from customers.
Credit systems let customers pre-purchase consumption pools, providing spending predictability while companies refine outcome measurement. Credits serve as valuable transitional scaffolding for iterating teams, but durable strategies anchor to customer-understandable value drivers.
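Mechanically, a credit pool is a draw-down balance. A minimal sketch that refuses to overdraw rather than billing overages (a real system might instead fall through to pay-as-you-go rates):

```python
class CreditPool:
    """Pre-purchased consumption pool; usage draws down credits."""
    def __init__(self, credits: int):
        self.balance = credits

    def consume(self, amount: int) -> bool:
        """Deduct credits; refuse (rather than overdraw) when the pool is short."""
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

pool = CreditPool(100)
pool.consume(30)
print(pool.balance)  # 70
```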
Typically 12-24 months, through a phased approach: immediate pricing adjustments (0-6 months), intelligent routing implementation (6-12 months), and custom model development for high-volume use cases (12-24+ months). Quick wins come from pricing. Deep margin improvement requires infrastructure investment.
92% of AI software companies use mixed models combining base subscriptions with usage components. Pure subscription pricing is dropping as companies realise flat rates can’t sustain variable AI costs. Companies using hybrid models report the highest median growth rate at 21%.
Companies use proxy metrics that are measurable and correlate with value. Conversations completed rather than customer satisfaction. Resolutions attempted rather than business impact. Perfect outcome measurement is impractical due to attribution complexity. Proxies provide practical middle ground.
Metering instrumentation in products, usage tracking databases, customer-facing consumption dashboards, spending alerts, billing platform integration like Metronome, Chargebee, or Stripe Billing. Most don’t build from scratch. They leverage existing billing infrastructure to focus on product.
Companies explain cost drivers transparently, show value calculation clearly, offer grandfather clauses for legacy pricing, provide spending caps to prevent bill shock, display real-time usage dashboards, and phase rollout starting with new customers before extending to renewals.
Per-conversation is easier to measure and explain, but per-resolution aligns better with customer value perception. Most companies start with conversations (simpler attribution) then explore resolution-based pricing once measurement systems mature. Decagon’s analysis shows resolution pricing increases customer willingness to pay.
Early stage: 25-40% is acceptable while optimising infrastructure and pricing. Growth stage: 40-50% target through intelligent routing and hybrid pricing. Mature: 60%+ goal via custom models and refined pricing. Traditional SaaS margins (80-90%) unlikely for pure AI products due to inherent compute costs.
Understanding the economic realities of AI-first pricing and margins is just one piece of the puzzle. If you’re evaluating whether to pursue AI transformation, our guide on strategic decision frameworks for evaluating business model changes provides the frameworks you need to make informed choices about your company’s direction.
How to Decide Whether Your Company Should Refound or Add AI Features Incrementally

You’ve got a high-stakes call to make. Your company needs an AI strategy, but which way do you go? All-in on a comprehensive refounding—rebuilding everything around AI-native architecture—or the safer bet of adding AI features to what you’ve already got?
Get this wrong and you’re either months deep into unnecessary transformation or you’re watching AI-native competitors build self-improving systems you can’t match by sticking features onto your legacy setup.
This guide is part of our comprehensive understanding the refounding trend, where we explore how companies navigate AI-driven business model transformation. Companies like Airtable, Handshake, and Opendoor are announcing refounding initiatives, not incremental feature roadmaps. Meanwhile other companies are doing just fine with measured approaches.
So in this article we’re giving you a four-factor decision framework: technical feasibility, business model economics, organisational readiness, and competitive positioning. You’ll get assessment criteria, board negotiation guidance, and risk mitigation strategies to make the right call based on what your company can actually do.
Refounding is when you rebuild your business around AI-native architecture while keeping your market position. It’s not the same as pivoting.
The difference matters. Pivots are about course correction after you’ve worked out an approach isn’t cutting it. Refounding is proactive transformation to grab new opportunities without admitting your previous strategies failed. When Airtable’s CEO Howie Liu announced their refounding, he was clear: “This is not about changing direction after getting something wrong”.
Refounding changes everything at once—business model, technical architecture, organisational culture, and value proposition.
Pivots usually tackle one dimension—product, market, or business model—while keeping the rest steady. You pivot your product-market fit or your revenue model. You refound your entire company.
The timing’s different too. Pivots happen reactively when your current path fails. Refounding happens proactively to capture emerging opportunities. Investors view these announcements as necessary adaptations to technological disruption rather than distress signals.
Think about the historical examples. Salesforce defeating Siebel wasn’t a pivot—it was a new player unburdened by legacy models. ServiceNow dominating legacy ITSM vendors followed the same pattern. Refounding is about recapturing that new entrant advantage while keeping your market position and customer relationships.
These companies get it. AI-native competitors build self-improving data flywheels that create advantages you can’t match by tacking features onto legacy architectures.
Airtable kicked this off in June, declaring they’d treat AI adoption as a foundational company reset rather than incremental feature development. They didn’t just add an AI assistant—they repositioned as an “AI-native app platform”.
Handshake’s refounding shows you the clearest financial picture. Their AI division grew from 15 to 150 employees within months and pulled in $100 million in annualised revenue in just eight months. They’re now at $200 million combined ARR and projecting “high hundreds of millions” for 2026.
CEO Garrett Lord said it plainly: “Winners and losers are being defined right now”. Without aggressive AI investment, Handshake becomes “merely okay,” stuck in incremental improvements generating modest quarterly gains—a pattern that leads to corporate deceleration.
Handshake rolled out dramatic cultural shifts, including mandatory five-day office weeks with expectations for employees to work “with a pace and number of hours that is meaningful and will help us hit goals.”
The threat is architectural. AI-native startups create self-reinforcing loops where usage generates data, improved data refines AI, better AI attracts more usage, and more usage generates better data. You can’t bolt this onto existing systems built around different assumptions.
Your optimal strategy hangs on four factors: technical feasibility of agentic AI (can your team build agentic AI), business model economics of AI-first SaaS (can you make outcomes-based pricing work), organisational readiness for transformation (can you drive cultural transformation), and competitive positioning (how urgent are AI-native threats).
Technical feasibility isn’t about whether you can add machine learning features. It’s about whether you can build agentic AI architecture—systems that perceive environments, make decisions, act independently, and adapt strategies without constant human oversight. It’s about whether you can create data flywheel patterns where collected data continuously refines AI models.
Before we get into the details, here’s what you need to know about Bret Taylor’s three-layer AI market framework. Taylor identifies three AI market layers: Frontier Models (capital-intensive foundation models like GPT-4), AI Tools Market (infrastructure enabling AI), and Applied AI (agent-based solutions for specific job functions).
Applied AI is the most exciting tier—this is where specialised vertical agents solve specific industry problems and deliver measurable outcomes rather than productivity enhancements. Taylor argues what were SaaS applications in 2010 will be agent companies in 2030. Your refounding decision depends on which layer you compete in, with Applied AI companies having the clearest refounding imperative.
Business model economics asks if you can move to outcomes-based pricing where customers pay for results delivered, not software access. Traditional SaaS enjoys margins of 80-90%, but AI-first companies typically operate at 50-65% gross margin because of inference and infrastructure costs. Can you capture enough value from outcomes to justify those costs?
Organisational readiness looks at your ability to drive cultural transformation. Can you get the board on board? Can you keep key employees during comprehensive change? Can you sustain transformation effort before seeing results?
Competitive positioning examines the urgency. AI-native startups reach $1 million revenue in 11.5 months versus 15 months for traditional SaaS firms. Are AI-native startups threatening your market now, or have you got runway for an incremental approach?
Be honest with yourself on each factor. High technical feasibility and viable outcomes economics combined with AI-native competitive threats and organisational readiness point to refounding. Low scores on technical feasibility or economics with limited competitive threats suggest incremental approaches. For a complete overview of the refounding trend, see our comprehensive guide.
You need to work out if your team can build agentic AI systems with autonomous decision-making, data flywheel patterns for continuous improvement, and enterprise data architecture for model refinement.
Agentic AI is characterised by autonomy, proactivity, and goal-oriented behaviour. These systems perceive their surroundings, make decisions, act independently to achieve objectives, and adapt strategies based on new information.
The data flywheel creates a self-improving loop: production data refines models, better models generate better outputs, and better outputs create more valuable training data for continued improvement.
Can you create these systems? Work out the gap between your current architecture and AI-native foundation. How much technical debt is in the way? Do you have AI/ML expertise or mainly traditional software engineers?
Build capabilities in logical order: modern data platforms, then robust metadata and treating data as a product, then mapping business process metadata, then agentic automation. Skipping steps doesn’t work.
Start with human-in-the-loop systems that let agents propose actions while keeping humans in control of final decisions. Focus on high-impact, narrow-scope use cases where human involvement is expensive and decisions are repetitive but data-rich.
The build versus licence decision matters. Which bits need internal development versus licensing frontier models? Companies building Applied AI solutions should focus on solving customer problems rather than developing frontier models.
Refounding becomes economically viable when you can move to outcomes-based pricing where customers pay for results delivered. This lets you capture higher value than per-seat subscription models despite increased infrastructure costs.
Traditional B2B SaaS enjoys margins of 80-90%, but AI-first companies typically operate at 50-65% gross margin. The cost drivers are real. Model development costs millions in GPU compute. Each user action triggers computationally intensive inference operations.
GitHub Copilot initially lost approximately $20 per user monthly when charging $10/month, as compute costs ran $20-80 per user.
Outcomes-based pricing changes the equation. Bret Taylor’s company Sierra demonstrates the model: customers pay a pre-negotiated rate when an AI agent autonomously resolves a customer issue. If escalation to a human is necessary, there’s no charge.
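That billing rule is simple to express: charge only on autonomous resolution, nothing on escalation. A minimal sketch; the $2.50 per-resolution rate is hypothetical, not Sierra's actual pricing:

```python
def outcome_charge(resolved_autonomously: bool, rate: float) -> float:
    """Charge the pre-negotiated rate only when the agent resolves the issue
    on its own; escalations to a human incur no charge."""
    return rate if resolved_autonomously else 0.0

# Three autonomous resolutions and one escalation at a hypothetical $2.50 rate
interactions = [True, True, False, True]
print(sum(outcome_charge(r, 2.50) for r in interactions))  # 7.5
```

Note the revenue-side risk this transfers to the vendor: every escalated interaction still consumed inference compute but generated zero revenue, which is why outcome pricing demands high autonomous-resolution rates to work.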
The market is heading this way. Gartner forecasts 40% of enterprise SaaS will include outcome-based elements by this year, up from 15% a few years ago.
The closer you get to solving a problem for a company, the more successful your business will be. Applied AI companies building agent-based solutions for specific job functions have clearer paths to outcomes-based revenue.
Can you quantify customer outcomes AI delivers? Are customers willing to pay for results versus software access? Can you absorb revenue model risk?
AI-driven SaaS will likely mature toward 60-70% gross margins—lower than legacy software but sustainable with proper cost management and value-aligned pricing.
Technical capability is necessary but it’s not enough for successful refounding. You need to work out if you can drive cultural transformation including renewed startup intensity, stakeholder alignment through board negotiations, and employee retention during comprehensive change.
Refounding means changing work location policies, pace expectations, decision-making speed, and risk tolerance.
Change-seeking cultures actively hunt for innovation rather than passively responding to disruption. They encourage workers to test ideas and take calculated risks, establish safe spaces for learning from failures, and use feedback mechanisms to distribute insights.
Can you get the board on board? Handshake’s board member Mamoon Hamid from Kleiner Perkins initially responded with surprise, but the board ultimately backed the direction. You’ll need to quantify competitive threats, model outcomes-based pricing economics, show technical capability gaps with incremental approaches, and present phased roadmaps to reduce perceived risk.
You need a compelling vision of what the aspirational culture will offer employees. Leaders must communicate a change narrative that creates shared understanding of the past, reasons for transformation, and a compelling vision for the future.
The way to change culture is to change how people behave. Create cross-functional teams. Hold blameless postmortems. By removing fear, you enable teams to surface problems and solve them more effectively.
When people see positive outcomes from initial changes, they become more open to further change—creating a virtuous cycle.
Which key employees might push back or leave during transformation? How long can you keep transformation going before seeing results? Does leadership have experience driving comprehensive organisational transformation?
Even with strong organisational readiness, competitive dynamics may force your hand on timing.
AI-native startups entering your market with data flywheel advantages, customers demanding autonomous AI solutions rather than enhanced workflows, and declining competitive differentiation from your current product all point to urgent refounding need.
The battle lines are clearest in vertical software. AI-native startups push deeper into industry-specific workflows—automating insurance claims, legal briefings, revenue cycle management. Traditional SaaS players face a stark choice: evolve or become obsolete.
Legacy companies must evolve from workflow-centric architectures to data-centric ones where data serves as the central product. AI-native companies empower generalist employees who adapt quickly across functions. Traditional siloed specialist teams move too slowly.
What are customers asking for? Autonomous problem-solving or feature enhancements? Outcomes or software access? This tells you whether incremental features will satisfy demand or whether customers expect fundamentally different capabilities.
By prioritising architecture over tools, enterprises can build knowledge engines—platforms that iterate, refine, and compound advantage with every interaction.
Is your differentiation dropping as competitors add similar features incrementally? How fast are AI capabilities advancing in your industry?
Are you protecting market share or capturing new opportunities? Defensive positioning with incremental features works when competitive threats are distant. Offensive positioning requires refounding when AI-native competitors are gaining traction.
If AI-native competitors are already in your market building data flywheel advantages, the incremental approach may be a temporary holding pattern rather than a viable long-term strategy.
Roll out refounding through parallel development tracks where AI-native architecture runs alongside legacy systems, with clear milestones, reversible decision points, and gradual customer migration. Avoid big-bang transformation failures.
A strategic, phased rollout builds internal confidence, allows for continuous learning, and minimises disruption. Follow this structure: Phase 1 focuses on low-risk, high-impact internal automation. Phase 2 addresses core value-chain enhancements. Phase 3 targets strategic differentiation and new business models.
Run AI-native systems alongside legacy systems for customer continuity. Define clear success criteria at each phase before committing to the next stage. Structure early phases so you can pivot back to an incremental approach if refounding proves unviable.
Customer migration needs planning. Will you gradually transition users or use full cutover strategies for different customer segments? How do you fund refounding without starving existing product development?
The strangler pattern gradually replaces parts of the system behind a facade, so clients outside your system notice no difference. The aim is to reduce migration risk by migrating one small piece at a time.
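A minimal facade sketch, with hypothetical endpoint names: routes that have been migrated go to the AI-native system, everything else stays on legacy, and callers see a single unchanged interface:

```python
# Hypothetical set of endpoints already migrated to the AI-native service
MIGRATED_ENDPOINTS = {"/tickets/triage", "/auth/reset-password"}

def handle_request(path: str) -> str:
    """Facade: route migrated paths to the new system, the rest to legacy."""
    if path in MIGRATED_ENDPOINTS:
        return f"new-system:{path}"
    return f"legacy-system:{path}"

print(handle_request("/tickets/triage"))   # new-system:/tickets/triage
print(handle_request("/billing/invoice"))  # legacy-system:/billing/invoice
```

Migration then becomes a sequence of small, reversible edits to the routing set: add a path when the new implementation proves out, remove it to roll back.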
For data migration, use gradual approaches: sync data between old and new systems, migrate users in batches, validate data consistency continuously, and roll back individual users if issues arise.
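Those steps can be sketched as a batch loop with per-user validation and individual rollback. The `migrate`, `validate`, and `rollback` callables are stand-ins for real system operations:

```python
def migrate_in_batches(users, batch_size, migrate, validate, rollback):
    """Migrate users batch by batch; roll back only the individual users whose
    data fails the consistency check, leaving the rest on the new system."""
    migrated, rolled_back = [], []
    for i in range(0, len(users), batch_size):
        for user in users[i:i + batch_size]:
            migrate(user)
            if validate(user):
                migrated.append(user)
            else:
                rollback(user)
                rolled_back.append(user)
    return migrated, rolled_back

# Toy run: user 3 fails validation and is rolled back individually
ok, bad = migrate_in_batches([1, 2, 3, 4], batch_size=2,
                             migrate=lambda u: None,
                             validate=lambda u: u != 3,
                             rollback=lambda u: None)
print(ok, bad)  # [1, 2, 4] [3]
```

The continuous data sync between old and new systems is what makes per-user rollback safe: a rolled-back user loses nothing because legacy remained the source of truth during validation.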
Phase 1 examples include internal IT support automation: triage tickets, answer common queries, reset passwords, and autonomously resolve known issues. Phase 2 examples include finance operations and supply chain management.
Phase 3 focus: agents can power entirely new services, optimise complex decision-making at scale, and create dynamic, personalised customer journeys impossible with traditional systems.
Split teams between maintenance and rebuild. Feature freeze the old system except for emergency fixes. Put quality assurance focus on the new system.
Design a multi-year roadmap that balances quick wins with long-term strategic transformation. Communicate each phase’s success to build internal champions and secure ongoing investment.
Comprehensive refounding usually takes 12-24 months from decision to initial AI-native product launch, with another 12-18 months for full customer migration and organisational transformation. Phased approaches let you capture value earlier. As detailed in our case studies with specific metrics, Handshake’s AI division generated $100 million in annualised revenue in just eight months, showing that aggressive execution can deliver results faster than typical timelines suggest.
Return-to-office mandates are one cultural transformation approach, not a universal requirement. Some companies successfully refound with distributed teams by emphasising other mechanisms for startup intensity renewal—increased decision speed, clearer accountability, tighter alignment.
Use the four-factor assessment framework to build a data-driven business case. Quantify competitive threats: AI-native startups reach $1 million revenue in 11.5 months versus 15 months for traditional SaaS firms. Reference Handshake and Airtable precedents to show how other companies secured board support. Model outcomes-based pricing economics versus subscription limitations. Present a phased roadmap to reduce perceived risk.
Leadership structure depends on existing technical depth. Companies with strong engineering leadership often embed AI ownership in the CTO role. Those with weaker technical capabilities benefit from dedicated Chief AI Officers, particularly during transformation phases.
Do an architecture assessment. If core systems need a complete rebuild to support agentic AI and data flywheel patterns, and rebuild timeline exceeds your competitive window, an incremental approach may be more viable as a starting point. For detailed guidance on data-centric architecture requirements, see our technical deep-dive. Build capabilities in logical order: modern data platforms first, then metadata and data as product, then business process metadata, then agentic automation.
The incremental approach creates path dependency. AI-native competitors build data flywheel advantages that compound over time, making future refounding progressively harder. Production data refines models, better models generate better outputs, and better outputs create more valuable training data. If competitive assessment shows AI-native threats, the incremental approach may be a temporary holding pattern.
Transition viability depends on your ability to quantify customer outcomes AI delivers, customers’ willingness to pay for results versus software access, and your capability to absorb revenue model risk. For a comprehensive outcomes-based pricing analysis, see our detailed economic breakdown. Sierra demonstrates this approach, charging only when AI agents autonomously resolve issues. Gartner forecasts 40% of enterprise SaaS will include outcome-based elements by this year. Applied AI companies building agent-based solutions have clearer outcomes paths.
Frame refounding as a growth opportunity, not a failure response. Give them a clear vision of the AI-native future state. Involve key technical leaders in strategy development. Offer learning and development opportunities in AI. Be open about cultural changes. For detailed guidance on cultural change requirements, see our comprehensive playbook. When people see positive outcomes from initial changes, they become more open to further change.
Refounding usually extends exit timeline because of transformation complexity. If exit is imminent (12-18 months), incremental AI features may better serve near-term valuation. If exit is longer-term (24+ months), refounding may position the company for higher valuation in an AI-transformed market.
Bret Taylor identifies three AI market layers: Frontier Models (foundation models like GPT-4), AI Tools Market (infrastructure enabling AI), and Applied AI (agent-based solutions for specific job functions). Applied AI is the most exciting tier, where specialised vertical agents solve specific industry problems and deliver measurable outcomes. Taylor argues what were SaaS applications in 2010 will be agent companies in 2030. Your refounding decision depends on which layer you compete in, with Applied AI companies having the clearest refounding imperative.