In early February 2026, more than $1 trillion in enterprise software market cap disappeared in seven days. The SaaSpocalypse — as Jefferies and Forrester have labelled it — is forcing decisions that most CTOs at 50–500 employee companies have been putting off.
The problem is SaaS sprawl. Forrester identifies uncontrolled portfolio growth as a primary pain point for technology leaders — and AI is making it both more expensive (vendor pricing uplifts) and more solvable (AI-native alternatives at a fraction of the cost).
This playbook gives you four phases: inventory and classify, triage, build vs. buy, and renegotiation. You’ll end up with a ranked vendor action list and a 90-day plan, with AI investment funded from SaaS savings — which is the framing that makes this a board-level conversation instead of just an IT project. This article is part of our comprehensive SaaS reckoning guide, where we cover the full strategic landscape from market context through to tactical execution.
Why auditing your SaaS stack is now a strategic priority, not just a procurement exercise
The per-seat pricing model is collapsing. According to Bain, in three years any routine, rules-based digital task could move from “human plus app” to “AI agent plus API” — and vendors are already pricing accordingly. 65% of the 30+ SaaS vendors Bain analysed have introduced a hybrid approach, layering an AI usage meter on top of seat-based pricing, while 35% have bundled AI into per-seat price increases. Gartner predicts 40% of enterprise SaaS spending will shift to usage- or outcome-based pricing by 2030 — and that shift is already happening at renewals now, not in four years.
The opportunity is in the gap between vendor uncertainty and buyer inaction. One company that spent $18K per month on Datadog discovered 12 people logged in per month and 89% of metrics were never viewed. Audit proactively and you lock in favourable terms before vendors regain confidence. The political case for AI tooling depends on showing reallocation from SaaS savings — not addition. That framing starts here.
Phase 1: Inventory and classify — mapping your stack against AI displacement risk
Start with spend data, not software lists. Pull actual invoiced SaaS spend from finance for the last 12 months — all vendors, all amounts, seat counts. Most CTOs are surprised to find 30–40% of tools are used by fewer than 20% of their licensed seats. Include free-tier and browser-based tools to capture the full shadow IT footprint.
Once you have the data, apply the Bain Four-Scenario Framework to classify each vendor across two dimensions: the potential for AI to automate the core workflow, and the potential for AI to penetrate the product category.
Core Strongholds — low automation potential, low AI penetration. Procore’s project cost accounting and Medidata’s clinical-trial randomisation both require deep domain expertise and regulated data flows. Don’t waste energy on replacing these.
Open Doors — low automation potential, high AI penetration. Third-party agents can already hook into exposed APIs and replicate the core value. HubSpot list building and Monday.com task boards are the canonical examples.
Gold Mines — high automation potential, low AI penetration. Incumbents hold exclusive data and rules that give them a head start. Look for AI feature extension rather than replacement.
Battlegrounds — high automation potential, high AI penetration. Intercom’s Tier 1 support, Tipalti’s invoice processing, and ADP’s time-entry approvals are all easy to automate — and just as easy for others to copy.
Layer the Forrester REAP Model on top as the disposition layer: Reassess, Extract, Advance, and Prune. It maps to business fitness and technical fitness. Open Door vendors typically get Prune or Extract. Core Strongholds get Extract. Battleground vendors get Reassess or Advance. For a deeper look at how to evaluate which specific vendors are likely to survive this transition — and which are structurally exposed — see our vendor survival framework.
Focus your audit on the top 20 vendors by spend — these account for 80–90% of your SaaS budget. Don’t attempt to audit the full tail immediately. Create an AI vulnerability score for each using three inputs: workflow type (deterministic vs. probabilistic), seat utilisation rate, and availability of viable AI-native alternatives today. Deterministic SaaS — payroll, ERP, compliance, healthcare records — scores low vulnerability. Probabilistic SaaS — content, task management, marketing automation — scores high.
The output is a ranked priority list: each top-20 vendor with its Bain quadrant, REAP disposition, and AI vulnerability score. That list drives everything that follows.
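The Phase 1 classification can be sketched in code. The field names, the 0–10 scale, and the weightings below are illustrative assumptions for the purposes of a sketch — they are not part of the published Bain or Forrester frameworks:

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    annual_spend: float          # invoiced spend, last 12 months
    workflow: str                # "deterministic" or "probabilistic"
    seat_utilisation: float      # active seats / licensed seats, 0.0-1.0
    ai_alternatives: bool        # viable AI-native alternative exists today
    automation_potential: str    # "low" or "high" (Bain axis 1)
    ai_penetration: str          # "low" or "high" (Bain axis 2)

# The Bain Four-Scenario quadrants, keyed by the two axes.
QUADRANTS = {
    ("low", "low"): "Core Stronghold",
    ("low", "high"): "Open Door",
    ("high", "low"): "Gold Mine",
    ("high", "high"): "Battleground",
}

def vulnerability_score(v: Vendor) -> float:
    """Illustrative 0-10 score from the three Phase 1 inputs."""
    score = 0.0
    score += 4.0 if v.workflow == "probabilistic" else 0.0  # workflow type
    score += (1.0 - v.seat_utilisation) * 3.0               # unused seats
    score += 3.0 if v.ai_alternatives else 0.0              # real alternatives
    return round(score, 1)

def classify(vendors: list[Vendor]) -> list[tuple[str, str, float]]:
    """Ranked priority list: (name, Bain quadrant, vulnerability score)."""
    rows = [(v.name,
             QUADRANTS[(v.automation_potential, v.ai_penetration)],
             vulnerability_score(v))
            for v in vendors]
    return sorted(rows, key=lambda r: r[2], reverse=True)
```

Feeding in the top-20 list by spend yields the ranked output described above; the weights are a starting point to tune against your own portfolio.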
Phase 2: Triage — which vendors need immediate action, and which can wait?
The filter is simple: prioritise vendors where renewal is within 12 months and the vendor falls in the Open Door or Battleground quadrant. When both conditions are true, you have maximum leverage and minimum time.
Tier 1 — act now (0–90 days): Open Door and Battleground vendors with renewals approaching. AI-native alternatives already exist in these categories and vendors negotiate hardest when displacement is demonstrably real. Build a genuine BATNA before entering any negotiation — a theoretical alternative doesn’t move vendors, a real quote from a competitor does. ZoomInfo is flexible because Apollo and Clay exist; Gong negotiates because Clari and Salesloft are options.
Tier 2 — prepare now, act at renewal: Battleground vendors with renewals 12–24 months out. Begin building the AI-native comparison case now so you have leverage when the deadline arrives.
Tier 3 — monitor: Core Strongholds. Renegotiate on price and terms only, not replacement. Don’t waste leverage on vendors whose regulatory depth and data moats protect them.
A realistic Tier 1 action list is three to five vendors. Running too many parallel renegotiations at once exhausts capacity — you’ll execute nothing well.
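The triage filter reduces to a few lines. The quadrant labels follow the Bain classification from Phase 1; the dictionary keys in the shortlist helper are illustrative assumptions:

```python
def triage_tier(quadrant: str, months_to_renewal: int) -> int:
    """Map a vendor to a triage tier (1 = act now, 2 = prepare, 3 = monitor)."""
    exposed = quadrant in ("Open Door", "Battleground")
    if exposed and months_to_renewal <= 12:
        return 1  # maximum leverage, minimum time
    if quadrant == "Battleground" and months_to_renewal <= 24:
        return 2  # build the AI-native comparison case now
    return 3      # monitor; renegotiate price and terms only

def tier1_shortlist(vendors: list[dict], cap: int = 5) -> list[dict]:
    """Keep the Tier 1 list to three-to-five vendors, highest spend first."""
    tier1 = [v for v in vendors
             if triage_tier(v["quadrant"], v["renewal_months"]) == 1]
    return sorted(tier1, key=lambda v: v["spend"], reverse=True)[:cap]
```

The `cap` default enforces the three-to-five vendor discipline from the paragraph above.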
Phase 3: The build vs. buy decision for the AI era — when does it make sense to build?
The build vs. buy calculus has shifted. AI coding tools like Cursor and Claude Code reduce MVP development costs from hundreds of thousands of dollars to near-zero. 35% of engineering teams have already replaced at least one SaaS tool with a custom build, and 78% plan to build more in 2026. But the maths depends on what you’re building, at what cost, and whether you can sustain it. For a full analysis of how AI coding tools have changed the economics of building vs. buying software, including developer productivity data and maintenance cost models, see our dedicated article on that topic.
The practical threshold: a SaaS tool costing $100,000 a year is worth building internally; a tool costing $10,000 a year is probably not. Use four criteria to evaluate each candidate:
1. Workflow criticality: If errors create regulatory exposure or revenue loss, the risk of a custom build exceeds the savings. Stay with a tested incumbent.
2. Team capacity: If your engineering team is fewer than five full-time developers, in-house builds are unlikely to be sustainable without a dedicated maintenance commitment. As Jason Evanish puts it: “The day-one math looks great. Month 18 is where it falls apart.” AI product debt compounds just like technical debt.
3. Regulatory exposure: If the workflow involves PII, financial data, healthcare records, or multi-jurisdictional compliance, build only if you have dedicated compliance engineering capacity.
4. Replacement cost: If an AI-native SaaS alternative exists at 30–50% of the incumbent’s cost with equivalent core functionality, buy first and build later.
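Run as code, the four criteria become a short gate sequence where the first disqualifier wins. The parameter names and return strings are illustrative; the $100K threshold comes from the text above:

```python
def build_or_buy(annual_cost: float, regulated: bool, team_size: int,
                 errors_costly: bool, cheap_ai_alternative: bool) -> str:
    """Apply the four Phase 3 criteria in order; first disqualifier wins."""
    if errors_costly:                 # 1. workflow criticality
        return "buy: stay with a tested incumbent"
    if team_size < 5:                 # 2. team capacity
        return "buy: build unsustainable without dedicated maintenance"
    if regulated:                     # 3. regulatory exposure (assumes no
        return "buy: needs dedicated compliance engineering"  # compliance team)
    if cheap_ai_alternative:          # 4. replacement cost
        return "buy: AI-native alternative at 30-50% of incumbent cost"
    if annual_cost >= 100_000:
        return "build: above the $100K/yr threshold"
    return "buy: below the threshold, not worth building"
```

A tool clearing all four gates but costing under $100K a year still lands on “buy”, matching the practical threshold stated earlier.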
The best build candidates are high-volume, low-complexity automation workflows — data pipelines, reporting dashboards, lead scoring, customer notifications — locked inside expensive per-seat tools. ClickUp built six AI tools connected to Salesforce, Zendesk, and Snowflake that cut $200K per year in automation subscriptions. Narrow scope, measurable savings, maintainable by a small team.
The worst build candidates are anything involving compliance audit trails, multi-party regulated integrations, or where a vendor’s data moat is genuinely irreplaceable. One governance note: 60% of engineers have already built something outside IT oversight in the past year. If you don’t have a clear build policy, your team is already making these decisions without you.
Phase 4: The renegotiation playbook — what to ask for, how to push, what to protect
The goal of renegotiation is structural contract reform — aligning your costs with actual usage and AI-agent deployment plans. Start the conversation 90–120 days before renewal — vendors are most flexible before the deadline creates urgency on your side. For contracts above $100K, start six months out.
What to ask for:
Seat count right-sizing. Document actual active users over the previous 90 days using login and session data. Present the utilisation data and ask for an adjusted seat count. Vendors have limited grounds to push back when the numbers are clear.
Pricing model transition clause. Request a contractual path to usage-based or outcome-based pricing in the next renewal cycle. Forrester is explicit: “SaaS vendor contracts are primarily based on seats. This will shift to consumption and outcome-based pricing as AI agents are deployed.” Reference Gartner’s 40% shift prediction as market context — this is industry direction, not a novel request.
AI agent licensing clarity. Most SaaS contracts assumed human users with individual logins. As you deploy AI agents accessing SaaS systems via API, vendors are increasingly charging per-agent fees or requiring new tiers. Clarify their current policy before you deploy agents at scale. Salesforce’s emerging agentic licensing model is the most prominent early example.
Consumption cap structure. For vendors already on usage-based pricing, negotiate a consumption cap for the first 12 months as a risk hedge. Zendesk’s AI agent pricing dropped from $2,833/month to $1,500/month within a single year due to competitive pressure. Don’t lock in at first-generation pricing.
What not to concede:
Never concede data portability rights. Refuse multi-year lock-in that lacks an exit clause tied to AI agent licensing changes. Avoid auto-renewal clauses without a minimum 90-day notice window, and don’t accept seat minimums that exceed your projected active user count.
AI-driven price increases typically arrive at 20–37%. Buyers who negotiate reduce vendor asks by approximately 55% in relative terms, landing an average 12% above pre-AI baselines (a 27% proposed uplift cut by 55%, for instance, lands at roughly 12%). That delta, across your top-20 vendor list, is meaningful. The specific mechanics of outcome-based pricing are covered in our article on how SaaS pricing is shifting from per-seat to usage and outcome — this section covers the broader contract structure and relationship tactics that go beyond pricing mechanics.
The Klarna lessons — what radical stack replacement teaches mid-market CTOs
Klarna deployed an AI customer support bot handling the equivalent workload of 700 to 850 employees, replaced Salesforce CRM with an internally-built AI system, and drove revenue per employee from $300K to $1.3M. Then customer satisfaction declined, Klarna reversed course, and CEO Sebastian Siemiatkowski acknowledged publicly that “people were very angry with me” for his earlier claims about AI replacing workers.
Three failures worth studying:
Ticket type matters. AI performs well for Tier 1 transactional support — account status, simple FAQ, basic transactions. It does not match human agents for complex complaints, billing disputes, or emotionally-charged interactions. Separate ticket types before you automate, not after.
Reversal costs are real. Rehiring, retraining, and restoring institutional knowledge rarely appear in the original build-vs-replace business case. Model the reversal scenario before you commit.
Simultaneous consolidation creates failure modes. Phasing your builds would have avoided the cascading dependencies that emerged from doing everything at once. Klarna had 800 engineers to manage that complexity. Most companies at 50–500 employees don’t.
Gartner predicts half of companies that cut customer service staff due to AI will start rehiring by 2027 — Klarna’s reversal is a leading indicator of a broader pattern, not a one-off. The economic logic of consolidation was sound. The pace and sequencing were the problem. Apply the lesson proportionally: one scoped internal tool, piloted in a non-critical workflow, with a fallback position preserved.
Capturing AI budget from SaaS savings instead of adding to the IT budget
The board presentation works best as a reallocation story. Document current SaaS spend from your Phase 1 audit. Model projected savings using conservative assumptions — Tropic’s negotiation data provides a usable baseline: AI-driven pricing lands around 12% above pre-AI baselines even after negotiation, meaning well-managed renewals generate meaningful savings relative to unmanaged ones. Add tool consolidation in the Open Door quadrant.
The realistic savings range for disciplined execution at 50–500 employee scale: $200K–$500K annually, depending on portfolio size and current sprawl. ClickUp freed $200K in annual automation software subscriptions from a focused internal build effort — the kind of concrete figure that lands well with a CFO.
Three levers to present:
Seat right-sizing. Unused seat removal across the portfolio. Present utilisation data from the audit; vendors resist less when the numbers are clear. Portfolios with genuine sprawl typically carry 15–25% unused or underutilised seats.
Pricing model transition savings. Outcome-based pricing is typically lower than per-seat for AI-augmented workflows. Gartner’s 40% shift prediction provides the external validation that this direction is standard, not experimental.
Tool consolidation. Eliminating redundant Open Door tools reduces both spend and the management overhead of a sprawling vendor portfolio.
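A conservative version of the reallocation model can be sketched as follows. The default percentages are assumptions drawn from the ranges quoted in this article: 15% unused seats (the low end of the 15–25% range) and a 12% negotiated uplift against a roughly 27% unmanaged ask:

```python
def projected_savings(portfolio_spend: float,
                      unused_seat_share: float = 0.15,
                      negotiated_uplift: float = 0.12,
                      unmanaged_uplift: float = 0.27,
                      consolidation_spend: float = 0.0) -> float:
    """Annual savings from the three levers. The negotiation lever is
    avoided uplift relative to an unmanaged renewal, not a cash cut."""
    seat_savings = portfolio_spend * unused_seat_share
    negotiation_savings = portfolio_spend * (unmanaged_uplift - negotiated_uplift)
    return seat_savings + negotiation_savings + consolidation_spend
```

On a $1M portfolio with $50K of Open Door consolidation, the defaults produce a figure in the $200K–$500K range cited above — useful as a sanity check on your own model, not a forecast.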
The compound benefit: once Claude Code or Cursor-class tools reach your engineering team, development velocity increases — which reduces the cost of building subsequent internal replacements and extends the savings case. For the full strategic context behind why this reallocation opportunity exists now, see our complete overview of the SaaS reckoning.
Your 90-day action plan
Days 1–30: Inventory and classify
Pull 12 months of invoiced SaaS spend from finance. Map your top 20 vendors against the Bain Four-Scenario quadrants. Apply Forrester REAP dispositions to each vendor. Create an AI vulnerability score using workflow type, seat utilisation rate, and AI-native alternative availability.
Output: a ranked priority list with Bain quadrant, REAP disposition, and AI vulnerability score for each top-20 vendor by spend.
Days 31–60: Triage and prepare
Identify Tier 1 vendors: Open Door or Battleground quadrant, renewal within 12 months. For each, run the build vs. buy analysis. For renegotiation paths: pull seat utilisation data and draft the renegotiation brief. For build paths: scope the single workflow to pilot with available engineering capacity. Initiate renegotiation conversations immediately if renewals are approaching.
Output: renegotiation brief for each Tier 1 vendor; one pilot build scoped and resourced; Tier 1 vendor conversations opened.
Days 61–90: Execute and track
Complete Tier 1 renegotiations — signed contract amendments or term sheets. Launch the pilot build at minimum viable scope (no more than one concurrent build). Brief finance on projected savings and establish a budget reallocation proposal. Set up monthly SaaS spend tracking: vendor, spend, seats licensed, seats active, AI vulnerability score, next renewal date. Brief the board on the reallocation framing.
Output: signed renegotiation(s), pilot build launched, board brief delivered, ongoing tracking established.
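The monthly tracking sheet from Days 61–90 maps onto a simple CSV schema; the column names mirror the fields listed above, and the helper is a minimal sketch:

```python
import csv
import io

# Columns follow the tracking fields named in the 90-day plan.
FIELDS = ["vendor", "spend", "seats_licensed", "seats_active",
          "ai_vulnerability_score", "next_renewal_date"]

def tracking_csv(rows: list[dict]) -> str:
    """Serialise monthly vendor snapshots to CSV for the finance brief."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Exporting this monthly keeps the renewal dates and utilisation trends visible to finance without a dedicated SaaS management tool.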
One scope note: align your co-founder, COO, or CFO before beginning vendor renegotiations. Renegotiations that lack internal alignment stall at the wrong moment — and vendors notice when the person across the table doesn’t have full authority to close.
For a broader view of all the dimensions of this challenge — from the market forces driving the shift to the vendor landscape and competitive dynamics — see our comprehensive SaaS reckoning guide.
FAQ
Should I be replacing my SaaS tools with AI right now?
Selectively and immediately for Open Door category tools, where AI-native alternatives already match core functionality at lower cost. Cautiously and in phases for Battleground tools. Not at all for Core Strongholds. The timing for renegotiation is now regardless of replacement decisions — audit and renegotiate first.
How do I know which SaaS tools are most at risk from AI displacement?
Apply the Bain Four-Scenario Framework: classify each tool by how much AI can automate the core workflow and how much AI has already penetrated the product category. Probabilistic SaaS — content, marketing automation, task management — is more vulnerable than deterministic SaaS like payroll, ERP, compliance, and healthcare records.
What is the Forrester REAP Model and how do I use it?
REAP stands for Reassess, Extract, Advance, and Prune — built on business fitness and technical fitness dimensions. Use it as the action output layer on top of your Bain classification: Open Door vendors typically get Prune or Extract; Core Strongholds get Extract; Battleground vendors get Reassess or Advance.
What is the difference between usage-based and outcome-based SaaS pricing?
Usage-based pricing charges on consumption volume — API calls, tasks completed, messages sent. Outcome-based pricing charges on achieved results — resolved tickets, closed deals, successful transactions. Vendors are mostly in hybrid territory: 65% of SaaS vendors have layered an AI usage meter on top of seat-based pricing, while 35% have bundled AI into per-seat increases. Pure outcome-based pricing is still rare.
How do I renegotiate a SaaS contract when the vendor is resistant?
Lead with utilisation data — vendors have limited grounds to argue for full seat billing when you can show 40% of licensed seats are inactive. Use AI-native alternatives as genuine BATNA; get real quotes, not theoretical options. Ask directly: “What would a renewal at our current tier and feature set cost?” and document the response.
What should I not concede in a SaaS renegotiation?
Never concede data portability rights. Never agree to multi-year lock-in without an exit clause tied to AI agent licensing changes. Avoid auto-renewal clauses without a minimum 90-day notice window. And always negotiate AI data rights explicitly — who owns outputs, training data, and derivative insights generated by AI features.
Is it realistic for a 50-employee company to build SaaS alternatives in-house?
Yes, for narrowly scoped automation workflows where annual SaaS spend exceeds $100K. No, for anything involving compliance, multi-party regulated integrations, or workflows requiring a vendor’s proprietary data moat. The threshold question is whether you have sustained engineering capacity for maintenance. Initial build is often straightforward. Month 18 is where AI product debt surfaces.
How does compound engineering change the build vs. buy calculation?
A single developer using AI coding tools can now maintain and ship what previously required three to five FTEs — 51% of builders are shipping production software with AI and about half report saving six or more hours per week. That changes the ongoing maintenance cost calculation. The caveat: 72% of production builders use AI to write discrete pieces of code integrated into larger projects, not prompting their way to complete apps.
What went wrong with Klarna’s AI stack replacement?
AI customer support degraded satisfaction in complex cases, the reversal cost wasn’t factored into the original business case, and consolidating too many apps simultaneously created cascading dependencies. The lesson is not “don’t build” — it is phase your builds, pilot in non-critical workflows first, and preserve fallback positions.
How do I build the business case for AI investment at board level?
Document projected SaaS savings from renegotiations and consolidations and present AI tooling investment as funded from that savings line. Use Gartner’s 40% pricing model shift prediction as external validation that this is standard industry direction, not an experiment.
What is SaaS sprawl and why does it matter?
Forrester identifies SaaS sprawl as a primary pain point for technology leaders — the accumulation of tools with poor visibility into actual usage, duplicated functionality, and unchallenged auto-renewals. Making the sprawl visible is the prerequisite for all classification and action that follows.
What is the AI agent licensing issue and why does it matter for negotiations?
Most SaaS contracts assumed human users with individual logins. As companies deploy AI agents accessing SaaS systems via API, vendors are increasingly charging per-agent fees or requiring new licensing tiers. Salesforce’s emerging agentic licensing model is the most prominent early example. In any renegotiation, clarify the vendor’s current policy on AI agent access before you deploy agents at scale.