AI-washing in workforce decisions is now a documented pattern — attributing layoffs to AI when the real driver is cost-cutting or an overhiring correction. Oxford Economics found only 4.5% of US job cuts in 2025 were genuinely attributable to AI. The Yale Budget Lab found no major labour market shifts across three years of post-ChatGPT data. Even Sam Altman acknowledged that companies are “blaming AI for layoffs that they would otherwise do.” This creates governance risk, legal liability, and strategic miscalculation for the businesses doing it.
This article is the practitioner companion to our series on AI-washing in workforce decisions. It provides three tools: a ten-question diagnostic checklist, board discussion scripts grounded in institutional evidence, and WARN Act compliance guidance for companies in the 50–500 employee range.
How do you tell whether AI is the actual cause of a workforce change in your own organisation?
AI-washing is identifiable. There’s a structured diagnostic for it: look for the gap between what a company claims publicly and what its internal deployment evidence actually shows.
Run these ten questions against any proposed workforce reduction before you accept the framing.
1. Has the organisation actually deployed the AI system cited as the driver? Demand deployment dates, vendor contracts, and production rollout evidence. Amazon’s “Just Walk Out” technology, marketed as AI-powered, was later revealed to rely on remote workers monitoring cameras.
2. Can the organisation specify exactly which tasks the AI now performs? Category-level claims — “AI is transforming our operating model” — are not evidence. Task-level specificity is required.
3. What is the timeline gap between claimed AI impact and actual deployment? Wharton’s Peter Cappelli: “There’s very little evidence that AI cuts jobs anywhere near like the level that we’re talking about. In most cases, it doesn’t cut head count at all.” Six weeks from deployment to announcement is not enough time for structural change.
4. Does internal communication reference AI as the driver, or do internal documents reference cost targets? Emails and board papers create a paper trail. If internal communications name financial targets while external communications name AI, that discrepancy is discoverable.
5. Does the WARN Act filing check the automation/AI disclosure box? New York’s 2025 amendment requires employers to specify whether “technological innovation or automation” drove the layoff. Zero of 162 NY WARN Act filings checked the box — despite many of the same companies attributing cuts to AI publicly.
6. Is the announcement clustered around an earnings call or investor day? Oxford Economics noted companies attributing layoffs to AI “convey a more positive message to investors” than admitting to past over-hiring. Timing is a diagnostic signal.
7. Were the affected roles in functions where AI automation is technically feasible at the claimed scale? If affected roles are spread across unrelated functions, the AI attribution needs scrutiny.
8. Did the company over-hire during 2021–2023, and is it now returning to pre-pandemic headcount? The overhiring correction is the most common actual driver of tech-sector layoffs currently being attributed to AI.
9. Has leadership publicly walked back its AI attribution claims? Amazon CEO Andy Jassy warned AI would shrink the workforce in June 2025, then clarified after October 2025 layoffs that they “weren’t really AI-driven. Not right now, at least.” See the full case analysis in our review of real company examples across the AI-washing spectrum.
10. Is Challenger, Gray & Christmas data being cited without its methodological caveat? Challenger records stated reasons, not verified causes. It shows companies claiming AI attribution, which is exactly the phenomenon being challenged.
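The ten questions above can be run as a simple yes/no screen. The sketch below is illustrative only — the question keys and red-flag thresholds are our assumptions, not a validated instrument — but it shows how to turn the checklist into a repeatable artefact rather than an ad hoc discussion.

```python
# Hypothetical sketch: the ten-question diagnostic as a yes/no screen.
# Question keys and thresholds are illustrative, not a validated instrument.

DIAGNOSTIC_QUESTIONS = [
    "deployment_evidence_exists",      # Q1: dates, contracts, production rollout
    "task_level_specificity",          # Q2: exact tasks named, not categories
    "timeline_plausible",              # Q3: months, not weeks, since deployment
    "internal_docs_cite_ai",           # Q4: internal papers match public framing
    "warn_filing_checks_ai_box",       # Q5: NY automation/AI box checked
    "not_clustered_with_earnings",     # Q6: timing independent of investor events
    "roles_in_automatable_functions",  # Q7: cuts concentrated where AI can work
    "no_overhiring_correction",        # Q8: not just reverting 2021-2023 hiring
    "no_public_walkback",              # Q9: leadership has not retracted the claim
    "independent_data_cited",          # Q10: not just Challenger stated reasons
]

def screen_ai_attribution(answers: dict[str, bool]) -> str:
    """Return a rough verdict from yes/no answers to the ten questions.

    A 'no' (or missing answer) on any question is a red flag; the
    cut-offs below are illustrative, not empirically calibrated.
    """
    red_flags = [q for q in DIAGNOSTIC_QUESTIONS if not answers.get(q, False)]
    if not red_flags:
        return "AI attribution plausible - verify with task-level evidence"
    if len(red_flags) <= 2:
        return "Needs scrutiny: " + ", ".join(red_flags)
    return f"Likely AI-washing: {len(red_flags)} of 10 checks failed"
```

Used before a board discussion, the value is less the verdict string than the forced step of answering each question with documented evidence.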
What evidence should you bring to a board discussion challenging an AI displacement narrative?
Board-level challenges need institutional-grade sources. Personal opinion won’t shift a narrative that’s already been endorsed by the CEO.
Here’s what to bring.
Yale Budget Lab analysed US labour market data from November 2022 through late 2025. Finding: “The picture of AI’s impact on the labor market that emerges from our data is one that largely reflects stability, not major disruption.” That’s the most cited empirical finding on AI’s actual labour market impact right now.
Oxford Economics (January 2026): firms “don’t appear to be replacing workers with AI on a significant scale.” Only 4.5% of US job cuts cited AI as the driver. Their direct test: “If AI were already replacing labour at scale, productivity growth should be accelerating. Generally, it isn’t.”
NBER study: nearly 90% of C-suite executives across the US, UK, Germany, and Australia said AI had no employment impact over the three years following ChatGPT’s release.
Sam Altman: “There’s some AI washing where people are blaming AI for layoffs that they would otherwise do.” Hard to dismiss as anti-AI bias.
The Productivity Paradox: Robert Solow said “You can see the computer age everywhere but in the productivity statistics.” Apollo Global’s Torsten Slok: “AI is everywhere except in the incoming macroeconomic data.” Past technologies took decades to change labour markets at scale. This one is no different.
Frame the challenge as fiduciary duty. Here’s specific language that works: “Before we approve this, can we confirm that the WARN Act filing will reflect the AI attribution we’re using in the press release? New York data shows zero of 162 companies have checked that box despite similar public claims.”
For the full set of six empirical counterpoints to AI displacement claims, each sourced to a named independent institution, see the evidence synthesis article in this series. For more on why investor incentives drive these announcements, see the analysis of the motivation structure behind AI-washing announcements.
How do you communicate workforce changes honestly without creating confusion?
Name the actual driver. That’s it. The two failure modes are AI-washing language that overstates AI’s role, and defensive over-hedging that erodes trust with everyone involved.
Avoid:
- “We are redeploying resources as we adopt AI tools” — when no redeployment is occurring
- “AI is transforming our operating model” — when it hasn’t happened yet
- “We are streamlining for an AI-native future” — when the actual driver is a revenue shortfall
Cappelli sums it up: “The headline is, ‘It’s because of AI,’ but if you read what they actually say, they say, ‘We expect that AI will cover this work.’ Hadn’t done it. They’re just hoping.”
Use instead:
For an overhiring correction: “We are correcting headcount to match current revenue and business requirements.”
For pandemic hiring context: “We made hiring decisions in 2021–2023 based on growth projections that did not materialise.”
For genuine AI automation: “We have deployed [specific system] in [specific function], which now handles [specific tasks], allowing us to consolidate [number] positions.”
ASML’s CFO described 1,700 job cuts as trimming “bloat and inefficient layers” — no AI attribution. Target’s CEO on 1,800 cuts: “The complexity we’ve created over time has been holding us back.” Both named the actual driver and stayed investor-credible.
Internal communications should be more precise than public ones. Board papers and HR records create the paper trail against which any WARN Act filing will be compared. Get this right before anything goes out.
What are the WARN Act obligations that apply when AI is cited in a layoff notice?
Most companies in the 50–500 employee range have not yet worked out how AI-washing language interacts with WARN Act disclosure obligations. Here’s what you need to know.
Federal WARN Act: applies to employers with 100 or more employees and requires 60 days’ notice. Triggered by layoffs of 50 or more employees in a 30-day period, or 33% of the workforce. No AI-specific disclosure requirement at the federal level yet.
NY WARN Act (stricter): applies to employers with 50 or more employees, 90-day notice triggered by 25 or more employee layoffs. The 2025 amendment added the automation/AI disclosure checkbox.
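The thresholds above can be encoded as a quick screening helper. This is a deliberately simplified sketch of the rules as stated in this article — it ignores part-time exclusions, aggregation windows, and statutory exceptions, and it is not legal advice.

```python
# Hypothetical screening helper for the WARN thresholds described above.
# Simplified: ignores part-time exclusions, aggregation rules, and
# statutory exceptions. Not legal advice.

def federal_warn_applies(total_employees: int, laid_off: int) -> bool:
    """Federal WARN: 100+ employees; triggered by 50+ layoffs in a
    30-day period, or 33% of the workforce (60-day notice)."""
    if total_employees < 100:
        return False
    return laid_off >= 50 or laid_off >= total_employees * 0.33

def ny_warn_applies(total_employees: int, laid_off: int) -> bool:
    """NY WARN: 50+ employees; triggered by 25+ layoffs (90-day notice,
    plus the 2025 automation/AI disclosure checkbox)."""
    return total_employees >= 50 and laid_off >= 25
```

Note the asymmetry the sketch makes visible: a 60-person company laying off 25 people clears the NY threshold while remaining entirely outside federal WARN scope.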
The zero-compliance finding: zero of 162 NY WARN Act filings in the analysed period checked the AI box. Amazon filed 660 affected NY workers under “economic” reasons. Goldman Sachs affected 4,100+ NY workers; all marked “economic.” Bloomberg Law put it plainly: “It is critical that employers answer the questions in WARN frankly and honestly.”
The compliance risk is the discoverable discrepancy between public claims and legal filings. Ask your legal counsel: “If we are claiming AI drove this reduction publicly, do we need to check the automation box on the NY WARN Act filing?”
Forward-looking risks: the Warner/Hawley AI-Related Job Impacts Clarity Act (bipartisan Senate bill, November 2025) would extend AI disclosure obligations federally. New York has additional bills with $10,000 fines and five-year loss of access to state incentives. For detailed guidance, see the full WARN Act disclosure requirements analysis.
How do you respond professionally when your CEO asks you to support AI-washing communications?
The professional response has four steps. Frame each one as risk management, not a values disagreement.
Step 1: Run the diagnostic before responding. “Can I confirm what we have deployed, where it’s running in production, and what it’s actually doing?”
Step 2: Identify the legal exposure. “The NY WARN Act now has an AI disclosure checkbox. If we’re attributing this to AI publicly, we need to make sure our filing reflects that — or we create a discoverability risk.”
Step 3: Offer an accurate, investor-positive alternative. “We positioned ourselves for significant growth and made investments that reflected that ambition. Business conditions have changed, and we’re aligning headcount with our current revenue and requirements. We remain on track with our AI investments for [specific future capability].”
Step 4: Fiduciary duty as a last resort. “The board’s fiduciary duty requires that material statements about AI are accurate and consistent with our legal filings. I’d recommend we get legal to review the press release language against the WARN Act filing before we go public.” Use this once.
Forrester found 55% of employers who attributed layoffs to AI would regret doing so, and half would quietly rehire. Cappelli: “A few decades ago, the market stopped going up because investors started to realize that companies were not actually doing the layoffs that they said they were going to do.” The same dynamic is starting to apply to AI attribution now.
What does a genuine AI-driven restructuring look like, so you can identify the difference?
Five criteria distinguish genuine AI-driven restructuring from spin.
Criterion 1: Specific task substitution evidence. The organisation can name the exact tasks now performed by AI, with deployment dates predating the announcement — not “AI is transforming customer service” but “our system, deployed in [month], now handles [specific queries] that previously required [number] FTEs.”
Criterion 2: Measurable productivity data. Oxford’s test: “If AI were already replacing labour at scale, productivity growth should be accelerating.” No measurable productivity improvement means the attribution cannot be verified.
Criterion 3: Internal documentation consistency. Board papers, performance records, and WARN Act filings all use the same language as public communications. Zero of 162 NY WARN filers passed this test.
Criterion 4: Functional concentration. Affected roles are concentrated in functions where current AI capabilities can genuinely substitute. If roles are distributed across unrelated functions, AI attribution requires task-level evidence for each one.
Criterion 5: Timeline plausibility. Genuine AI-driven workforce changes require months of deployment, testing, and transition. A six-week timeline is not plausible.
The structural vs. cyclical test: “If business conditions improved in 12 months, would we rehire for this role?” If yes — the unemployment is cyclical, not structural.
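The five criteria and the structural-vs-cyclical test combine into a single screen. The sketch below is illustrative — the field names and the three-month timeline threshold are our assumptions, chosen to mirror the criteria above, not an established standard.

```python
# Hypothetical screen combining the five criteria with the rehire test.
# Field names and the timeline threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RestructuringClaim:
    task_level_evidence: bool        # Criterion 1: named tasks, dated deployment
    productivity_data: bool          # Criterion 2: measurable improvement
    docs_consistent: bool            # Criterion 3: filings match public language
    roles_concentrated: bool         # Criterion 4: cuts in automatable functions
    months_since_deployment: int     # Criterion 5: timeline plausibility
    would_rehire_in_12_months: bool  # structural vs. cyclical test

def classify(claim: RestructuringClaim) -> str:
    # The rehire test comes first: if the role returns with better
    # conditions, the change is cyclical regardless of the AI framing.
    if claim.would_rehire_in_12_months:
        return "cyclical - not genuine structural AI displacement"
    criteria = [
        claim.task_level_evidence,
        claim.productivity_data,
        claim.docs_consistent,
        claim.roles_concentrated,
        claim.months_since_deployment >= 3,  # illustrative threshold
    ]
    if all(criteria):
        return "consistent with genuine AI-driven restructuring"
    return f"spin risk: {5 - sum(criteria)} of 5 criteria unmet"
```

The ordering is the point: a claim that fails the rehire test never reaches the five criteria, because cyclical cuts dressed in AI language are the core AI-washing pattern.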
The Klarna case: CEO Sebastian Siemiatkowski claimed the company replaced 700 employees with AI. Quality declined, customers revolted, the company had to rehire. Even with specific deployment claims, the actual outcome required revision. For a full spectrum analysis — Amazon, Salesforce, Duolingo, and Klarna — see the case study comparison across the AI-washing spectrum.
For workforce planning, the honest baseline matters: knowing what genuine AI displacement looks like, versus what companies claim, is the factual foundation for separating real restructuring from fictional restructuring when you build headcount plans.
Frequently Asked Questions
What is the difference between structural unemployment and cyclical unemployment?
Structural unemployment is permanent — the work no longer exists because technology has replaced the function. Cyclical unemployment is temporary — the work still exists but demand is lower. Plain language test: if the company would rehire for the role in 12 months, the unemployment is cyclical, not structural.
How many employees must a company have before the WARN Act applies?
Federal WARN Act: 100 or more employees, 60-day notice triggered by 50 or more employee layoffs or 33% of workforce. NY WARN Act: 50 or more employees, 90-day notice, with the 2025 AI/automation disclosure checkbox. Most SaaS, FinTech, and HealthTech companies at 50–500 employees operating in New York fall within NY WARN Act scope.
What does the NY WARN Act automation checkbox actually require?
Employers must specify whether “technological innovation or automation” was a contributing factor. Zero of 162 NY WARN Act filings checked the box in the analysed period, despite many of the same companies attributing cuts to AI publicly.
Can I cite the Challenger, Gray & Christmas data in a board discussion?
Only with the caveat: it records stated reasons, not verified causes. Using it to support AI attribution is circular — it shows companies claiming AI attribution, which is the phenomenon being challenged. Better sources: Yale Budget Lab, Oxford Economics, and the NBER study.
What did Sam Altman say about AI-washing and layoffs?
At the India AI Impact Summit in February 2026, Altman acknowledged companies are blaming AI for layoffs they would have made anyway. Difficult to dismiss as anti-AI bias.
Is AI-washing illegal?
Not directly in most jurisdictions, but specific forms create real exposure: WARN Act inconsistency risk when public AI attribution conflicts with legal filing language; potential securities fraud risk in material investor communications. The Warner/Hawley AI-Related Job Impacts Clarity Act (November 2025) would create specific disclosure obligations if enacted.
How do I use the Productivity Paradox in a board discussion?
Robert Solow: “You can see the computer age everywhere but in the productivity statistics.” Apollo Global’s Torsten Slok: “AI is everywhere except in the incoming macroeconomic data.” Goldman Sachs found AI boosted the US economy by “basically zero” in 2025. Use it to rebut urgency framing.
What are phantom layoffs and how do they relate to AI-washing?
Phantom layoffs — a term coined by Wharton’s Peter Cappelli — are announced layoffs that never fully materialise. AI-washing is the current version: announcing AI-driven headcount reductions that are not actually implemented at the claimed scale.
What is the NACD Three-Pillar Framework for board oversight of AI workforce decisions?
The National Association of Corporate Directors recommends: Human Capital Foundations (baseline workforce metrics before AI deployment); AI Strategy Framework (governance structure including workforce impact assessment); and Talent Impact Assessment (structured evaluation before deployment). Use this to request the board apply its own governance standards.
How should I distinguish genuine AI-driven role elimination from a temporary pause in hiring?
Role elimination (structural): the function is removed permanently; AI performs the work. Hiring pause (cyclical): the function exists but new hires are paused; the role may be reinstated. Test: is there a specific AI system in production for the relevant function? If not, AI attribution is premature.
This article is part of our series on AI-washing in workforce decisions — covering what the data shows, why companies do it, how the major players rank on the spectrum, and what regulatory accountability looks like in practice.