A complete AI fraud operation now costs an attacker under $60 a month. Less than most SaaS subscriptions. You’re looking at $5 for a synthetic identity kit, $24/month for anonymous virtual machine infrastructure through a service like RedVDS, and about $30/month for a Dark LLM that writes the scripts. That’s your attacker’s stack.
On the other side of that ledger: the FBI’s Internet Crime Complaint Center recorded $16.6 billion in total cybercrime losses in 2024, with business email compromise alone accounting for $2.7 billion from 21,442 complaints. This article lays out both sides of that equation — what attackers spend and what organisations lose — so you can understand the economic logic driving the acceleration. It’s part of our comprehensive guide to the full AI social engineering threat landscape, which covers how these attacks work in practice across every stage of the threat.
Why is AI fraud suddenly everywhere — what changed in the last two years?
Costs collapsed below the threshold at which running many parallel fraud operations went from implausible to obviously rational.
Group-IB documented a 371% surge in dark web forum posts featuring AI keywords since 2019, with a tenfold increase in replies. AI isn’t an occasional exploit anymore. It’s embedded as core criminal infrastructure.
Three things happened at roughly the same time. Dark LLMs appeared — criminal-purpose language models with no safety guardrails, priced at around $30/month. Synthetic identity kits dropped to $5 on the dark web. And anonymous VM infrastructure became commodity-priced through services like RedVDS at $24/month. Three separate cost collapses, all landing together.
What previously required technical skill and significant investment now requires a credit card and a few subscriptions. The volume data makes this concrete — multi-step fraud attacks rose 180% year-over-year (BIIA), deepfake attacks surged 880% in 2024 (Pindrop), and identity fraud attempts using deepfakes surged 3,000% in 2023 (Deloitte).
What does it actually cost an attacker to run an AI fraud operation?
The attacker cost stack has three layers, each available independently on the dark web.
Identity layer — $5. Synthetic identity kits sell for approximately $5 per package on dark web forums. That gets you a generated face image, a cloned voice sample, fabricated supporting credentials, and a fake employment history. Everything needed to construct a fraudulent identity for KYC bypass, new account fraud, credit applications, or social engineering with a credible false persona.
Scripting layer — ~$30/month. Dark LLMs like WormGPT and FraudGPT are criminal-purpose language models trained without safety guardrails. They generate personalised phishing scripts and social engineering pretexts at scale. Group-IB identifies at least three active vendors with over 1,000 active subscribers, with subscriptions ranging from $30 to $200 per month. These aren’t rough tools thrown together — they get updates, customer support, and feature iterations, just like legitimate SaaS products.
Infrastructure layer — $24/month. RedVDS gave criminals access to disposable virtual computers for as little as $24/month, making fraud operations cheap, scalable, and hard to trace. In one month, 2,600 distinct RedVDS virtual machines sent an average of one million phishing messages per day to Microsoft customers alone. Microsoft’s Digital Crimes Unit and Europol took RedVDS offline in January 2026 after linking it to approximately $40 million in U.S. fraud losses since March 2025.
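Put together, the arithmetic is simple. A minimal sketch in Python (assuming one identity kit is consumed per month; in practice kits are one-off purchases, so the recurring cost is slightly lower):

```python
# Attacker cost stack, using the dark-web prices documented above.
identity_kit = 5        # synthetic identity kit, per package
dark_llm = 30           # Dark LLM subscription, entry tier, per month
infrastructure = 24     # anonymous VM service (e.g. RedVDS), per month

monthly_total = identity_kit + dark_llm + infrastructure
print(f"Monthly stack cost: ${monthly_total}")   # -> $59
```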
Total: under $60/month for a complete AI fraud capability. The criminal marketplace follows the same competitive dynamics as legitimate SaaS — vendor competition is driving further price compression at every layer. That dynamic has a name.
What is Cybercrime-as-a-Service and how does it work like a subscription business?
Cybercrime-as-a-Service (CaaS) is a criminal market model that mirrors B2B SaaS. Attack capabilities are packaged into subscription services, sold by specialised vendors across different layers, with transparent pricing and service-level guarantees. Group-IB notes that CaaS vendors mimic aspects of legitimate SaaS businesses — pricing tiers, subscription models, customer support, the lot.
Think about how a legitimate company assembles its SaaS stack — cloud hosting, analytics, authentication, CRM — from a bunch of specialised vendors. An attacker does exactly the same: one vendor for identity kits, one for Dark LLM scripting, one for infrastructure. Each vendor specialises. That division of labour reduces the barrier to entry at every layer and makes the overall ecosystem much harder to dismantle.
Deepfakes-as-a-Service (DaaS) is a growing sub-market of CaaS — AI-generated video and audio impersonation tools available on subscription, with tiered pricing and product update cycles. Group-IB documented $347 million in verified deepfake fraud losses in a single quarter, which tells you how fast this sub-market is scaling.
Here is the thing that matters most if you’re thinking about defensive strategy: disrupting a single vendor does not collapse the ecosystem. Microsoft’s RedVDS takedown was their 35th civil action targeting cybercrime infrastructure. New services keep emerging because the underlying market incentives are intact. Remove one supplier in a competitive market and you create an opening for others — and state-sponsored actors relying on the same infrastructure make the problem even harder to contain.
What are the documented losses from AI-enabled fraud so far?
The aggregate figures actually understate the problem because most losses go unreported.
FBI IC3 2024 data: total cybercrime losses hit $16.6 billion, with business email compromise accounting for $2.7 billion from 21,442 complaints. Group-IB documented $347 million in deepfake fraud losses in a single quarter. RedVDS alone drove approximately $40 million in U.S. losses since March 2025 — from one infrastructure provider charging $24/month.
In January 2024, an employee at the engineering firm Arup’s Hong Kong office authorised 15 wire transfers totalling HKD 200 million, roughly US$25.6 million, after a video conference in which every other participant, including the company’s CFO, turned out to be an AI-generated deepfake. The Hong Kong Police Force confirmed the incident. No arrests have been made, and the funds remain unrecovered.
1 in 10 adults encountered AI voice cloning scams, and 77% of voice scam victims reported financial losses (BIIA). The FTC documented $12.5 billion in U.S. consumer fraud losses in 2024 — a 25% increase despite fraud report volumes staying flat. Scams are getting more effective, not more numerous.
From Microsoft’s RedVDS case files: H2-Pharma, an Alabama-based pharmaceutical company, lost more than $7.3 million through RedVDS-enabled BEC — money intended for cancer treatments and children’s allergy medications. These aren’t enterprise targets with big security teams. They look like your clients, or your suppliers.
How does the attacker ROI compare to the cost of a defensive response?
Run the numbers the way you’d evaluate any business unit.
The attacker’s operating cost: $720/year (12 × $60/month). The Hong Kong case yielded $25.6 million from a single operation. Even conservatively — if 1 in 100 BEC attempts succeeds at a $125,000 average yield — the annual return on a $720 investment beats any legitimate business benchmark by a wide margin.
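Here is that calculation as a minimal sketch. The 1-in-100 success rate is the conservative assumption above, the average yield is derived from the IC3 BEC figures already cited, and the 100-attempts-per-year volume is a hypothetical number added purely for illustration:

```python
# Conservative attacker ROI under the assumptions stated above.
annual_cost = 12 * 60                 # $720/year operating cost
success_rate = 1 / 100                # assumed: 1 in 100 BEC attempts succeed
avg_yield = 2_700_000_000 / 21_442    # ~$125,932: IC3 BEC losses / complaints
attempts = 100                        # hypothetical annual attempt volume

expected_return = attempts * success_rate * avg_yield
print(f"Expected return: ${expected_return:,.0f} on ${annual_cost}")
print(f"ROI: {expected_return / annual_cost:,.0f}x")   # -> ~175x
```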
The efficiency multiplier makes it worse. AI-generated spear phishing achieves a 54% click-through rate compared to 12% for human-crafted attempts, according to CrowdStrike data. That’s a 4.5× efficiency gain. AI tools reduce the cost of attacks while simultaneously making them more effective. At $60/month per operation, running 10 simultaneous campaigns costs $600/month. The rational strategy is to run many moderately targeted attacks in parallel and let the hit rate do the work.
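A quick sketch of both numbers (the click-through rates and the $60/month price are the sourced figures; the campaign counts are illustrative):

```python
# Efficiency gain of AI-generated spear phishing (CrowdStrike figures),
# plus the flat cost of scaling operations horizontally.
ai_ctr, human_ctr = 0.54, 0.12
print(f"Efficiency gain: {ai_ctr / human_ctr:.1f}x")    # -> 4.5x

cost_per_campaign = 60                # $/month per parallel operation
for n in (1, 10, 50):
    print(f"{n:>2} campaigns: ${n * cost_per_campaign:,}/month")
```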
For defenders, the asymmetry is the point. Your organisation needs to invest in training, detection tooling, verification protocols, and incident response capabilities. 91% of enterprises plan to increase spending on voice fraud prevention over the next 12 months (Modulate, January 2026). That defensive investment costs orders of magnitude more than the $720/year attacker operating cost it’s designed to counter. The asymmetry doesn’t resolve on its own, and the reason these low-cost attacks achieve such high hit rates comes down to detection failure at the human level.
What is the economic trajectory — where does this go from here?
The Group-IB trend line is an accelerating curve. The 371% increase in dark web AI mentions since 2019 has its steepest acceleration in 2024–2025. Pindrop tracked a 1,210% surge in deepfake attacks by December 2025. Deloitte projects generative AI fraud losses will climb from $12.3 billion in 2023 to $40 billion by 2027.
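For a sense of the compounding behind those figures, here is the annual growth rate the Deloitte projection implies (a derived number, not one Deloitte states directly):

```python
# Implied compound annual growth rate of the Deloitte projection.
losses_2023, losses_2027 = 12.3, 40.0    # $ billions, generative AI fraud
years = 2027 - 2023
cagr = (losses_2027 / losses_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")       # -> ~34.3% per year
```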
The next phase is agentic AI fraud — fully automated, machine-to-machine fraud chains where AI agents execute attacks end-to-end without any human operator involved. Experian identifies agentic AI fraud as the top emerging fraud threat for 2026. Once that transition happens, the last remaining constraint in the attacker cost stack — the human operator’s time — disappears. The scaling limit shifts from operator availability to computational capacity. And computational capacity is cheap.
Commoditised tools also lower the skill barrier, bringing in new entrants who couldn’t previously participate in criminal markets. More attackers. More parallel campaigns. At $60/month each.
For your organisation, the strategic question is how to calibrate defensive investment to the new economic reality — and what the cost-benefit case for specific defensive controls looks like when run against an attacker operating cost of $60/month. Attack volume will keep increasing regardless of individual disruption actions like the RedVDS takedown, because the underlying economic incentives are structural.
For the broader context of AI-enabled fraud, including how these attacks translate into specific attack vectors and who is running them, the AI-enabled social engineering threat landscape overview covers the full picture across every dimension of this threat.
FAQ
What is a Dark LLM and how does it differ from ChatGPT or Claude?
A Dark LLM is a language model with safety guardrails deliberately removed, sold on dark web platforms for criminal use. Unlike commercial models that refuse to generate phishing content or social engineering scripts, Dark LLMs like WormGPT and FraudGPT are purpose-built for exactly those tasks. Group-IB identifies at least three active vendors with over 1,000 active subscribers; subscriptions range from $30 to $200 per month.
How much does a synthetic identity kit cost on the dark web?
Approximately $5. A synthetic identity kit combines stolen PII with AI-generated facial images, voice samples, and fabricated employment histories — everything needed to pass KYC verification, open fraudulent accounts, or run social engineering with a credible false persona.
What is RedVDS and why was it shut down?
RedVDS was a virtual desktop infrastructure provider that sold anonymous VM instances at $24/month. Microsoft’s Digital Crimes Unit and Europol disrupted it in January 2026 after linking it to approximately $40 million in U.S. fraud losses since March 2025. In one month, 2,600 RedVDS VMs sent an average of one million phishing messages per day to Microsoft customers alone.
Is voice cloning fraud more common than video deepfake fraud?
Voice cloning is the higher-frequency attack vector — cheaper to run, requires less compute than real-time video deepfakes, and works over a standard phone call. Pindrop documented an 880% surge in deepfake attacks in 2024, with voice-based attacks comprising the majority, and a 1,300% year-on-year increase in deepfake voice calls.
How much money has been lost to AI voice cloning fraud?
Group-IB documented $347 million in deepfake fraud losses in a single quarter. The Hong Kong Arup case in January 2024 resulted in $25.6 million lost in a single incident involving a deepfake video conference. RedVDS infrastructure enabled approximately $40 million in U.S. losses since March 2025. FBI IC3 reported $2.7 billion in BEC losses for 2024, a growing proportion of which involves AI-generated voice and content.
What is the difference between WormGPT and FraudGPT?
Both are Dark LLMs sold on the dark web for criminal use. WormGPT launched in June 2023; FraudGPT followed days later, with subscriptions ranging from $90 to $200 per month depending on the tier. Multiple competing products — WormGPT, FraudGPT, DarkBERT, DarkBARD — tell you that the criminal AI market has matured to the point of active vendor competition, which only drives prices down further.
How does AI-generated phishing compare to human-crafted phishing in effectiveness?
AI-generated spear phishing achieves a 54% click-through rate compared to 12% for human-crafted attempts, according to CrowdStrike data — a 4.5× efficiency gain. AI tools reduce the cost of attacks while simultaneously making them more effective.
What does Cybercrime-as-a-Service mean for small and medium businesses?
CaaS means attacking your organisation no longer requires a skilled hacker — it requires a subscription. SMBs handle high-value transactions while typically lacking dedicated security teams — an attractive combination for attackers operating at $60/month.
Can disrupting platforms like RedVDS stop AI-enabled fraud?
Individual disruptions slow but don’t stop AI-enabled fraud. Microsoft’s RedVDS action was their 35th civil action targeting cybercrime infrastructure; new services emerge because the market incentives remain intact.
What is Deepfakes-as-a-Service (DaaS)?
Deepfakes-as-a-Service is a sub-market of Cybercrime-as-a-Service that provides AI-generated video and audio impersonation tools on a subscription basis, with tiered pricing, customer support channels, and product update cycles. Group-IB documented $347 million in DaaS-related fraud losses in a single quarter — evidence of how quickly this sub-market has scaled from niche capability to mainstream criminal infrastructure.
How do attackers use synthetic identities to bypass KYC checks?
Synthetic identity kits combine stolen PII with AI-generated photos, voice samples, and fabricated employment histories to pass KYC verification at financial institutions, open fraudulent accounts, and establish social engineering backstories — all for $5 per kit.