Business | SaaS | Technology
Mar 5, 2026

State Actors and Cybercriminals Are Now Using the Same AI Fraud Infrastructure

AUTHOR

James A. Wondrasek
[Image: Graphic representation of AI-enabled social engineering showing state actor and cybercriminal convergence]

The AI tools that nation-states develop to attack high-value targets become commercially available to criminal operators within months. The capability ceiling for financially motivated attacks on ordinary organisations is now set by nation-state innovation — not criminal ingenuity alone.

DPRK, Iranian, Russian, and Chinese state-sponsored groups are documented using the same $30/month Dark LLMs, the same deepfake infrastructure, and the same disposable virtual machine platforms as independent cybercriminals.

This article maps the documented threat actors, the shared infrastructure they use, and what that convergence means for your threat model. Every claim references a named intelligence source: Google’s Threat Intelligence Group (GTIG), CrowdStrike’s 2025 Global Threat Report, Group-IB, and Microsoft’s Digital Crimes Unit. For the broader picture of the AI-enabled social engineering threat landscape, our pillar guide covers the full context.

Which state-sponsored threat actors are using AI for social engineering attacks?

At least four nation-states — DPRK, Iran, Russia, and China — have state-sponsored groups with documented, active use of generative AI in offensive operations.

Google’s GTIG has published direct evidence of APT42 (Iran / IRGC), APT28/FROZENLAKE (Russia / GRU), APT41 (China / PRC), and UNC1069/MASAN (DPRK) using AI platforms including Gemini and the Hugging Face API. CrowdStrike’s 2025 Global Threat Report documents FAMOUS CHOLLIMA (DPRK) conducting corporate infiltration operations using GenAI for persona creation and live job interview assistance.

None of these groups are building bespoke AI systems. They are using commercially available or openly hosted models. The same Gemini that APT42 uses to craft phishing lures is available to anyone with a Gmail account. The same Hugging Face API that APT28 uses to dynamically generate Windows commands is public infrastructure.

That is the convergence mechanism. The same AI tools used to clone voices and generate synthetic identities are accessible to state actors and criminals at identical price points.

What is North Korea doing with AI — and why should non-government organisations care?

North Korea operates at least two documented threat groups using AI offensively — FAMOUS CHOLLIMA targeting corporate hiring pipelines, and UNC1069/MASAN conducting cryptocurrency theft.

FAMOUS CHOLLIMA operatives create AI-generated LinkedIn profiles with believable employment histories and AI-generated profile images, then use GenAI to produce plausible technical answers during live job interviews. The goal is legitimate employment at private technology companies, providing long-term insider access. CrowdStrike responded to 304 FAMOUS CHOLLIMA incidents in 2024, with nearly 40% classified as insider threat activity. These are documented incidents at private technology companies — not government agencies or defence contractors.

UNC1069/MASAN uses Gemini for cryptocurrency research and reconnaissance, then distributes the BIGMACHO backdoor via deepfake video lures impersonating known figures in the cryptocurrency industry. Victims are directed to download what is presented as a “Zoom SDK” link.

Any organisation with active engineering recruitment is within FAMOUS CHOLLIMA’s operational scope. Their methods — AI-generated personas, GenAI interview assistance — are already mirrored in commercial tools. Synthetic identity kits, including AI-generated faces and voices, are available on dark web markets for approximately $5 each, with sales continuing to rise through 2025 (Group-IB).

How are Iranian and Russian state actors using AI in active operations?

Iran and Russia represent two distinct patterns: systematic intelligence collection and surveillance targeting on one hand, architecturally novel malware on the other.

APT42 (Iran / IRGC) uses Google’s Gemini for phishing lure creation, translation of specialised vocabulary, target reconnaissance on think tanks and political organisations, and research into Israeli defence matters. APT42 also attempted to build a “Data Processing Agent” — a tool that converts natural language queries into SQL to track individuals by phone number, travel patterns, or shared attributes. That is AI-assisted mass surveillance, built on a publicly available LLM.

Iran is not running a single experimental programme. TEMP.Zagros (tracked as MUDDYCOAST), a separate Iranian group, used Gemini to develop custom malware including web shells and a C2 server — and inadvertently exposed hard-coded C2 domains and encryption keys to Gemini in the process, which GTIG used to disrupt the campaign.

APT28/FROZENLAKE (Russia / GRU) introduced an architectural shift that matters for detection. GTIG identified APT28 deploying PROMPTSTEAL against Ukrainian targets — the first documented case of LLM-querying malware in live operations. CERT-UA independently corroborated the finding, tracking it as LAMEHUG.

PROMPTSTEAL masquerades as an image generation programme. Instead of hardcoded exfiltration commands, it queries the Qwen2.5-Coder-32B-Instruct model via the Hugging Face API at runtime to dynamically generate Windows commands — collecting system information, process lists, network configuration, Active Directory data, and Office documents before exfiltration.

GTIG calls this “just-in-time AI” — malware that generates its own instructions dynamically rather than executing a fixed payload. No fixed payload means static signature detection has nothing to match.
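To make the architecture concrete, here is a minimal, benign sketch of the just-in-time pattern: the client ships only a prompt and assembles a request to a hosted model at runtime. The request shape follows the public Hugging Face Inference API and the model ID named above; the placeholder token and the demo prompt are illustrative assumptions, and nothing here is drawn from PROMPTSTEAL itself.

```python
import json

# Illustrative sketch of "just-in-time AI": the binary carries no fixed
# instruction set, only a prompt and the machinery to ask a hosted model
# for output at runtime. Endpoint shape follows the public Hugging Face
# Inference API; the token is a placeholder.
HF_ENDPOINT = (
    "https://api-inference.huggingface.co/models/"
    "Qwen/Qwen2.5-Coder-32B-Instruct"
)

def build_runtime_query(task_description: str) -> dict:
    """Assemble (without sending) the POST a just-in-time client would make."""
    return {
        "url": HF_ENDPOINT,
        "headers": {"Authorization": "Bearer <HF_TOKEN>"},  # placeholder
        "payload": json.dumps({
            "inputs": task_description,
            "parameters": {"max_new_tokens": 200},
        }),
    }

# The defining property: only the prompt exists on disk. The commands the
# model returns are generated per run, so there is no static payload for
# signature-based detection to match.
request = build_runtime_query("List benign example shell commands for a demo.")
```

The point of the sketch is the detection consequence: everything a scanner could fingerprint is an ordinary HTTPS request to a public API.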

What is China doing with AI-enabled cyber operations?

APT41 (China / PRC) used Gemini throughout August 2025 for C2 framework development, code obfuscation, and infrastructure reconnaissance (Google GTIG). APT41 sought assistance with C++ and Golang code for a C2 framework called OSSTUN, used prompts related to obfuscation libraries to harden tooling against detection, and used open forums to lure victims toward exploit-hosting infrastructure.

The distinguishing feature of Chinese group AI use is breadth. A separate China-nexus actor observed by GTIG used Gemini across every phase — reconnaissance, phishing, lateral movement, C2 configuration, and exfiltration — treating it as a general-purpose force multiplier rather than a specialist tool.

What is cybercrime-as-a-service and how does it connect state actors to your organisation?

Cybercrime-as-a-service (CaaS) is the commercial layer where AI attack infrastructure becomes available to any buyer with a dark web connection and a small monthly budget. The price points are specific: Dark LLMs for approximately $30 per month, disposable virtual machines from RedVDS for $24 per month (before its disruption), and synthetic identity kits for approximately $5 each.
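The arithmetic behind these price points is worth spelling out, since it recurs throughout this article. A quick sketch using the figures above (the assumption of five identity kits per month is illustrative):

```python
# Monthly entry cost for a commodity AI fraud stack, using the price
# points reported in this article (Group-IB, Microsoft DCU).
dark_llm = 30          # $/month, Dark LLM subscription
disposable_vm = 24     # $/month, RedVDS (pre-disruption)
identity_kits = 5 * 5  # five $5 synthetic identity kits (illustrative count)

monthly_cost = dark_llm + disposable_vm + identity_kits  # $79
assert monthly_cost < 100  # consistent with the "under $100/month" figure
```

Even doubling the identity-kit count keeps a complete operation under the cost of a single commercial software seat.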

Group-IB reports AI mentions on dark web forums are up 371% since 2019. Dark LLMs — purpose-built underground services operated behind Tor with no safety guardrails — have more than 1,000 active users. These are not jailbroken chatbots. They generate phishing content, malware code, and fraud scripts without restriction.

RedVDS, disrupted by Microsoft and Europol in January 2026, is the clearest case study. It provided subscribers with disposable Windows virtual machines leaving minimal forensic trace, combined with AI face-swapping and voice cloning for fraud. Since March 2025, RedVDS-enabled activity drove approximately $40 million in fraud losses in the United States alone. In a single month, more than 2,600 distinct VMs sent an average of 1 million phishing messages per day to Microsoft customers alone.

RedVDS is not an edge case. Affected sectors include construction, manufacturing, healthcare, logistics, education, and legal services. H2-Pharma, an Alabama-based pharmaceutical company, lost more than $7.3 million through a RedVDS-enabled BEC scheme.

For a deeper look at the economics of attack infrastructure and why these price points matter, that analysis is in our companion article.

How does state actor innovation become the criminal commodity of next year?

The capability commoditisation cycle runs in one direction. State actors develop techniques against high-value targets, those techniques prove effective, and within months they appear on dark web markets as subscription services.

PROMPTSTEAL’s just-in-time AI architecture is reproducible using open-source models and public APIs. Qwen2.5-Coder-32B-Instruct is on Hugging Face. The Hugging Face API is public. The only missing ingredient is the targeting and access method — which CaaS supplies.

FAMOUS CHOLLIMA’s AI-assisted persona creation is already mirrored in the $5 synthetic identity kits on the same markets where Dark LLMs are sold. That commoditisation cycle is already complete.

The tools DPRK, Iran, Russia, and China develop to attack high-value targets become commercially available to criminals targeting ordinary organisations within months. That is what this article maps — and the full picture of AI-driven fraud that SMB leadership needs to understand is in our comprehensive guide to AI-enabled social engineering threats.

What does the convergence of state and criminal AI tooling mean for SMB threat models?

The convergence requires three specific updates to your threat model.

Hiring pipelines are now an attack surface. FAMOUS CHOLLIMA’s AI-assisted corporate infiltration is documented, not theoretical — CrowdStrike responded to 304 incidents in 2024. Any organisation with active engineering recruitment needs identity verification that goes beyond a polished LinkedIn profile. Synthetic identity kits at $5 each make this scalable, not targeted.

Email and voice channels must account for AI-generated content. RedVDS provided AI face-swapping, voice cloning, and multimedia email thread generation at consumer price points. Deepfake fraud drove $347 million in verified losses in a single quarter (Group-IB). The entry cost for a complete AI fraud operation is under $100 per month.

Endpoint detection has a gap. PROMPTSTEAL’s just-in-time AI architecture bypasses static signature detection because no fixed payload exists. Most EDR tools identify known malware patterns. Dynamically generated runtime commands do not match that model. APT28 pioneered this architecture; it will enter criminal toolkits through the same commoditisation cycle every previous state innovation has followed.
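One pragmatic, non-signature heuristic that follows from this gap is egress monitoring: flag processes that contact hosted-LLM API endpoints when they have no business doing so. The sketch below is a hypothetical illustration, not vendor configuration; the host watchlist, allowlist, and event format are all assumptions.

```python
# Hypothetical egress heuristic: alert when an unexpected process talks
# to a hosted-LLM API endpoint. Watchlist, allowlist, and log schema are
# illustrative assumptions, not a product configuration.
LLM_API_HOSTS = {
    "api-inference.huggingface.co",       # Hugging Face Inference API
    "generativelanguage.googleapis.com",  # Gemini API
}

# Processes expected to call these APIs (browsers, sanctioned tooling).
ALLOWLISTED_PROCESSES = {"chrome.exe", "msedge.exe"}

def flag_llm_egress(events: list[dict]) -> list[dict]:
    """Return network events where an unexpected process queried an LLM API.

    Each event is assumed to look like:
        {"process": "name.exe", "dest_host": "hostname"}
    """
    return [
        e for e in events
        if e["dest_host"] in LLM_API_HOSTS
        and e["process"] not in ALLOWLISTED_PROCESSES
    ]

events = [
    {"process": "chrome.exe", "dest_host": "generativelanguage.googleapis.com"},
    {"process": "imagegen.exe", "dest_host": "api-inference.huggingface.co"},
]
alerts = flag_llm_egress(events)  # flags only the unexpected imagegen.exe
```

This catches the behaviour rather than the payload: a PROMPTSTEAL-style tool must phone an LLM API to function, and that network fact is observable even when the generated commands are not.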

You don’t need to build state-actor-grade defences. You need to recognise that the baseline sophistication of criminal attacks has permanently shifted upwards and update your threat model accordingly. The tools nation-states use today are available to the criminals targeting your organisation tomorrow.

For a more complete picture of the full AI-driven fraud landscape and why commodity criminal infrastructure is now nation-state tested, the companion articles cover every angle of this threat.

Frequently Asked Questions

Are SMBs actually being targeted by nation-state hackers?

Not directly. The risk is capability commoditisation — state actors develop AI tools for high-value targets, those tools become commercially available to criminals within months. The same $30/month Dark LLMs and $5 synthetic identity kits are used by both.

What is PROMPTSTEAL and why is it significant?

PROMPTSTEAL is malware deployed by APT28/FROZENLAKE (Russian military intelligence) against Ukrainian targets. It queries the Qwen2.5-Coder-32B-Instruct model via the Hugging Face API at runtime to dynamically generate Windows commands — the first documented case of LLM-querying malware in live operations. Static signature detection cannot catch it because no fixed payload exists to match. CERT-UA independently corroborated the finding as LAMEHUG.

How much does it cost to run an AI-powered fraud operation?

Under $100 per month. Dark LLMs run approximately $30 per month. Synthetic identity kits are around $5 each. RedVDS provided disposable virtual machines for $24 per month before its January 2026 disruption. Price points sourced from Group-IB’s 2026 whitepaper and Microsoft’s Digital Crimes Unit.

What is a Dark LLM?

A Dark LLM is a purpose-built underground AI service sold on dark web markets, operated behind Tor with no safety guardrails. Unlike ChatGPT or Gemini, these are built specifically to generate phishing content, malware code, and fraud scripts. Group-IB reports more than 1,000 active users across multiple vendors.

How did FAMOUS CHOLLIMA use AI to infiltrate technology companies?

FAMOUS CHOLLIMA operatives created AI-generated LinkedIn profiles and used GenAI to produce plausible technical answers during live job interviews — obtaining legitimate employment at private technology companies. CrowdStrike responded to 304 incidents in 2024, with nearly 40% involving insider threat activity.

What was RedVDS and why does its disruption matter?

RedVDS was a cybercrime-as-a-service platform disrupted by Microsoft and Europol in January 2026. It provided disposable Windows VMs for $24 per month combined with AI face-swapping and voice cloning for fraud. It drove approximately $40 million in US losses and operated more than 2,600 VMs sending around 1 million phishing messages per day. Its disruption shows the scale of commercial AI fraud infrastructure — and that similar platforms will emerge to replace it.

Can existing endpoint detection tools catch AI-generated malware like PROMPTSTEAL?

This is an active gap. PROMPTSTEAL queries an external LLM at runtime to generate commands dynamically, which bypasses static signature detection. Most EDR tools identify known malware patterns, not dynamically generated instructions. The defensive toolchain has not yet adapted to this architectural shift.

How is APT42 using AI for surveillance and targeting?

APT42 (Iran / IRGC) used Gemini to craft phishing lures, translate specialised content, and conduct target reconnaissance on think tanks and political organisations. They also attempted to build a “Data Processing Agent” converting natural language queries to SQL for tracking individuals by phone number, travel history, and shared attributes.

What is cybercrime-as-a-service and how has AI changed it?

CaaS is a business model where criminal tools and infrastructure are sold on subscription, mirroring legitimate SaaS markets. AI has expanded CaaS capabilities significantly — Dark LLMs for content generation, synthetic identity kits for persona creation, AI-enhanced VMs for automated fraud campaigns. Group-IB reports AI mentions on dark web forums are up 371% since 2019.

What is the BIGMACHO backdoor and how is it distributed?

BIGMACHO is a backdoor deployed by UNC1069/MASAN (DPRK) via deepfake video lures impersonating known cryptocurrency industry figures. Victims are directed to download a malicious “Zoom SDK” link. The operation targets cryptocurrency organisations for financial theft.

Is there a practical difference between a nation-state attack and a criminal attack when the tools are identical?

At the infrastructure level, increasingly not. Both use the same Dark LLMs, the same synthetic identity kits, and the same disposable VM platforms. The difference is in intent (intelligence vs. financial gain) and persistence. But the tools and techniques the defending organisation faces are converging.

