Business | SaaS | Technology
Feb 24, 2026

Synthetic Candidate Fraud Is Real and Remote Engineering Roles Are the Primary Target

AUTHOR

James A. Wondrasek
Graphic representation of synthetic candidate fraud targeting remote engineering roles

In July 2024, KnowBe4 — a US cybersecurity awareness training company whose entire business is teaching people to detect social engineering — hired a North Korean operative. The person passed four rounds of video interviews, a background check, and reference verification. What caught the operative was not the hiring process. It was endpoint detection software that flagged malware loaded onto the company-issued laptop within hours of it arriving.

If that can happen at a company that trains other organisations to spot deception, it can happen at yours.

This is not theoretical. It is documented, growing, and disproportionately targeting remote engineering roles. Gartner projects that by 2028, one in four candidate profiles worldwide will be fake. Amazon’s CISO disclosed in December 2025 that the company had blocked over 1,800 suspected North Korean applications since April 2024, with a 27% quarterly increase. This is not incidental — it is industrial.

This article defines the threat, presents the evidence, and explains why remote engineering hiring is structurally vulnerable. For a broader view, see synthetic candidate fraud in hiring and the full cluster of articles it anchors.

What Is Synthetic Candidate Fraud and How Does It Differ from Resume Padding?

Synthetic candidate fraud is the use of a fabricated or AI-assembled identity to pass screening and secure employment. The applicant is not who they claim to be.

This is different from resume fraud — a real person overstating real credentials. And it is different again from overemployment — holding two or more legitimate remote jobs under your real identity. Synthetic candidate fraud is adversarial, often criminal, and in state-sponsored cases a potential OFAC sanctions violation.

Resume padding is dishonesty about credentials. Synthetic candidate fraud is dishonesty about your entire existence.

As Brian Long, CEO of Adaptive Security, put it: “These ‘employees’ can pass screening, ace remote interviews, and start work with legitimate credentials. Then, once inside, they steal data, map internal systems, divert funds, or quietly set the stage for a larger attack.”

If your current process cannot distinguish a real person from a well-constructed synthetic identity, you will not find out until something goes wrong. For a closer look at why standard screening misses synthetic candidates, that detection gap is covered in a companion piece.

How Does a Synthetic Identity Actually Get Built?

A synthetic identity is assembled, not stolen wholesale. Real personal data fragments — Social Security numbers, addresses, names from genuine records — get combined with AI-generated components to produce a candidate that passes both automated screening and human judgement.

Identity fragments come from data breaches and dark web purchases. Bad actors also hijack dormant LinkedIn accounts, often those of genuine software engineers, to inherit verification marks and a credible work history.

AI-generated headshots from StyleGAN-class tools produce photorealistic images with no reverse-image-search footprint.

AI resume generation produces keyword-optimised, ATS-targeted resumes at scale. The fraud signal — near-identical resumes with the same phrasing, or resumes far more articulate than interview performance — is only detectable if you are looking for it.
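
The duplicate-phrasing signal is mechanically easy to screen for once you decide to look. A minimal sketch, assuming application texts are available as plain strings; the function names and 0.9 threshold are illustrative, not taken from any real ATS:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough 0-1 similarity between two resume texts, compared word by word."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

def flag_near_duplicates(resumes: dict[str, str], threshold: float = 0.9) -> list[tuple[str, str]]:
    """Return applicant pairs whose resume wording is suspiciously similar."""
    names = list(resumes)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if similarity(resumes[a], resumes[b]) >= threshold
    ]
```

Two applicants submitting the same AI-generated boilerplate would surface as a flagged pair; a production system would use more robust text fingerprinting, but the principle is the same.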

Fabricated references are AI-generated voice responses or co-conspirators who confirm employment history on request.

The laptop farm comes after hire. DPRK operatives request that company-issued laptops be sent to US residential addresses managed by facilitators who maintain racks of devices accessible from overseas. The US government uncovered 29 such laptop farms as of June 2025. An Arizona woman was sentenced to more than eight years in July 2025 for running one that serviced over 300 US companies and generated over $17 million for the North Korean government.

Standard background checks verify individual data points; they do not verify that those points belong to the same person. That gap is what makes it work. For detail on the state-sponsored infrastructure, see the North Korean IT worker scheme.
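
That gap can be made concrete with a toy model. The records and field names below are invented for illustration, and real verification pipelines are far more involved, but the structural difference between the two checks is the point:

```python
def pointwise_check(claim: dict, records: list[dict]) -> bool:
    """What standard checks effectively do: each claimed value
    matches *some* authoritative record, considered in isolation."""
    return all(any(rec.get(field) == value for rec in records)
               for field, value in claim.items())

def linkage_check(claim: dict, records: list[dict]) -> bool:
    """The missing control: a single record must contain *all*
    claimed values together, i.e. they belong to the same person."""
    return any(all(rec.get(field) == value for field, value in claim.items())
               for rec in records)
```

A synthetic identity stitched together from two real people passes the point-wise check, because every fragment is individually genuine, while failing the linkage check.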

How Widespread Is This — and Do the Numbers Hold Up Under Scrutiny?

Let’s look at what is actually documented.

KnowBe4 (July 2024) — covered in the case study section below.

Amazon (December 2025). CISO Stephen Schmidt disclosed that Amazon had blocked over 1,800 suspected DPRK-affiliated applications since April 2024 — a 27% quarterly increase. His framing: “this trend is likely to be happening at scale across the industry.”

DOJ enforcement (November 2025). Five guilty pleas in a single action covered 136 US victim companies, generating over $2.2 million for the DPRK regime.

Gartner predicts one in four candidate profiles worldwide will be fake by 2028. The primary research is paywalled; the figure is cited across authoritative sources including Huntress and The Hacker News. Take the specific percentage as indicative, but the directional claim is consistent with every observed trend.

Sumsub's Identity Fraud Report 2025-2026, based on analysis of over 4 million fraud attempts, found sophisticated identity fraud attacks grew 180% year-on-year. Synthetic identity fraud accounted for 21% of all first-party fraud. Multi-step attacks rose from 10% to 28%. The attacks are more coordinated and harder to detect — not just more frequent.

And this is not just an enterprise problem. The Arizona laptop farm case impacted over 300 companies — not all Fortune 500. Any company with cloud credentials and GitHub access is a viable target regardless of headcount.

Why Do Remote Engineering Roles Attract This Specific Threat?

Remote engineering roles are the primary target because they combine four properties that no other job category matches: privileged systems access, a remote format that removes physical identity checkpoints, compensation that justifies the investment, and high-value assets available from day one.

Privileged access. A software engineer at a typical SaaS or FinTech company gets GitHub repository access, AWS or GCP credentials, and often read access to production databases. The attack goal is a legitimate employee account with trusted permissions.

Remote format removes the last checkpoint. Before 2020, in-person onboarding was a natural identity verification moment. Remote work removed it. The pandemic expanded remote opportunities, which North Korean intelligence services exploited to scale their operations. Generative AI then made fabricating a convincing identity cheap enough to industrialise.

Compensation justifies the investment. Individual DPRK IT workers earn an average of $300,000 per year according to US government estimates. The cost of constructing a synthetic identity with AI tools is low relative to that return.

Asset value from day one. A synthetic hire with GitHub write access and AWS credentials is an insider threat from the moment they onboard — positioned for data theft, malware installation, and ransomware staging.

Your credentials and production data are just as operationally valuable to an adversary as those at a company ten times your size. Understanding how recruiting became a security entry point starts with recognising that the hiring pipeline is now attack surface.

The KnowBe4 Case: What Does It Tell Us About Our Own Hiring Process?

KnowBe4 is not an ordinary company that got unlucky. Its product is teaching people to recognise and resist social engineering. If any organisation was going to catch a synthetic candidate, it was this one.

The operative passed four rounds of video interviews, a background check, and reference verification before being hired as a software engineer. Detection came from endpoint security — not HR. Within hours of the MacBook arriving at a US laptop farm address, the operative loaded information-stealing malware. KnowBe4’s security operations centre flagged it immediately.

The lesson is direct: hiring process controls alone cannot catch a well-constructed synthetic candidate. A company without that level of endpoint monitoring would not have caught the malware deployment until far more damage had been done.

Mandiant's finding that nearly every Fortune 500 CISO has admitted to hiring at least one DPRK worker suggests that for every KnowBe4, there are organisations where the hire was never detected at all.

If a security company with four interview rounds was fooled, a standard hiring process is more exposed, not less.

Opportunistic Fraud Versus State-Sponsored Operations — Why Does the Distinction Matter?

Not all synthetic candidate fraud comes from Pyongyang. Two categories share the same attack vector but differ in motivation and legal consequence.

State-sponsored DPRK operations are structured and regime-directed — front companies, revenue targets, US-based facilitators managing laptop farms. Companies that discover a DPRK hire face a security incident response and an OFAC sanctions compliance obligation simultaneously.

Opportunistic fraud rings use the same tools — AI resume generators, headshot generators, deepfake video — but operate independently for financial gain. Oleksandr Didenko, a Ukrainian national who pleaded guilty in 2025, ran an operation stealing US citizen identities and selling them to overseas IT workers seeking remote work.

DPRK operatives can often do the work initially because sustained employment is the objective. Opportunistic fraudsters may fail performance expectations faster. But both exploit remote hiring with inadequate identity verification, so the defensive controls overlap.

For deeper treatment of the DPRK infrastructure and OFAC compliance, see the North Korean IT worker scheme.

Where Is This Heading? Agentic AI and the Automated Attack Chain

The current threat is human-operated with AI assistance. The 2026 escalation removes that human oversight entirely.

Sumsub’s Identity Fraud Report identifies AI fraud agents as “autonomous, self-learning systems capable of executing entire fraud operations with minimal human intervention.” In practical terms: an AI agent could generate a synthetic identity, build a social media history, submit tailored applications to hundreds of companies simultaneously, and conduct initial phone screens — without a human initiating each step. Scaling from ten candidates to a thousand costs almost nothing.

Multi-step attacks rose from 10% to 28% of all identity fraud between 2024 and 2025, per Sumsub’s analysis of over 4 million fraud attempts. Controls built for today’s threat will face a harder adversary within 12-18 months.

For the broader security picture that connects the hiring fraud vector to other attack surfaces, the full threat landscape maps the complete terrain.

Conclusion

Synthetic candidate fraud is documented at KnowBe4, quantified at Amazon (1,800+ blocked applications, December 2025), and projected at scale by Gartner (one in four candidate profiles by 2028). Any company issuing GitHub access and cloud credentials to remote engineers is a viable target.

The hiring controls most companies rely on — video interviews, background checks, reference verification — were not designed to detect adversarial identity fabrication. Detection at KnowBe4 came from endpoint security, not HR.

Next steps: understand how recruiting became a security entry point to frame the security architecture implications, and review the North Korean IT worker scheme for the enforcement and compliance picture. For the broader picture of hiring fraud risk across the full threat landscape, the cluster overview connects all the evidence.

Frequently Asked Questions

What is the difference between synthetic candidate fraud and resume fraud?

Resume fraud is a real person exaggerating their qualifications. Synthetic candidate fraud is a fabricated identity — the applicant is not who they claim to be. One is dishonesty about credentials. The other is deception about your entire existence.

Can a completely fake person actually get hired at a tech company?

Yes. KnowBe4 hired a North Korean operative in July 2024 who passed four video interviews and a background check. The operative was caught only when endpoint detection flagged malware loaded onto the company-issued laptop. The hiring process caught nothing.

What does Gartner’s “one in four applicants will be fake by 2028” prediction mean?

Gartner projects that by 2028, 25% of candidate profiles worldwide will be synthetic or fraudulently constructed — AI-generated resumes, fabricated identities, deepfake-assisted applications. The primary research is paywalled, but the figure is widely cited across authoritative secondary sources.

Are small companies targeted or is this only a Fortune 500 problem?

Small companies are targeted. The Arizona laptop farm case impacted over 300 US companies. The DOJ’s November 2025 enforcement actions covered 136 victim companies. Cloud credentials and code access are the target — headcount is irrelevant.

What is a laptop farm and how does it relate to hiring fraud?

A laptop farm is a US residence containing racks of company-issued laptops managed by a facilitator, maintaining the appearance that a remote worker is physically located in the US while the actual operative works from overseas. The US government uncovered 29 as of June 2025. The Arizona case generated over $17 million for the North Korean government.

How did Amazon discover 1,800 fake job applicants?

Amazon CISO Stephen Schmidt disclosed in December 2025 that Amazon had blocked over 1,800 suspected DPRK-affiliated applications since April 2024, with a 27% quarterly increase — framing the problem as industry-wide, not Amazon-specific.

What is agentic AI fraud and why does it matter for hiring?

Agentic AI fraud is autonomous AI agents executing end-to-end fraud — identity creation, job application, initial screening — with minimal human oversight. Multi-step attacks rose from 10% to 28% of all identity fraud between 2024 and 2025. Sumsub and Experian identify it as the 2026 escalation vector.

Is overemployment the same threat as synthetic candidate fraud?

No. Overemployment is a real person holding two or more legitimate remote jobs under their own identity — financially motivated and non-malicious. Synthetic candidate fraud involves fabricated identities and adversarial intent, potentially funnelling salary to a hostile state or staging data theft.

What are the legal consequences of unknowingly hiring a DPRK operative?

Employing a DPRK operative — even unknowingly — is a potential OFAC sanctions violation. The November 2025 DOJ actions included five guilty pleas and $15 million in civil forfeiture. Companies face concurrent security incident response and sanctions compliance obligations.

Why are deepfake interviews hard to detect?

Detection requires biometric liveness checks or structured behavioural interview techniques. Interpol has warned that synthetic media “can enable highly convincing impersonations that are difficult to distinguish from genuine content.” NIST evaluations show performance varies significantly by deepfake type and media conditions. Interviewer judgement alone is not a reliable checkpoint.

What is the first thing I should do if I suspect a candidate is synthetic?

Do not confront the candidate. Escalate to your security team or legal counsel. Preserve all application materials, interview recordings, and communications. If a DPRK connection is suspected, the FBI’s Counterintelligence Division and IC3 have reporting channels for IT worker scheme tips.
