Business | SaaS | Technology
Feb 24, 2026

Why Background Checks Do Not Stop Deepfake Candidates and What Does

AUTHOR

James A. Wondrasek

Most hiring teams feel pretty good about their screening process. Background checks — tick. Video interview rounds — tick. Reference calls — tick. Signed offer letter — tick. If a candidate clears all of that, they must be who they say they are. Right?

Wrong. KnowBe4 found out the hard way. A newly hired software engineer passed four video interview rounds, background checks, and verified references. Within hours of the new hire receiving their work laptop, endpoint security flagged malware being loaded onto it. The employee was a North Korean operative who had used a stolen US identity and an AI-enhanced photo the entire time. Every standard check passed. Every single one.

Here is the structural problem: background checks confirm that documents and history exist for a name. They do not confirm that the live person in front of you is the owner of that name. That is not a gap in how checks are run — it is a gap in how the whole verification model was designed. Gartner predicts one in four candidate profiles worldwide will be fake by 2028. Attackers have found that gap and they are walking straight through it.

This article walks through why each standard defence fails, what the deepfake detection landscape actually looks like, and what countermeasures work — from zero-cost techniques you can use in your next interview to formal identity proofing. It is part of a broader treatment of synthetic candidate fraud in this series.


What Does a Background Check Actually Verify — and What Does It Miss?

A background check confirms that documentary evidence exists linked to a claimed name — criminal records, employment history, credentials, reference responses. What it does not confirm is that the person in your video interview is the owner of that name.

Each check type has a specific failure mode.

Employment verification confirms prior job titles, dates, and companies. It fails when a synthetic identity uses real employment data or fabricated references — which is common in DPRK IT worker operations, where facilitators maintain a whole network of controllable contacts ready to vouch.

Criminal records checks review databases for prior convictions. A synthetic identity built from clean data fragments will have no criminal history. The check passes because the identity is clean, not because the person is trustworthy.

Reference checks fail when references are fabricated contacts. In the KnowBe4 case, the identity package was coherent: documents real or convincing enough, history checked out, references responded correctly.

Document validation reviews government ID for authenticity markers. It is defeated by high-quality forgery — or by using a real person’s legitimate documents, which is exactly what DPRK operations do.

The upshot: background checks verify documents, not lived identity. The hiring pipeline assumes trust by default, and adversarial synthetic candidates exploit that assumption at its weakest point. The checks confirm that data exists. They cannot confirm that a person owns it.

Why standard screening misses synthetic candidates is explored in detail in the opening analysis in this series.


How Does a Deepfake Video Interview Actually Work?

A deepfake video interview runs as three integrated components working together in real time.

Face swapping / visual overlay: Real-time AI replaces the impersonator’s facial features with the claimed identity’s face — a live overlay running continuously throughout the call. Current tools have advanced to the point where casual visual inspection will not reliably catch it.

Virtual camera feed insertion (the injection attack): A virtual camera driver — tools like OBS or ManyCam — intercepts the native webcam feed before it ever reaches the conferencing platform. From Zoom’s, Teams’, or Meet’s perspective, it is receiving a perfectly normal camera input. There is no mechanism inside those platforms to distinguish an injected deepfake stream from a real feed. The platform is blind to the attack by design.

Voice cloning: AI synthesis of a target’s voice, synchronised with the face swap. Attackers need as little as three seconds of audio scraped from LinkedIn posts or YouTube videos.

The injection attack is the key concept here. Because the deepfake output is fed through a standard virtual camera interface, detection requires something the conferencing platform was never designed to provide. Passive (asynchronous) video interviews are even more vulnerable — candidates can record multiple attempts, optimise the output, and submit the best version.
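One partial mitigation worth illustrating: because injection attacks rely on virtual camera drivers, a pre-interview device check can at least flag the obvious cases. The sketch below is a minimal illustration in Python; the driver names are assumptions based on commonly available tools, and real deployment would need platform-specific device enumeration (OS APIs), which this sketch deliberately leaves out.

```python
# Sketch: flag known virtual-camera driver names in a device list.
# The names below are illustrative assumptions; a determined attacker
# can rename a driver, which is exactly why this is a weak signal, not
# a defence on its own.

KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "xsplit vcam",
}

def flag_virtual_cameras(device_names):
    """Return the subset of device names matching known virtual-camera drivers."""
    flagged = []
    for name in device_names:
        if any(vc in name.lower() for vc in KNOWN_VIRTUAL_CAMERAS):
            flagged.append(name)
    return flagged

devices = ["Integrated Webcam", "OBS Virtual Camera"]
print(flag_virtual_cameras(devices))  # ['OBS Virtual Camera']
```

A renamed or custom driver sails straight past a check like this, which reinforces the article's point: the platform layer cannot reliably distinguish injected feeds, so detection has to come from outside the conferencing stack.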


Why Is Your ATS Optimised for Speed, Not Adversarial Pressure?

Applicant tracking systems were built for legitimate candidate experience: speed, ease of application, recruiter workflow efficiency. They were not built for adversarial scenarios.

There is no fraud detection at the submission stage. No identity consistency checking across pipeline stages. No device intelligence. ATS platforms assumed good-faith applicants because when they were built, that assumption was reasonable.

Synthetic resumes are polished, keyword-heavy, and optimised for ATS filters. A synthetic identity with a coherent resume and a matching LinkedIn presence flows through an ATS exactly as a legitimate candidate would. There is nothing to flag it.

The gap is category-wide. Every ATS assumes applicants are acting in good faith — and the first line of defence in your hiring pipeline has no defensive capability against this threat at all.

Why your existing screening tools have a gap across the full pipeline is covered in more detail in our analysis of the recruiting pipeline as a security boundary.


What Does NIST’s Data Actually Show About Deepfake Detection Tools?

The instinct when you first encounter this problem is to reach for a technical solution: “Can’t we just buy a detection tool?” The honest answer: not as a standalone defence.

NIST evaluations show variable performance across tools and lighting conditions. Under targeted attacks — where adversaries test their deepfakes against known detection tools before deploying them — detection performance can collapse entirely.

The false positive problem is equally significant. Detection tools sensitive enough to catch fakes will also flag legitimate candidates. That is a candidate experience problem and a potential legal liability. The false negative problem is asymmetric in the worst way: the attacker only has to succeed once. The defender has to succeed every time.

In documented cases, detection has happened post-hire via endpoint security — not during the hiring pipeline. NSA, FBI, and CISA guidance recommends verification, planning, and training rather than assuming reliable detection. Do not bet the house on one detector. Build verification and response readiness into the process instead.

Detection tools are one signal in a layered defence — useful, but not sufficient on their own. The adversary’s tooling evolves faster than detection models can keep pace.


Why Does Checking Identity Once at the Offer Stage Leave a Gap?

Most identity verification in hiring happens once — typically at the offer stage or onboarding. This is point-in-time verification, and it has a structural substitution problem.

Checking identity once does not verify that the person who applied, the person who interviewed, and the person who shows up on Day 1 are the same person. A fraud operation could run one person for the application, a different person for the technical interview, and a third at onboarding.

If verification is concentrated at specific checkpoints, an identity package only needs to hold together at those checkpoints — not across the full pipeline.

The solution is multi-stage verification: identity checks at application, at interview, and at onboarding. No continuous monitoring required — just verification at the key transitions. For remote roles, a single physical confirmation before onboarding raises the attacker’s cost significantly, since synthetic workflows are optimised for fully remote execution.
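The consistency check across those transitions can be sketched simply: record a fingerprint of the verified identity attributes at each checkpoint and confirm they all match. This is an illustrative sketch, not a vendor implementation; the attribute fields and stage names are assumptions.

```python
# Sketch: checkpoint identity consistency across the hiring pipeline.
# Each stage records a fingerprint of the attributes it verified; a
# mismatch anywhere signals possible candidate substitution.
import hashlib

def identity_fingerprint(full_name: str, document_id: str) -> str:
    """Hash of the identity attributes verified at a checkpoint."""
    canonical = f"{full_name.strip().lower()}|{document_id.strip()}"
    return hashlib.sha256(canonical.encode()).hexdigest()

def consistent_across_stages(checkpoints: dict) -> bool:
    """True only if every pipeline stage recorded the same fingerprint."""
    return len(set(checkpoints.values())) == 1

stages = {
    "application": identity_fingerprint("Jane Doe", "P1234567"),
    "interview":   identity_fingerprint("Jane Doe", "P1234567"),
    "onboarding":  identity_fingerprint("Jane Doe", "P7654321"),  # substitution
}
print(consistent_across_stages(stages))  # False
```

The point of the sketch is the shape of the control: verification is cheap at each transition, and substitution between stages becomes detectable instead of invisible.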


What Is the Structured Unpredictability Technique and How Does It Work?

This is the zero-cost countermeasure you can use in your next video interview. No vendor contracts. No additional investment.

The principle: require candidates to perform spontaneous, unscripted actions that disrupt both pre-recorded video and real-time AI overlays. Face-swapping overlays are trained for front-facing conversational posture. They struggle with rapid head movements, off-axis views, and requests for environmental information that only a physically present person could provide.

Here are the prompts a hiring manager can use right now:

  1. Ask the candidate to look away from the camera and describe what is behind them. A deepfake operator cannot reliably describe what is physically behind them.

  2. Ask the candidate to hold up a specific number of fingers or a named object. This tests physical presence and overlay stability with an unpredictable prompt.

  3. Ask the candidate to read an unexpected phrase displayed on your screen. Type it into chat. Unexpected input is harder to synchronise with voice cloning.

  4. Ask follow-up questions requiring specific lived experience from a claimed prior role. “What broke during that project, and how did you find out?” Scripted backgrounds cannot generate authentic specificity under pressure.
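The prompts above work best when they cannot be rehearsed against. A minimal sketch of per-interview randomisation, assuming an interviewer-facing tool (the prompt wording and counts here are illustrative):

```python
# Sketch: draw unpredictable liveness prompts for a live interview.
# Randomisation matters because a fixed script can be rehearsed against
# or pre-rendered by a deepfake operator.
import random
import secrets

PROMPTS = [
    "Look away from the camera and describe what is behind you.",
    "Hold up {n} fingers.",
    "Read this phrase aloud: '{phrase}'",
    "What broke during that project, and how did you find out?",
]

def draw_prompts(count=2):
    """Pick `count` distinct prompts and fill in fresh random details."""
    chosen = random.sample(PROMPTS, k=count)
    return [
        p.format(n=random.randint(1, 5), phrase=secrets.token_hex(3))
        for p in chosen
    ]

for prompt in draw_prompts():
    print(prompt)
```

Because the finger count and read-aloud phrase are generated at interview time, even an attacker who has seen your question bank cannot pre-record a response.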

Passive behavioural observation has some value but is inconsistent. Structured unpredictability is more reliable because it creates active tests rather than relying on pattern recognition.

This is not a standalone defence. It raises the attacker’s difficulty and cost, which is the right framing for a layered approach. For where it fits relative to identity proofing, see a layered hiring defence stack that works.


What Is Identity Proofing and How Does It Close the Background Check Gap?

Identity proofing combines government-issued document validation with biometric liveness verification. It confirms not just that documents exist, but that the live person presenting is actually the holder of those documents.

That distinction is the entire gap. Background checks confirm documentary history for a name. Identity proofing confirms the live human presenting is the owner of both the name and the documents.

The formal framework is NIST’s Digital Identity Guidelines, specifically Identity Assurance Level 2 (IAL2) — which defines what “verifying a person’s identity” actually means beyond document review: confirming live human presence and tying that presence to the documents.

Liveness detection is the biometric component. It confirms a real, live human is present — not a pre-recorded video or AI overlay. Both active liveness (prompting specific actions) and passive liveness (analysing intrinsic cues like skin texture) test for physical characteristics that virtual camera feeds cannot replicate.

Identity proofing is available as SaaS that integrates into existing hiring workflows. The question is not whether to add it — it is which pipeline stages to add it at. Implementation guidance is covered in a layered hiring defence stack that works.
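In practice this becomes a gate in the pipeline: the candidate does not advance until the proofing result clears an IAL2-style bar. The sketch below is an assumption about result shape, not any specific vendor's API; substitute your provider's SDK and field names.

```python
# Sketch: gating a pipeline stage on an identity-proofing result.
# The ProofingResult fields model the three IAL2-style checks described
# above; real vendor responses will differ in structure and naming.
from dataclasses import dataclass

@dataclass
class ProofingResult:
    document_valid: bool   # government ID passed authenticity validation
    liveness_passed: bool  # live human confirmed (not overlay or replay)
    face_match: bool       # live face matches the document photo

def meets_ial2_style_bar(result: ProofingResult) -> bool:
    """All three checks must pass before the candidate advances."""
    return result.document_valid and result.liveness_passed and result.face_match

result = ProofingResult(document_valid=True, liveness_passed=False, face_match=True)
print(meets_ial2_style_bar(result))  # False
```

The design choice worth noting: the gate is conjunctive. A valid document with failed liveness is exactly the KnowBe4 pattern, real paperwork, wrong person, so no single check can substitute for the others.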



The case for changing your hiring process is not theoretical — it is documented. Every standard control in the hiring pipeline was designed for legitimate candidates acting in good faith. That assumption is no longer safe. Background checks confirm documents, not identity. ATS platforms have no adversarial pressure testing. Detection tools are inconsistent under targeted attack. Point-in-time verification misses substitution across pipeline stages.

The gap is real and the countermeasures are available, from zero-cost structured unpredictability techniques you can deploy in your next interview to identity proofing that formally closes the background check gap. For the full threat landscape, the security framing, and the legal exposure, see the complete guide to hiring fraud defence.


FAQ

Can a deepfake candidate pass a live video interview with multiple interviewers?

Yes. The KnowBe4 case involved four separate video interview rounds. Real-time face-swapping and voice cloning operate continuously — multiple interviewers see the same fabricated identity. Additional rounds do not increase detection probability unless interviewers are trained in structured unpredictability techniques.

Are passive or pre-recorded video interviews more vulnerable than live interviews?

Passive (asynchronous) video interviews are more vulnerable. Candidates can record multiple attempts and submit the best version. Live interviews introduce real-time variability — though they remain vulnerable to injection attacks without specific countermeasures.

How much does it cost to set up a deepfake for a job interview?

The tools — face-swapping software, virtual camera drivers, and voice cloning — are commercially available or open-source. Voice cloning requires as little as three seconds of audio. The barrier is technical skill, not money, which is why state-sponsored operations like the DPRK IT worker scheme scale so effectively.

Does facial recognition technology detect deepfake candidates?

No. Facial recognition matches a face to a database; liveness detection confirms the face belongs to a live human rather than an AI overlay or recorded video. Detection requires liveness, not just recognition.

What is an injection attack in a video interview?

An injection attack feeds a fabricated video feed into a conferencing platform via a virtual camera driver. The platform — Zoom, Teams, Meet — sees what appears to be a normal webcam feed. There is no mechanism to distinguish an injected deepfake stream from a legitimate camera input.

Do background checks verify that the person interviewing is the person on the documents?

No. Background checks verify that documents and history exist for a claimed name. They do not verify that the person in the video interview is the owner of that name. This is the structural gap that identity proofing closes.

What is the NIST IAL2 standard and why does it matter for hiring?

NIST Identity Assurance Level 2 (IAL2) requires government-issued document validation and biometric verification of the live person. It defines what “verifying a person’s identity” actually means in a hiring context — beyond document review to live human confirmation.

Can structured unpredictability techniques be defeated by more advanced deepfake technology?

As deepfake tools improve, some prompts may become less effective. But the principle remains sound: requiring spontaneous, physically verifiable actions creates ongoing friction for any overlay system. The specific prompts should evolve, but the technique category persists.

What legal liability does a company face for unknowingly hiring a DPRK operative?

Potential OFAC sanctions violations regardless of intent — the US Department of Justice has taken action against more than 300 firms. Negligent hiring liability is also a growing concern: given public FBI warnings, courts may conclude employers should have implemented verification controls.

Why are tech and engineering roles the primary targets for deepfake candidates?

High salaries, remote work as standard, access to sensitive systems and IP. Engineering positions have a normalised global talent pool and remote-first interviewing, which provides cover for candidates avoiding in-person verification. DPRK operations target these roles for revenue and intelligence access.

Is identity proofing practical without a dedicated security team?

Yes. Identity proofing is available as SaaS that integrates into existing hiring workflows. Verification happens at specific checkpoints — no dedicated security team, no continuous monitoring required. Implementation options for smaller budgets are covered in the next article in this series.
