Remote engineering hiring has no natural identity checkpoint. No receptionist checked a driver’s licence. No office visit confirmed a face matched a photo. That absence is a documented attack surface — and North Korean IT worker operations exploited it at more than 300 US companies, using stolen identities and AI-generated personas that sailed through standard background checks and multiple video interviews.
Most hiring processes have the same architectural flaw: a single identity checkpoint at offer stage. One gate. Synthetic identities are specifically engineered to defeat it. What they cannot withstand is multiple independent verification signals spread across the full hiring lifecycle.
This article lays out a concrete, layered defence stack for synthetic candidate fraud in remote hiring — ordered by implementation cost at each stage, so you can start today without waiting on budget approval. It includes a pre-hire identity verification checklist and a vendor evaluation framework built on what tools actually do, not what their marketing says.
Why Does a Layered Defence Beat a Single Hiring Checkpoint?
A background check answers one question: does a coherent identity record exist for this name? It does not answer the question that matters — is the person on the other side of the screen actually that person?
Synthetic identities produce coherent records. The records check out because the records are real. The person is not.
Layered defence spreads detection across four hiring stages — application, interview, offer and onboarding, post-hire — each targeting different fraud signals with different methods. Device intelligence catches location masking before a recruiter invests time. Liveness detection catches deepfake video before an offer is extended. Identity proofing catches synthetic documents before access is granted. UEBA monitoring catches anomalous behaviour before data is exfiltrated.
As Daon’s team puts it: “The most effective approaches don’t rely on any single countermeasure but instead create multiple verification checkpoints throughout the hiring process.”
Controls at each stage are ordered by implementation cost — free first, then process changes, then vendor tooling. Human review stays in the loop at every stage. Automation filters; humans decide. For context on why the threat warrants these controls, our guide to synthetic candidate fraud in remote engineering hiring covers the full threat landscape — including documented attack patterns and why remote engineering roles are the primary target.
What Fraud Signals Can You Check Before the First Interview?
The application stage is your cheapest detection opportunity in the hiring lifecycle — no recruiter time has been invested yet.
Document metadata analysis [FREE]
PDF metadata is a zero-cost fraud signal most hiring teams never examine. Open a CV in a standard PDF viewer, check the file properties, and look at creation date, edit history, software, and author field. Red flags: CVs created within days of the application, identical metadata patterns across multiple applications, author names that do not match the applicant.
Okta’s threat intelligence team recommends verifying the edit history of CVs for duplication and reuse patterns as a baseline control. It takes sixty seconds per application.
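The metadata checks above are mechanical enough to script. A minimal sketch, assuming the relevant fields have already been pulled out of the PDF (with a library such as pypdf, for instance); the field names and the three-day threshold are illustrative choices, not a standard:

```python
from datetime import date, timedelta

def flag_cv_metadata(meta: dict, applied_on: date) -> list[str]:
    """Flag suspicious PDF metadata on a CV.

    `meta` is assumed to hold fields pre-extracted from the PDF:
    'created' (date), 'author' (str), 'applicant_name' (str).
    """
    flags = []
    created = meta.get("created")
    # Red flag: CV file created within days of the application.
    if created and (applied_on - created) <= timedelta(days=3):
        flags.append(f"CV created {(applied_on - created).days} day(s) before application")
    # Red flag: PDF author field does not match the applicant's name.
    author = (meta.get("author") or "").strip().lower()
    applicant = (meta.get("applicant_name") or "").strip().lower()
    if author and applicant and author != applicant:
        flags.append(f"author field '{meta['author']}' does not match applicant")
    return flags
```

Running the same function over a whole batch of applications also surfaces the mass-production pattern: identical author or software fields recurring across supposedly unrelated CVs.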
Identity consistency checking [FREE / LOW-COST]
Cross-reference what the candidate provides: name, phone number, email address, claimed location, timezone. Mismatches are a fraud signal. A candidate claiming Austin with a VoIP number registered to New York, a Gmail account created six weeks ago, and an application submitted at 3 AM Austin time has given you four pieces of inconsistent information.
Sardine.ai automates this as part of their Job Application Fraud Detection product, scoring email risk, phone signals, and location integrity with a clear low, medium, or high risk classification.
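Before reaching for a vendor, the cross-referencing itself can be sketched in a few lines. The area-code table and the 90-day email threshold below are placeholder assumptions for illustration, not sardine.ai's actual logic; a real check would query carrier and geolocation APIs:

```python
from datetime import date

# Illustrative lookup only; a real check would query a carrier/geolocation API.
AREA_CODE_CITY = {"512": "Austin", "737": "Austin", "212": "New York", "917": "New York"}

def consistency_signals(claim: dict, as_of: date) -> list[str]:
    """Cross-reference candidate-supplied identity fields for mismatches.

    `claim` fields assumed: 'phone' (digit string), 'claimed_city' (str),
    'email_created' (date).
    """
    signals = []
    city = AREA_CODE_CITY.get(claim["phone"][:3])
    if city and city != claim["claimed_city"]:
        signals.append(f"area code maps to {city}, not {claim['claimed_city']}")
    if (as_of - claim["email_created"]).days < 90:
        signals.append("email account created within the last 90 days")
    return signals
```

The Austin example from the paragraph above would trip both checks: a 212 number against a claimed Austin location, plus a weeks-old email account.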
Device intelligence via ATS webhook [VENDOR]
When a candidate submits an application through Greenhouse, Lever, or Workday, the ATS sends an outbound HTTP POST to a fraud scoring API. The API analyses IP geolocation, VPN detection, device fingerprints, and browser metadata, then returns a JSON risk score that surfaces as a field in the ATS candidate record. Recruiter workflow is unchanged. Sardine also offers a custom interview link that extends device intelligence into live video calls.
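Mechanically, the receiving side of that integration is small. A sketch with hypothetical field names ("device", "fraud_risk"); every ATS and vendor defines its own payload schema, so treat this as the shape of the flow rather than any vendor's API:

```python
import json

def handle_application_webhook(body: str, score_device) -> dict:
    """Enrich an ATS webhook payload with a fraud risk score.

    `body` is the JSON the ATS POSTs on application submission;
    `score_device` is the vendor scoring call, returning e.g.
    {"score": 82, "band": "high"}. All field names are illustrative.
    """
    candidate = json.loads(body)
    risk = score_device(candidate.get("device", {}))
    # Surface the score as a custom field on the ATS candidate record;
    # the recruiter's workflow is otherwise unchanged.
    candidate.setdefault("custom_fields", {})["fraud_risk"] = risk
    return candidate
```

A stub scorer that flags VPN-masked devices is enough to exercise the flow end to end before a vendor contract is signed.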
Together, these three controls work as a graduated filter — catching a significant share of fraudulent applications before a recruiter has read a single line of a CV.
How Do You Turn Every Video Interview into an Identity Checkpoint?
The interview stage is where attacks shift from synthetic documents to live impersonation — deepfake video, AI overlays, and coached proxies performing in real time.
Behavioural signals [FREE]
Train recruiters to log specific red flags during video interviews. No tooling required:
- Long response pauses — pauses of more than 3–5 seconds before answering common questions suggest AI processing latency, not thoughtfulness
- Scripted or over-polished answers — rehearsed responses that don’t actually engage with the question
- Off-screen eye movement — eyes consistently moving to a fixed point outside the frame suggests real-time coaching
- Camera avoidance — persistent “technical issues” or insistence on face-obscuring filters
- Inability to answer follow-up questions — ask “walk me through a specific debugging session you remember from that role” and watch the response
Structured unpredictability technique [FREE]
Issue spontaneous, unscripted physical prompts that a deepfake overlay or pre-recorded feed cannot respond to:
- “Hold four fingers up in front of your face and look toward the camera.”
- “Pick up something nearby — your phone, a cup — and hold it up.”
- “Can you switch to your phone camera for the next question?”
Deepfake models track a human face. Interrupting that tracking — covering half the face, forcing a device switch — causes the model to fail or reveal artefacts. Switching to a phone camera is particularly effective because a deepfake pipeline set up as a virtual camera on the candidate’s laptop cannot simply follow them to a mobile device mid-call.
Two to three unpredictable prompts per interview, spaced at unexpected points. Do not standardise the sequence — adversaries read published guidance.
Liveness detection [PROCESS → VENDOR]
At the free end: the prompts above are manual liveness detection — confirming a real, live human by requiring real-time physical responses.
At the vendor end: formal liveness detection tools like 1Kosmos LiveID and Proof automate this with tamper-resistant biometric verification. Specify active liveness — where the user performs an action — over passive liveness for engineering roles with privileged access.
Train recruiters to log signals, not impressions. “Something felt off” is not actionable. “Candidate paused 8 seconds, eyes moved off-screen, declined phone camera switch citing battery issues” is a documented fraud signal. A 60-minute training session is the highest-ROI single investment in the defence stack.
What Should Identity Proofing and Chain-of-Trust Recordkeeping Look Like at the Offer Stage?
The offer and onboarding stage is where identity proofing operates — the anchor control of the defence stack. This is where you verify not just that an identity record exists, but that the person in front of you is the actual holder of that identity.
NIST IAL2 as the benchmark
NIST SP 800-63A Identity Assurance Level 2 requires three things: personal information (name, address, date of birth), at least one piece of identity evidence (a government-issued photo ID), and a biometric (a live selfie). The service provider validates document authenticity and verifies through biometric matching that the individual is the true holder of that document.
For remote engineering roles — where the hire will receive privileged access to code repositories, infrastructure, and potentially customer data — IAL2 is the right assurance level.
Identity proofing process
The candidate submits a government-issued ID. The system validates document authenticity. A biometric liveness check confirms the person presenting the ID physically matches the document photo and is live. The result is a verified identity linked to a real biometric — not just a coherent set of records.
For remote-first companies, this replaces the natural in-person checkpoint that a physical workplace provided. Remote work eliminated it. Identity proofing reinstates it.
Chain-of-trust recordkeeping
Every verification step should produce an audit log: who was verified, when, by what method, and with what result. Operationally, it supports post-incident investigation. Legally, it documents “reasonable controls” for negligent hiring liability — as our guide to chain-of-trust recordkeeping as legal documentation covers, courts will ask what steps you took before granting access. “The goal isn’t perfect detection; it’s documented reasonableness.”
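As an illustration of what a single chain-of-trust entry might capture, here is a minimal sketch. The schema is an assumption made for this example, and a production log would also need append-only storage and retention controls; the point is the four fields plus tamper evidence:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_verification(subject: str, method: str, result: str, verifier: str) -> dict:
    """Build one chain-of-trust entry: who, when, by what method, with what result."""
    entry = {
        "subject": subject,      # who was verified
        "method": method,        # e.g. "govt ID + biometric liveness (IAL2)"
        "result": result,        # "pass" or "fail"
        "verifier": verifier,    # person or system that performed the check
        "at": datetime.now(timezone.utc).isoformat(),  # when
    }
    # Tamper evidence: hash the canonical JSON so later edits are detectable.
    canonical = json.dumps(entry, sort_keys=True).encode()
    entry["digest"] = hashlib.sha256(canonical).hexdigest()
    return entry
```

Hashing each entry (or chaining the hashes) is what turns a spreadsheet into evidence: a modified record no longer matches its digest, which supports the "documented reasonableness" standard in discovery.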
How Do You Apply Zero Trust Monitoring During a New Hire’s First 90 Days?
Offer-stage controls establish verified identity. Post-hire controls pick up from there — because no pre-hire defence stack is perfect.
Least-privilege onboarding [FREE]
Minimum necessary access for a developer on day one: read-only code repository access, sandboxed development environment, no production database credentials, no customer data access, no admin privileges. Permissions ladder up over 30–90 days as the hire demonstrates consistent work output.
This costs nothing — it requires policy documentation and IAM configuration enforcement.
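The policy itself can be expressed as data and enforced by IAM tooling. An illustrative sketch of the permissions ladder (the 30/90-day thresholds and the specific grants are examples from this article, not a standard):

```python
# Day-one grants for a new engineering hire: nothing beyond sandbox plus read.
DAY_ONE = {
    "repo": "read-only",
    "environment": "sandbox",
    "prod_db": None,
    "customer_data": None,
    "admin": False,
}

# Example escalation ladder: (days since hire, grants unlocked at that tenure).
LADDER = [
    (30, {"repo": "write", "environment": "staging"}),
    (90, {"prod_db": "read-only"}),
]

def allowed_permissions(days_since_hire: int) -> dict:
    """Return the permission set a hire should hold at a given tenure."""
    perms = dict(DAY_ONE)
    for threshold, grants in LADDER:
        if days_since_hire >= threshold:
            perms.update(grants)
    return perms
```

Auditing becomes a diff: compare `allowed_permissions(tenure)` against what IAM actually grants, and any surplus permission is a finding.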
New hire UEBA monitoring [PROCESS → VENDOR]
Establish access pattern baselines during the first 30–90 days and flag anomalies: off-hours logins inconsistent with claimed timezone, unusual data access volumes, VPN use from unexpected locations, credential sharing indicators, attempts to access systems outside role scope.
The first 30 days are the highest-risk window — a fraudulent hire’s objective is to exfiltrate data or establish persistent access before detection. Least-privilege limits blast radius; UEBA catches anomalous behaviour early.
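The login-hours portion of the baseline is simple enough to sketch before any UEBA vendor is involved. This assumes timestamps are normalised to the hire's claimed timezone, and the two-hour padding is an illustrative tolerance, not a recommended value:

```python
from datetime import datetime

def login_hour_window(logins: list[datetime], pad_hours: int = 2) -> tuple[int, int]:
    """Derive an expected login-hour window from baseline-period logins."""
    hours = [t.hour for t in logins]
    return max(min(hours) - pad_hours, 0), min(max(hours) + pad_hours, 23)

def is_off_hours(login: datetime, window: tuple[int, int]) -> bool:
    """Flag a login whose hour falls outside the baseline window."""
    lo, hi = window
    return not (lo <= login.hour <= hi)
```

A baseline of 9 AM to 5 PM logins yields a 7–19 window, so a 3 AM login gets flagged for human review; the same shape of check extends to download volumes and source locations.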
Continuous identity assurance [VENDOR]
Post-hire identity verification does not end at onboarding. A verified employee can hand off credentials to an unverified third party after they’re set up. 1Kosmos LiveID provides periodic re-verification confirming the person doing the work remains the person biometrically verified at hire. Use it for privileged-access engineering roles.
If a fraudulent hire is discovered, the response framework is covered in our step-by-step guide on what to do when a fraudulent hire is discovered.
Pre-Hire Identity Verification Checklist for Remote Engineering Roles
A copy-ready reference for each stage. Cost tags: [FREE], [PROCESS] (training or workflow change), [VENDOR] (third-party integration required).
Application Stage
- [ ] [FREE] Inspect CV/resume PDF metadata: creation date, edit history, software, author field — flag very recent creation, mass-production patterns, mismatched author names
- [ ] [FREE] Cross-reference name, phone, email, and claimed location for internal consistency (timezone, area code, email domain, IP-implied location should align)
- [ ] [FREE] Flag suspicious email addresses: newly created accounts, disposable domains
- [ ] [FREE] Check for minimal or newly created social media inconsistent with claimed work history
- [ ] [VENDOR] Enable ATS webhook integration (sardine.ai or equivalent) for device intelligence scoring at application submission
- [ ] [VENDOR] Flag VPN-masked locations or device fingerprints appearing across multiple distinct applications
Interview Stage
- [ ] [PROCESS] Brief recruiters on behavioural red flags: long pauses, scripted answers, off-screen eye movement, camera avoidance, inability to answer experiential follow-ups
- [ ] [PROCESS] Include 2–3 structured unpredictability prompts per interview, spaced at unexpected points: hold an object up, turn head, switch to phone camera
- [ ] [PROCESS] Log specific observed signals — not general impressions — in a structured interview signal log
- [ ] [VENDOR] Use sardine.ai’s custom interview link to collect device and location intelligence during the video call
- [ ] [VENDOR] Deploy formal liveness detection (1Kosmos LiveID, Proof, or equivalent) for privileged-access engineering roles
Offer and Onboarding Stage
- [ ] [VENDOR] Require government-issued ID validation with biometric liveness check aligned to NIST IAL2
- [ ] [VENDOR] Initiate chain-of-trust recordkeeping: who was verified, when, by what method, and with what result
- [ ] [PROCESS] Verify offer-stage identity details match application-stage data (name, email, phone, location)
- [ ] [FREE] Confirm verified identity is stored in HR systems linked to the chain-of-trust audit log
Post-Hire Stage (First 30–90 Days)
- [ ] [FREE] Implement least-privilege access from day one: read-only code, sandboxed environment, no production credentials, no customer data, no admin privileges
- [ ] [FREE] Require named user accounts — no shared logins
- [ ] [FREE] Require peer review for code merges and production deployments
- [ ] [PROCESS] Establish UEBA baselines; define expected hours, access scope, login locations
- [ ] [PROCESS] Monitor for anomalies: off-hours logins, large downloads, VPN from unexpected locations, credential sharing, out-of-scope access
- [ ] [PROCESS] Define permission escalation criteria and who approves each step
- [ ] [VENDOR] Deploy continuous identity assurance (1Kosmos LiveID or equivalent) for privileged-access engineering roles
How Do You Evaluate Identity Verification Vendors Without Buying Into Vendor Hype?
The identity verification vendor market is saturated with marketing claims. Here is what to actually assess.
Document validation quality — Does the tool validate authenticity, or just capture an image? Ask how they detect a high-quality fraudulent document that passes visual inspection.
Liveness detection: active vs passive — Active liveness requires the user to perform an action; passive analyses the video feed in the background with no user action. Active provides higher assurance — specify it for engineering roles with privileged access.
ATS integration options — Does the vendor support webhook integration with your specific ATS (Greenhouse, Lever, Workday)? How many days of integration effort?
Chain-of-trust records — Does the vendor produce a verifiable audit log suitable for legal discovery? Request a sample before committing.
Documented NIST IAL2 compliance — Ask for the actual compliance documentation, not a marketing claim. Test liveness detection against photo replay, video replay, and basic face swap during evaluation.
Vendor landscape by function
sardine.ai — device intelligence, ATS webhook integration (Greenhouse, Lever, Workday), identity consistency scoring, interview-stage detection via custom interview link.
Proof — identity proofing (document authentication, biometric liveness, biometric match), chain-of-trust recordkeeping, ATS integration, NIST IAL2-aligned workflows.
1Kosmos — LiveID biometric liveness, government credential cross-reference, continuous identity assurance throughout the employee lifecycle.
Daon and iProov — biometric identity verification, document validation, active and passive liveness detection, deepfake and injection attack detection.
One procurement constraint worth flagging: US state biometric privacy laws (Illinois BIPA, Texas CUBI) require notice, consent, and defined data retention policies before you deploy any liveness detection tool. Build this into procurement — not as an afterthought.
Frequently Asked Questions
What is the difference between identity proofing and a background check? A background check verifies that an identity record is internally consistent — employment history, education, criminal records. Identity proofing confirms a government-issued ID is valid and unaltered, then verifies through biometric liveness that the person presenting it physically matches the document. Synthetic identities defeat background checks because fabricated records are coherent; identity proofing defeats them because it ties claims to a biometrically verified person.
Can a small company with no security team implement this defence stack? Yes. Document metadata analysis, behavioural signal training, structured unpredictability prompts, and least-privilege onboarding are all free and require no tooling. You can implement these today. Vendor-required controls layer on top as budget permits.
How does ATS webhook integration work with Greenhouse or Lever? The ATS sends an outbound HTTP POST to a fraud scoring API when a candidate applies. The API returns a JSON risk score that surfaces as a field in the ATS candidate record — recruiter workflow is unchanged.
What are the most reliable behavioural red flags in a video interview? Long response pauses, scripted answers that don’t engage with the specific question, off-screen eye movement indicating real-time coaching, consistent camera avoidance, and inability to answer experiential follow-ups — “describe a specific production incident you resolved” is a reliable test.
How does the structured unpredictability technique defeat deepfake overlays? Deepfake models track a human face. Interrupting the tracking — covering half the face, forcing a device switch, asking the candidate to switch to a phone camera — causes the model to fail or reveal artefacts. Virtual camera deepfakes cannot respond to spontaneous physical requests they were not designed for.
What access should a new engineering hire have on day one? Read-only code repositories, a sandboxed development environment, no production database credentials, no customer data access, no admin privileges. Permissions escalate over 30–90 days as the hire demonstrates consistent work output.
Does biometric liveness detection expose my company to legal risk? Potentially. Illinois BIPA, Texas CUBI, and similar state laws require notice, consent, and data retention policies before collecting biometric data. Verify compliance documentation and build consent processes into candidate workflows — this is a procurement consideration, not a reason to avoid liveness detection.
What is continuous identity assurance and why does it matter post-hire? Continuous identity assurance periodically re-verifies that the person performing work is the same person verified at hire — closing the gap where a verified employee could hand off credentials to an unverified third party. 1Kosmos LiveID provides this capability.
How do I know if a vendor’s identity proofing meets NIST IAL2? Ask for their NIST SP 800-63A IAL2 compliance documentation explicitly — not a marketing claim, the actual documentation. Then test their liveness detection against photo replay and video replay during evaluation.
What is chain-of-trust recordkeeping and why does it matter legally? It is a verifiable audit log documenting who was verified, when, by what method, and with what result. In a negligent hiring liability claim, it is evidence of “reasonable controls” — that the organisation took documented, defensible steps before granting access.
Can device intelligence detect VPN usage by fraudulent applicants? Yes. Platforms like sardine.ai flag applications where the claimed location does not match network signals, or where a device fingerprint appears across multiple distinct applications — both common indicators of coordinated fraud.
Should I require in-person identity verification for remote hires? NIST IAL2-aligned identity proofing is the practical standard for remote-first companies. For high-risk roles, a single physical handoff during onboarding — equipment pickup at a verified partner location — “breaks many synthetic workflows and raises the attacker’s cost dramatically.” Belt-and-suspenders, not a replacement.
Where to Go From Here
The controls in this article are designed to be implemented incrementally — start with the free tier, validate results, then layer in vendor tooling where ROI justifies it. No single control stops all fraud; the stack’s value is in the cumulative cost it imposes on adversaries.
Two adjacent topics are worth reading alongside this article. If your chain-of-trust recordkeeping also needs to serve as legal documentation — for negligent hiring liability, OFAC sanctions exposure, or biometric privacy compliance — the chain-of-trust recordkeeping as legal documentation guide covers what “reasonable controls” means in a board and legal context. If the defence stack fails and a fraudulent hire is discovered, the what to do when a fraudulent hire is discovered playbook walks through containment, evidence preservation, and law enforcement engagement step by step.
For the complete picture of this threat — how synthetic candidate fraud works, who is behind it, and why remote engineering roles are the primary target — see the full guide to synthetic candidate fraud in remote hiring.