Global AI regulation is here, but nobody agrees on what “high-risk AI” actually means. The EU AI Act, Colorado SB 24-205, NYC Local Law 144, and South Korea’s AI Basic Act all draw the line in different places. A single product feature can be caught by all four laws at once — or by none of them — depending on how and where it’s used.
If you’re an engineering lead or product owner at an SMB in SaaS, FinTech, HealthTech, or EdTech, this is where the anxiety comes from. You know the global AI regulation landscape is shifting fast, but that doesn’t tell you whether your HR screener, loan feature, or health advice module is actually covered by any of it.
That’s what this article is for. We map four common SMB product types against each jurisdiction’s classification criteria, spell out what the revenue thresholds mean for you, and give you a clear prioritisation framework for what needs action now versus what you can sit and watch for a while.
What Does “High-Risk AI” Actually Mean, and Why Does It Differ Across Jurisdictions?
“High-risk AI” is not a universal legal term. Each jurisdiction defines it differently, so the same feature can qualify as high-risk under one law and be completely exempt under another.
The EU AI Act uses a sector-and-use-case approach. Annex III lists specific categories — employment, credit, medical devices, biometrics, critical infrastructure, education — and your system must match both a listed sector AND a listed use case to be classified as high-risk. Being adjacent to a sector is not enough.
Colorado SB 24-205 takes a decision-impact approach. It covers any AI-driven output that significantly affects a consumer’s access to employment, credit, housing, healthcare, education, or essential services. No sector restriction. What matters is whether the output materially influences a high-stakes eligibility determination.
NYC Local Law 144 goes narrower and more function-specific. It covers any automated employment decision tool (AEDT) used for hiring or promotion, regardless of industry or accuracy. Resume screeners, candidate ranking systems, interview assessment platforms, and skills-based testing tools with machine learning components all qualify — no exceptions.
South Korea’s AI Basic Act mirrors EU Annex III sectors — employment, credit, healthcare, critical infrastructure, biometrics — but adds a threshold layer. It applies to organisations above 1 trillion KRW (~$700M USD) in total revenue, 10 billion KRW (~$7M USD) in Korean domestic sales, or 1 million daily Korean users. Hit any one of those and you’re covered.
The practical upshot is what compliance people call jurisdiction stacking: the same AI-powered HR screener can be high-risk under all four jurisdictions simultaneously if your company meets each law’s scope conditions. That’s the heart of the compliance complexity problem for multi-market products.
Which US State AI Laws Apply If Your Revenue Is Below $500 Million?
Most of the coverage you read about US state AI law focuses on California SB 53 and the New York RAISE Act. Both primarily target frontier model developers and both include a $500 million annual revenue threshold. Most SMBs aren’t covered and never will be.
The key distinction here is frontier model developer versus deployer. A company building products on top of OpenAI, Anthropic, or Google APIs is a deployer, not a frontier model developer. California SB 53 defines a “large frontier developer” as a company with more than $500M in revenues training models at more than 10^26 floating-point operations. Both laws are aimed at the companies building the largest models — not the companies integrating them.
If you’re at $80M ARR using the OpenAI API: SB 53 and the RAISE Act don’t apply. Full stop.
Several US state and local AI laws do apply regardless of revenue:
Colorado SB 24-205 (effective June 30, 2026) covers any developer or deployer of a high-risk AI system making consequential decisions affecting Colorado residents. No revenue floor, no minimum company size. The only size-based exemption is for deployers with fewer than 50 employees — and that goes away the moment you’ve used your own data to train or customise the AI system.
NYC Local Law 144 (in effect since July 5, 2023) applies to any employer using an AEDT for hiring or promotion affecting NYC-based positions. No revenue threshold, no minimum headcount. A 10-person company using AI for hiring has the same obligations as a Fortune 500 firm.
Texas HB 149 (effective January 1, 2026) prohibits using AI to incite violence, capture biometric identifiers without consent, or discriminate on protected characteristics. For most SMBs this is a compliance hygiene check, not a major programme build.
Illinois HB 3773 (effective January 1, 2026) amends the Illinois Human Rights Act to prohibit discriminatory AI in employment and requires employers to notify applicants when AI is used in hiring.
How Does the EU AI Act Classify High-Risk AI Systems?
The EU AI Act has four risk tiers: unacceptable risk (prohibited), high-risk (full compliance obligations), limited risk (transparency obligations only), and minimal risk (voluntary guidelines). Misclassification carries penalties up to €35 million or 7% of global annual turnover. You don’t want to get this wrong.
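For scale, the top-tier penalty is the greater of the fixed amount and the turnover percentage. A quick illustrative sketch (the function name is ours, figures as stated above — a planning aid, not legal advice):

```python
def eu_ai_act_max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Top-tier EU AI Act fine: the greater of EUR 35M or
    7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a EUR 1B-turnover company the 7% prong dominates (~EUR 70M);
# below EUR 500M turnover, the EUR 35M floor is the binding figure.
print(eu_ai_act_max_penalty_eur(1_000_000_000))
print(eu_ai_act_max_penalty_eur(100_000_000))
```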
High-risk classification is a two-part test. The system must fall within a sector listed in Annex III, AND it must perform the specific use case listed for that sector. Touching an Annex III sector is not enough on its own.
The Annex III categories most relevant to SMB products:
- Employment and worker management: CV screening, candidate ranking, performance evaluation, task allocation
- Access to essential services: credit scoring, insurance risk assessment, healthcare triage
- Education and vocational training: AI deciding admissions, exam scoring, or training outcomes
- Biometric identification: any system that identifies individuals by biometric characteristics
The EU AI Act applies extraterritorially — same as GDPR. Any provider placing an AI system on the EU market or affecting EU users must comply regardless of where they’re incorporated.
The Article 6(3) derogation is the main escape hatch. Even if your system matches Annex III, it can avoid high-risk classification if it performs only a narrow procedural task, improves a previously completed human activity without replacing human assessment, or detects patterns without issuing decisions — AND does not profile natural persons.
That AND condition is the one that catches people out. Profiling means automated processing of personal data to evaluate or predict aspects of a person’s performance, behaviour, or reliability. Any profiling of candidates blocks the derogation automatically. A resume screener that ranks applicants cannot claim Article 6(3) regardless of how you describe it.
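The derogation test above reduces to simple boolean logic. A schematic sketch — the flag names are ours, and this is a planning aid, not legal analysis:

```python
def article_6_3_derogation_available(
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_deciding: bool,
    profiles_natural_persons: bool,
) -> bool:
    """At least one qualifying condition must hold, AND the system
    must not profile natural persons. Profiling blocks the
    derogation outright."""
    qualifying = (
        narrow_procedural_task
        or improves_completed_human_activity
        or detects_patterns_without_deciding
    )
    return qualifying and not profiles_natural_persons

# A resume screener that ranks applicants profiles natural persons,
# so the derogation is unavailable regardless of the other flags:
print(article_6_3_derogation_available(True, True, True, profiles_natural_persons=True))  # False
```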
The August 2026 deadline makes high-risk obligations — conformity assessment, CE marking, EU AI database registration, human oversight — fully enforceable from August 2, 2026. The Digital Omnibus proposal could push some obligations to December 2027, but it’s still just a proposal. Plan for August 2026.
How Do Colorado SB 24-205 and NYC Local Law 144 Apply to AI Deployers Regardless of Size?
These are the two most immediately actionable US AI laws for SMBs. Both apply with no revenue threshold and no minimum company size. This is where most SMBs need to focus.
Colorado SB 24-205 imposes a duty of reasonable care on any deployer of a high-risk AI system making consequential decisions affecting Colorado residents. The key obligations: adopt a written governance framework, conduct a documented impact assessment before deployment and annually after that, test for algorithmic discrimination, confirm vendor compliance through due diligence, provide human review of adverse AI-influenced decisions, and notify the Colorado Attorney General within 90 days of discovering a foreseeable risk of algorithmic discrimination.
NYC Local Law 144 requires any employer using an AEDT for NYC hiring or promotion to commission an independent bias audit annually, publish the results publicly, and provide written notice to each NYC-based applicant at least 10 business days before the AEDT is used. Penalties run from $500 for a first violation up to $1,500 for each subsequent one — and each day of non-compliant use, and each candidate denied the required notice, counts as a separate violation. A non-compliant tool processing 100 applications per day generates 100 violations daily. That adds up fast.
Impact assessments and bias audits are not the same thing. A Colorado impact assessment is a broad risk review covering system purpose, training data limitations, affected populations, and mitigation measures. A NYC bias audit is a focused, independent statistical test applying the four-fifths (80%) rule: if any protected group’s selection rate falls below 80% of the rate for the group with the highest selection rate, the tool has presumed disparate impact. If you’re subject to both, you need both.
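The four-fifths rule itself is plain arithmetic. A minimal sketch with invented audit numbers (group names and counts are ours, for illustration only):

```python
def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate divided by the highest group
    selection rate. Ratios below 0.8 indicate presumed disparate
    impact under the four-fifths rule."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data: group A selected 48 of 120 applicants
# (rate 0.40), group B selected 24 of 100 (rate 0.24).
ratios = impact_ratios({"A": 48, "B": 24}, {"A": 120, "B": 100})
flagged = {g: r for g, r in ratios.items() if r < 0.8}
# Group B's impact ratio is 0.24 / 0.40 = 0.6 — below 0.8,
# so group B is flagged.
```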
Companies with HR AI tools affecting Colorado residents and NYC-based candidates at the same time trigger both laws simultaneously. For guidance on building these compliance artefacts, see the documentation artefacts triggered by high-risk classification.
Does Your Product Use Case Qualify as High-Risk? A Decision Framework for Common SMB Features
Four product feature types map differently across jurisdictions. For each, the key questions are: Does it match Annex III? Does it make consequential decisions affecting Colorado residents? Does it qualify as an AEDT for NYC employment decisions? Does it affect Korean users at scale?
HR and hiring tools are high-risk across all four jurisdictions — Annex III employment (EU), consequential employment decisions (Colorado), AEDT definition (NYC), high-impact AI in employment (South Korea). If you’ve built an AI-powered candidate screener, ranking system, or interview assessment tool, you need to address all applicable laws. No way around it.
Loan-adjacent features — credit scoring, affordability assessment, loan eligibility — are high-risk under three of the four frameworks. EU Annex III covers credit and insurance access. Colorado covers consequential financial decisions. South Korea covers credit decisions for Korean users. NYC Local Law 144 doesn’t apply unless the feature is specifically used in an employment decision. Verdict: three out of four — act now if you have EU, Colorado, or Korean market reach.
Health advice modules are the most contextual. They are high-risk under the EU AI Act if the output constitutes clinical decision support. Colorado applies if outputs influence healthcare access. The critical question is whether the AI output substitutes for or substantially influences a clinical judgement. A chatbot surfacing health information without recommending specific treatment is less likely to qualify. A symptom checker recommending specialist referrals or modifying treatment pathways is more likely to qualify.
General-purpose recommendation engines are most likely to escape coverage. The EU Article 6(3) derogation may apply if the engine detects patterns without replacing human assessment and doesn’t profile natural persons. Colorado doesn’t apply unless recommendations influence access to consequential domains. NYC and South Korea don’t cover general recommendation engines. The short version: assume coverage for employment, credit, and healthcare features; start with the Article 6(3) derogation analysis for everything else.
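The mapping above can be kept as a lookup table for planning purposes. The entries restate this article's verdicts, not a legal determination, and the KR entries are additionally subject to the Act's market thresholds:

```python
# Which jurisdictions presumptively cover each feature type, per the
# analysis above. "contextual" marks cases needing case-by-case review.
COVERAGE = {
    "hr_hiring_tool":        {"EU": True,  "CO": True,  "NYC": True,  "KR": True},
    "loan_adjacent_feature": {"EU": True,  "CO": True,  "NYC": False, "KR": True},
    "health_advice_module":  {"EU": "contextual", "CO": "contextual", "NYC": False, "KR": "contextual"},
    "recommendation_engine": {"EU": "contextual", "CO": False, "NYC": False, "KR": False},
}

def jurisdictions_needing_action(feature: str) -> list:
    """Jurisdictions where coverage is presumed or needs analysis
    (any truthy entry, including "contextual")."""
    return [j for j, covered in COVERAGE[feature].items() if covered]

print(jurisdictions_needing_action("loan_adjacent_feature"))  # ['EU', 'CO', 'KR']
```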
For guidance on the compliance documentation these classifications trigger, refer to the documentation artefacts triggered by high-risk classification.
What Does South Korea’s AI Basic Act Require, and When Does It Apply to Non-Korean Companies?
South Korea’s AI Basic Act came into effect in January 2026. It applies extraterritorially — physical presence in Korea is not required. The trigger is market effect. Three thresholds apply, and any one is sufficient:
- Total organisational revenue exceeds 1 trillion KRW (~$700M USD)
- Korean domestic sales exceed 10 billion KRW (~$7M USD)
- The system has 1 million or more daily active Korean users
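Because any one threshold suffices, the scope test is a single OR over the three conditions. A sketch under the figures above (the function name is ours):

```python
def korea_ai_act_in_scope(
    total_revenue_krw: float,
    korean_sales_krw: float,
    daily_korean_users: int,
) -> bool:
    """South Korea AI Basic Act scope: any one threshold triggers
    coverage."""
    return (
        total_revenue_krw > 1_000_000_000_000   # 1 trillion KRW total revenue
        or korean_sales_krw > 10_000_000_000    # 10 billion KRW Korean sales
        or daily_korean_users >= 1_000_000      # 1M daily Korean users
    )

# A SaaS with ~10.5B KRW (~$7M+) in Korean sales is covered by the
# second threshold despite modest global revenue and a small user base:
print(korea_ai_act_in_scope(50_000_000_000, 10_500_000_000, 200_000))  # True
```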
For most SMBs, the first threshold is the least likely to be an issue. A SaaS generating $7M in Korean sales hits the second. The 1 million daily user threshold is the one to watch as your Korean user base grows.
The classification criteria closely mirror EU Annex III, so an EU AI Act compliance programme gives you a strong foundation for South Korea as well. The maximum penalty is 30 million KRW (~$20,500 USD) — significantly lower than EU penalties, which reflects South Korea’s support-first model.
If your Korean user base is below 1 million daily users and Korean revenue is below 10 billion KRW, South Korea obligations don’t currently apply. If either threshold is approaching, build EU AI Act compliance first — it transfers directly.
How Do You Prioritise Which AI Laws Need Action Now Versus Monitoring?
Multiple laws are active or coming into force in 2026. With limited compliance resources, you need to prioritise based on whether the law applies to your product now, the deadline, and what happens if you get it wrong.
Immediate Action
NYC Local Law 144 — in effect now. If you’re using any AEDT for hiring affecting NYC-based candidates, commission a bias audit. Penalties ($500 for a first violation, up to $1,500 for each subsequent one) start accumulating from the first non-compliant application processed.
Colorado SB 24-205 — effective June 30, 2026. Start your impact assessment programme now. These take months to prepare properly. Starting in May 2026 is too late.
EU AI Act high-risk obligations — effective August 2, 2026. If any product feature maps to Annex III and you have EU customers, begin conformity assessment planning now. The Digital Omnibus hasn’t been adopted — plan for August 2026.
Texas HB 149 and Illinois HB 3773 — in effect January 1, 2026. Compliance hygiene checks: verify your AI use cases don’t involve prohibited biometric capture, protected-characteristic discrimination, or behaviour manipulation.
Monitor Only
California SB 53 and New York RAISE Act — monitor for revenue growth approaching $500M. These don’t apply to API-using deployers below the threshold.
Trump preemption strategy (DOJ AI Litigation Task Force, EO 14365) — signals federal intent to challenge state AI laws, including Colorado’s. But no court has ruled Colorado SB 24-205 preempted. Deferring compliance on preemption grounds risks a catch-up programme under enforcement pressure if preemption fails. Treat it as a monitoring item, not a current exemption.
South Korea AI Basic Act — monitor if your Korean user base is growing toward 1 million daily users. Build EU AI Act compliance as the proxy programme.
Worked Example: $80M ARR Company with an HR Screener, No EU Customers
- Priority 1 — NYC Local Law 144: commission a bias audit now.
- Priority 2 — Colorado SB 24-205: begin the impact assessment programme now.
- Priority 3 — EU AI Act: monitor; begin assessment if EU market entry is planned.
- Priority 4 — California SB 53 / RAISE Act: no action — below threshold, deployer not developer.
- Priority 5 — South Korea AI Basic Act: no action unless the Korean user base exceeds 100K daily and is growing.
The most common planning mistake is treating the frontier-model laws (SB 53, RAISE Act) as the main compliance obligations while underweighting Colorado and NYC — the deployer-focused laws that apply without revenue floors. For most SMBs, the SB 53 analysis resolves in a single sentence. Colorado and NYC need action now.
For broader regulatory context, refer to our overview of AI compliance obligations in 2026.
Frequently Asked Questions
Does SB 53 apply to a company with $80M ARR that uses the OpenAI API?
No. California SB 53 targets “large frontier developers” with more than $500M in revenues training models at more than 10^26 FLOPs. A company at $80M ARR using a third-party API is a deployer, not a frontier model developer. SB 53 doesn’t apply. Colorado SB 24-205 and NYC Local Law 144 may still apply depending on use case and user location.
What is the difference between being an AI developer and an AI deployer under US state laws?
A developer creates or substantially modifies an AI model. A deployer integrates a third-party AI system into their product. Most SMBs using OpenAI, Google, or Anthropic APIs are deployers. Under Colorado SB 24-205, deployers are responsible for impact assessments, vendor due diligence, and consumer notification — not model-level conformity assessments.
Does Colorado SB 24-205 apply to companies outside Colorado?
Yes, if the AI system makes consequential decisions affecting Colorado residents. Scope is determined by where the consumer is located, not where the company is based.
What triggers the EU AI Act’s high-risk classification?
Two conditions must both be met: (1) the AI system falls within a sector listed in Annex III, AND (2) it performs the specific use case listed for that sector. The Article 6(3) derogation must also not apply.
What is the sub-50-employee exemption under Colorado SB 24-205, and does it apply to my company?
Colorado SB 24-205 exempts deployers with fewer than 50 employees — but this exemption is forfeited if you’ve used your own proprietary data to train or fine-tune the AI system. Customising a third-party model on your own dataset loses the exemption regardless of headcount.
What is an automated employment decision tool (AEDT) under NYC Local Law 144?
An AEDT is any computational process — machine learning, statistical modelling, data analytics, or AI — that issues a score, classification, or recommendation used to substantially assist or replace a discretionary employment decision. Resume screeners, candidate ranking systems, interview assessment platforms, and skills-based testing tools with ML components all qualify.
Is the Article 6(3) derogation available for my recommendation engine?
Possibly. The derogation applies if the system performs only a narrow procedural task, improves a previously completed human activity without replacing human assessment, or detects patterns without issuing decisions — AND does not profile natural persons. A general recommendation engine not affecting consequential domains is a plausible candidate. One used in employee performance management or financial product targeting is not.
Does the Trump preemption strategy mean I can delay Colorado SB 24-205 compliance?
No. No court has ruled Colorado SB 24-205 preempted. The law is enforceable from June 30, 2026. Deferring compliance is a gamble — if preemption fails, you’re running a catch-up programme under enforcement pressure. Build the programme now.
How does the South Korea AI Basic Act differ from the EU AI Act for a FinTech SaaS product?
The classification criteria are similar — South Korea mirrors EU Annex III sectors. The key difference is the threshold layer: South Korea adds market-presence triggers (1 trillion KRW revenue, 10 billion KRW Korean sales, or 1 million daily Korean users) that the EU AI Act doesn’t impose. An EU-aligned classification programme gives you a strong foundation for South Korea compliance.
Can I rely on my AI vendor’s compliance claims to satisfy Colorado SB 24-205?
No. Compliance obligations don’t transfer to vendors. Colorado requires deployers to conduct their own impact assessments, confirm vendor compliance through independent due diligence, and maintain their own records. The Mobley v. Workday litigation shows vendor liability is expanding, but expanding vendor liability is no substitute for your own compliance programme.
What does “algorithmic discrimination” mean under Colorado SB 24-205, and how does it differ from disparate impact under NYC Local Law 144?
Algorithmic discrimination under Colorado SB 24-205 covers an AI system producing discriminatory outcomes in consequential decisions based on a protected characteristic, including facially neutral outcomes with disparate effects. NYC Local Law 144 applies the four-fifths (80%) rule, measuring whether a protected group’s selection rate falls below 80% of the rate for the group with the highest selection rate. Colorado’s standard is broader and principles-based; NYC’s is statistical and binary. Both must be satisfied independently.