

Beginner · 10 min read

Which AI Laws Apply to You?

Answer 5 questions to determine exactly which regulations apply to your business.

Assessment · Geographic Scope · Use Cases

Which AI Laws Apply to Your Business? (2-Minute Assessment)

Last Updated: January 23, 2026
Next Review: April 23, 2026


"I Have No Idea Which Laws Apply to Us"

A Series B founder called us last month, panicking.

His AI-powered recruiting platform had just landed its first enterprise customer: a Fortune 500 company with offices in NYC, Colorado, and California.

The customer's legal team asked: "Which AI regulations do you comply with?"

His answer: "Uh... GDPR?"

Their response: "What about NYC Local Law 144? Colorado AI Act? Our offices are in those jurisdictions."

His realization: He had no idea which laws applied to his business.

The deal stalled for 3 months while he figured it out. Cost: $500K in delayed revenue.

Here's how to avoid that mistake.


The 5-Question Framework

Determining which AI laws apply isn't complicated. Answer these 5 questions:

Question 1: Where Are Your Users Located?

Why it matters: AI laws apply based on where your users are, not where your company is.

Example: You're a startup in Texas. But if your hiring AI screens candidates for jobs in NYC, you're subject to NYC Local Law 144.

Check:

  • [ ] Do you have users in New York City? → NYC Local Law 144 applies
  • [ ] Do you have users in Colorado? → Colorado AI Act applies
  • [ ] Do you have users in Illinois? → Illinois BIPA applies (if biometric AI)
  • [ ] Do you have users in California? → CCPA/CPRA applies
  • [ ] Do you have users in the EU? → GDPR + EU AI Act apply
  • [ ] Do you sell to the US federal government? → AI Executive Order 14110 applies

Tool: Law Finder - Get personalized results in 2 minutes
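
If you'd rather capture this check in code than in a spreadsheet, here's a minimal sketch in Python. The jurisdiction keys and law labels are illustrative placeholders for the checklist above, not an official taxonomy.

# Map user jurisdictions to the laws they can trigger (illustrative labels only).
JURISDICTION_LAWS = {
    "nyc": ["NYC Local Law 144 (if AI is used in hiring)"],
    "colorado": ["Colorado AI Act (if high-risk AI)"],
    "illinois": ["Illinois BIPA (if biometric AI)"],
    "california": ["CCPA/CPRA (if personal data)"],
    "eu": ["GDPR", "EU AI Act"],
    "us_federal": ["AI Executive Order 14110 (federal contractors)"],
}

def laws_by_location(user_locations):
    """Return the geographic laws triggered by where your users are."""
    triggered = []
    for location in user_locations:
        triggered.extend(JURISDICTION_LAWS.get(location.lower(), []))
    return triggered

print(laws_by_location(["NYC", "Colorado", "EU"]))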


Question 2: What Does Your AI Do?

Why it matters: "High-risk" AI triggers more regulations.

High-risk AI (more regulations):

  • Hiring/recruitment - Resume screening, candidate ranking, interview scheduling
  • Credit decisions - Loan approvals, credit scoring, interest rate determination
  • Insurance - Coverage decisions, premium pricing, claims processing
  • Healthcare - Diagnosis, treatment recommendations, patient triage
  • Housing - Rental/mortgage approvals, tenant screening
  • Education - Admissions, financial aid, grading
  • Biometric identification - Facial recognition, fingerprint scanning, voice recognition

Low-risk AI (fewer regulations):

  • Customer service chatbots - General inquiries, FAQs
  • Content recommendations - Product suggestions, content feeds
  • Internal analytics - Business intelligence, forecasting
  • Spam filters - Email filtering, content moderation

Check your AI:

  • [ ] Does it make or substantially assist decisions about people?
  • [ ] Does it affect access to opportunities (jobs, credit, housing)?
  • [ ] Does it process biometric data?
  • [ ] Does it process health data?

If yes to any: You're likely subject to AI-specific regulations.
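
As a rough sketch, the same screen can be written as a simple predicate. The field names below mirror the checklist, not any statute's exact definitions.

# Rough high-risk screen mirroring the checklist above (not a legal test).
def is_likely_high_risk(ai_system):
    """ai_system: dict of booleans describing what the AI does."""
    return any([
        ai_system.get("decides_about_people", False),            # hiring, credit, housing...
        ai_system.get("affects_access_to_opportunities", False),
        ai_system.get("processes_biometric_data", False),
        ai_system.get("processes_health_data", False),
    ])

resume_screener = {"decides_about_people": True, "affects_access_to_opportunities": True}
print(is_likely_high_risk(resume_screener))  # True -> expect AI-specific regulations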


Question 3: What Data Does Your AI Process?

Why it matters: Personal data triggers data protection laws.

Personal data (triggers GDPR, CCPA, HIPAA):

  • Names, addresses, phone numbers, emails
  • Social Security numbers, driver's licenses
  • Financial data (bank accounts, credit cards)
  • Health data (medical records, diagnoses)
  • Biometric data (facial scans, fingerprints)
  • Location data (GPS, IP addresses)
  • Behavioral data (browsing history, app usage)

Check:

  • [ ] Do you process personal data of people in the EU? → GDPR applies (up to €20M or 4% of global revenue)
  • [ ] Do you process California residents' data? → CCPA/CPRA applies
  • [ ] Do you process health data (US)? → HIPAA applies
  • [ ] Do you process biometric data (Illinois)? → BIPA applies ($1,000-$5,000 per violation)

Even if you don't collect it directly: If your AI vendor processes personal data on your behalf, you're still responsible.
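
In code, the data-type check is another small lookup. This sketch condenses the checklist above; the keys are assumed labels from your own data inventory, not legal categories.

# Data-type triggers for data protection laws (condensed from the checklist above).
DATA_LAW_TRIGGERS = {
    "eu_personal_data": "GDPR",
    "california_personal_data": "CCPA/CPRA",
    "us_health_data": "HIPAA",
    "illinois_biometric_data": "BIPA",
}

def data_protection_laws(data_types):
    """Return the data protection laws triggered by the data your AI touches."""
    return sorted({DATA_LAW_TRIGGERS[d] for d in data_types if d in DATA_LAW_TRIGGERS})

print(data_protection_laws(["eu_personal_data", "illinois_biometric_data"]))
# ['BIPA', 'GDPR']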


Question 4: Who Are Your Customers?

Why it matters: Selling to regulated industries adds requirements.

Regulated industry customers:

  • Healthcare → HIPAA, FDA requirements
  • Finance → FINRA, SEC, GLBA requirements
  • Government → FedRAMP, AI Executive Order 14110
  • Education → FERPA requirements
  • Critical infrastructure → CISA requirements

What they'll require:

  • SOC 2 Type II report
  • Industry-specific compliance attestations
  • Vendor risk assessments
  • Business Associate Agreements (healthcare)
  • Data Processing Agreements (EU)

Check:

  • [ ] Do you sell to healthcare organizations? → HIPAA + BAA required
  • [ ] Do you sell to financial institutions? → FINRA compliance required
  • [ ] Do you sell to the federal government? → FedRAMP + AI EO 14110 required
  • [ ] Do you sell to EU companies? → GDPR DPA required

Question 5: How Automated Is Your AI?

Why it matters: Fully automated decisions trigger stricter rules.

GDPR Article 22 applies when:

  • Decision is solely automated (no human involvement)
  • Decision has legal or similarly significant effects
  • Processes personal data of people in the EU

Examples of "significant effects":

  • Automatic loan denial
  • Automatic job rejection
  • Automatic insurance denial
  • Automatic benefit termination

Check:

  • [ ] Does your AI make decisions without human review?
  • [ ] Do these decisions significantly affect people's lives?
  • [ ] Do you process personal data of people in the EU?

If yes to all three: GDPR Article 22 applies. You need:

  • Explicit consent, contractual necessity, or legal authorization
  • Meaningful information about decision logic
  • Human review process
  • Right to contest decisions
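
Here is the three-part test and the resulting obligations as a short sketch. It encodes the checklist above, not the regulation's full text, so treat it as a screening aid only.

# Simplified screen for GDPR Article 22, based on the checklist above.
def article_22_applies(solely_automated, significant_effects, eu_personal_data):
    return solely_automated and significant_effects and eu_personal_data

ARTICLE_22_OBLIGATIONS = [
    "explicit consent, contractual necessity, or legal authorization",
    "meaningful information about the decision logic",
    "a human review process",
    "a way for individuals to contest decisions",
]

if article_22_applies(solely_automated=True, significant_effects=True, eu_personal_data=True):
    print("Article 22 likely applies. You need:")
    for obligation in ARTICLE_22_OBLIGATIONS:
        print("-", obligation)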

The Compliance Matrix

Based on your answers, here's what applies:

Scenario 1: AI Hiring Tool (NYC Users)

Your situation:

  • AI screens resumes for NYC employers
  • Makes hiring recommendations
  • Processes candidate personal data

Laws that apply:

  1. NYC Local Law 144

    • Annual bias audit required ($15K-$50K)
    • Candidate notice (10+ days before screening)
    • Publish audit results
    • Penalty: $500-$1,500/day
  2. EEOC Guidance

    • Disparate impact testing
    • Reasonable accommodation
    • Vendor liability doesn't eliminate employer liability
  3. GDPR (if EU candidates)

    • Article 22 automated decision-making
    • DPIA required
    • Consent mechanisms
    • Penalty: Up to €20M or 4% revenue

Action required: Commission bias audit, implement candidate notice, conduct DPIA (if EU)

Timeline: Before deployment or immediately if already deployed


Scenario 2: AI Credit Scoring (Colorado)

Your situation:

  • AI determines loan approvals
  • Serves Colorado residents
  • Fully automated decisions

Laws that apply:

  1. Colorado AI Act

    • Impact assessment required
    • Risk management policy
    • Consumer disclosures
    • Penalty: Up to $20,000/violation
    • Effective: February 1, 2026
  2. FCRA (Fair Credit Reporting Act)

    • Adverse action notices
    • Accuracy requirements
    • Dispute process
  3. ECOA (Equal Credit Opportunity Act)

    • Non-discrimination requirements
    • Reasons for adverse action

Action required: Complete impact assessment before Feb 1, 2026

Timeline: Urgent (enforcement begins February 1, 2026)


Scenario 3: Healthcare AI (Diagnosis)

Your situation:

  • AI analyzes medical images
  • Provides diagnosis recommendations
  • Used by US hospitals

Laws that apply:

  1. FDA Regulation

    • Software as Medical Device (SaMD)
    • Premarket review (510(k) clearance or PMA approval)
    • Clinical validation
    • Post-market monitoring
  2. HIPAA

    • Protected Health Information (PHI) safeguards
    • Business Associate Agreement required
    • Security Rule compliance
    • Breach notification
    • Penalty: $100-$50,000/violation
  3. State medical device laws

    • Varies by state

Action required: FDA clearance, HIPAA compliance, BAA with customers

Timeline: FDA clearance before marketing (6-18 months)


Scenario 4: Facial Recognition (Illinois)

Your situation:

  • AI identifies people from photos
  • Used in Illinois
  • Collects biometric data

Laws that apply:

  1. Illinois BIPA

    • Written policy for biometric data
    • Informed consent before collection
    • Disclosure of purpose and duration
    • No selling without consent
    • Penalty: $1,000-$5,000/violation
    • Private right of action (individuals can sue)
  2. EU AI Act (if EU users)

    • Biometric identification = high-risk AI
    • Conformity assessment required
    • CE marking
    • Registration in EU database
    • Penalty: Up to €35M or 7% revenue

Action required: Obtain informed consent, adopt a written policy, and weigh whether the use case justifies the risk

Timeline: Before collecting any biometric data

Note: Facebook paid a $650M BIPA settlement in 2021. High-risk area.


Scenario 5: Customer Service Chatbot

Your situation:

  • AI answers customer questions
  • No consequential decisions
  • Processes basic contact info

Laws that apply:

  1. GDPR (if EU users)

    • Privacy policy
    • Consent for data processing
    • Data retention limits
    • Penalty: Up to €20M or 4% revenue
  2. CCPA (if CA users and you meet a CCPA threshold, such as >$25M revenue)

    • Privacy policy
    • Opt-out mechanism
    • Data deletion rights
    • Penalty: $2,500-$7,500/violation
  3. FTC Guidance

    • Don't exaggerate AI capabilities
    • Disclose limitations
    • No deceptive claims

Action required: Privacy policy, consent mechanisms, accurate marketing

Timeline: Before deployment

Good news: Lower risk than consequential AI. Fewer requirements.


Geographic Breakdown

United States

Federal:

  • AI Executive Order 14110 - Federal contractors, critical infrastructure, large AI models
  • FTC Section 5 - Deceptive AI claims, algorithmic discrimination
  • EEOC Guidance - AI in employment decisions

State:

  • New York City - Local Law 144 (AI in hiring)
  • Colorado - AI Act SB24-205 (high-risk AI, effective Feb 1, 2026)
  • Illinois - BIPA (biometric AI, enforced since 2008)
  • Utah - AI Policy Act (regulated occupations, effective May 1, 2024)
  • California - CCPA/CPRA (data privacy), plus AI-specific laws such as AB 2013 (training data transparency); SB 1047 was vetoed in 2024

Industry-Specific:

  • Healthcare - FDA (medical device AI), HIPAA (health data)
  • Finance - FINRA (trading/advisory AI), GLBA (financial data)
  • Credit - FCRA, ECOA (credit decisions)

European Union

EU-Wide:

  • EU AI Act - Risk-based regulation (phased 2025-2027)

    • Prohibited AI: €35M or 7% revenue
    • High-risk AI: Conformity assessment, CE marking
    • Limited risk: Transparency obligations
  • GDPR - Data protection (effective 2018)

    • Article 22: Automated decision-making
    • Up to €20M or 4% revenue

Member State Laws: Individual countries may have additional requirements


Other Jurisdictions

United Kingdom:

  • Pro-innovation approach (sector-specific)
  • GDPR retained (UK GDPR)
  • No comprehensive AI law yet

Canada:

  • AIDA (Artificial Intelligence and Data Act) - Proposed, part of Bill C-27
  • High-impact AI systems
  • Similar to EU approach

China:

  • Multiple AI regulations (2023-2024)
  • Generative AI rules
  • Recommendation algorithm rules
  • Content moderation requirements

The Decision Tree

START: Do you use AI?
│
├─ NO → No AI-specific compliance (but check data privacy)
│
└─ YES → Where are your users?
    │
    ├─ NYC → NYC Local Law 144 (if hiring AI)
    │
    ├─ Colorado → Colorado AI Act (if high-risk AI, Feb 1, 2026)
    │
    ├─ Illinois → BIPA (if biometric AI)
    │
    ├─ California → CCPA/CPRA (if personal data)
    │
    ├─ EU → GDPR + EU AI Act
    │
    └─ Federal → AI EO 14110 (if contractor/critical infrastructure)
    
    THEN: What does your AI do?
    │
    ├─ Hiring → NYC LL144, Colorado AI Act, EEOC
    │
    ├─ Credit → Colorado AI Act, FCRA, ECOA, FINRA
    │
    ├─ Healthcare → FDA, HIPAA
    │
    ├─ Biometric → BIPA, EU AI Act
    │
    └─ Low-risk → GDPR/CCPA (if personal data), FTC
    
    THEN: Check data type
    │
    ├─ Personal data → GDPR, CCPA
    │
    ├─ Health data → HIPAA
    │
    ├─ Biometric → BIPA
    │
    └─ No personal data → Minimal requirements

Use our interactive tool: Law Finder - Get personalized results in 2 minutes
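
The tree also translates directly into code. The sketch below strings the three stages together; the inputs and law labels are simplified stand-ins for the questions above, not a substitute for the interactive tool or legal review.

# Walk the decision tree above: location -> use case -> data type (simplified).
def applicable_laws(uses_ai, user_locations, use_case, data_types):
    if not uses_ai:
        return ["No AI-specific compliance (but check data privacy)"]

    laws = set()

    location_laws = {
        "nyc": "NYC Local Law 144 (hiring AI)",
        "colorado": "Colorado AI Act (high-risk AI)",
        "illinois": "Illinois BIPA (biometric AI)",
        "california": "CCPA/CPRA (personal data)",
        "eu": "GDPR + EU AI Act",
        "federal": "AI EO 14110 (contractor/critical infrastructure)",
    }
    laws.update(location_laws[loc] for loc in user_locations if loc in location_laws)

    use_case_laws = {
        "hiring": ["NYC LL144", "Colorado AI Act", "EEOC guidance"],
        "credit": ["Colorado AI Act", "FCRA", "ECOA"],
        "healthcare": ["FDA", "HIPAA"],
        "biometric": ["BIPA", "EU AI Act"],
        "low_risk": ["FTC guidance"],
    }
    laws.update(use_case_laws.get(use_case, []))

    data_laws = {"personal": ["GDPR", "CCPA"], "health": ["HIPAA"], "biometric": ["BIPA"]}
    for data_type in data_types:
        laws.update(data_laws.get(data_type, []))

    return sorted(laws)

print(applicable_laws(True, ["nyc", "eu"], "hiring", ["personal"]))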


Common Scenarios

"We're a B2B SaaS company. Do AI laws apply?"

Depends on what your AI does.

If your AI helps customers make decisions about people (hiring, credit, etc.):

  • You're subject to the same laws as your customers
  • Example: HR tech with AI resume screening → NYC LL144 applies if customers are in NYC

If your AI is internal (analytics, forecasting):

  • Fewer requirements
  • But still need data privacy compliance (GDPR, CCPA)

What your customers will require:

  • SOC 2 Type II (60% of enterprises require it)
  • Bias audit reports (if AI affects hiring)
  • Data Processing Agreements (if EU customers)

"We use AI from OpenAI/Anthropic/etc. Do we still need compliance?"

Yes. You're still responsible.

Why:

  • NYC LL144 requires employer-specific bias audits (vendor audits don't count)
  • GDPR makes you a "data controller" (vendor is "data processor")
  • You're liable for how you use the AI, even if you didn't build it

What you need:

  • Review vendor's compliance documentation
  • Commission your own audits when required
  • Ensure vendor contracts allocate responsibilities
  • Implement your own controls (logging, monitoring, human oversight)

Real case: Retail chain paid $225,000 for relying on vendor audit instead of commissioning employer-specific audit.


"We're pre-revenue. Do we need to worry about this now?"

Yes, if:

  • You're processing personal data (GDPR, CCPA apply immediately)
  • You're in a regulated industry (healthcare, finance)
  • You're raising funding (investors check compliance)

No, if:

  • You're still in R&D with no users
  • You're not processing any real data yet

But: Build compliance in from day one. Retrofitting is 8x more expensive.

Minimum for pre-revenue:

  • AI inventory
  • Privacy policy
  • Terms of service
  • Basic audit logging

Cost: $5,000-$15,000 (mostly legal review)
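
"Basic audit logging" can start very small. Here's a minimal sketch that appends one JSON line per AI decision; the field names are an assumed schema, not a mandated one, and you should avoid writing raw personal data into the log.

# Minimal append-only audit log for AI decisions (illustrative schema).
import json
from datetime import datetime, timezone

def log_ai_decision(path, system_name, input_summary, output_summary, human_reviewed):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "input_summary": input_summary,      # summarize; don't log raw personal data
        "output_summary": output_summary,
        "human_reviewed": human_reviewed,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("ai_audit.log", "resume-screener-v1",
                "candidate 1042, role: backend engineer",
                "recommended for interview", human_reviewed=True)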


What to Do Next

Step 1: Determine Which Laws Apply (2 minutes)

Use our tool: Law Finder

Or manually check:

  • [ ] User locations → Geographic laws
  • [ ] AI use case → High-risk laws
  • [ ] Data types → Data protection laws
  • [ ] Customer industries → Industry laws
  • [ ] Automation level → GDPR Article 22

Step 2: Assess Your Gaps (15 minutes)

Use our tool: Self-Audit

Or manually assess:

  • [ ] Do you have required audits? (bias audit, impact assessment)
  • [ ] Do you have required disclosures? (privacy policy, candidate notice)
  • [ ] Do you have required controls? (logging, monitoring, human oversight)
  • [ ] Do you have required documentation? (AI inventory, policies)
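
If you want to track gaps programmatically, a small sketch like the one below works: list the artifacts each law requires, compare against what you have on file, and surface what's missing. The required-artifact lists are condensed from this guide, not exhaustive.

# Compare required artifacts per law against what you already have (condensed).
REQUIRED_ARTIFACTS = {
    "NYC LL144": ["bias audit", "candidate notice", "published audit results"],
    "Colorado AI Act": ["impact assessment", "risk management policy", "consumer disclosures"],
    "GDPR": ["privacy policy", "DPIA", "consent mechanism"],
    "BIPA": ["written biometric policy", "informed consent"],
}

def compliance_gaps(applicable_laws, artifacts_on_file):
    gaps = {}
    for law in applicable_laws:
        missing = [a for a in REQUIRED_ARTIFACTS.get(law, []) if a not in artifacts_on_file]
        if missing:
            gaps[law] = missing
    return gaps

print(compliance_gaps(["NYC LL144", "GDPR"], {"privacy policy", "candidate notice"}))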

Step 3: Calculate Your Risk (5 minutes)

Use our tool: Penalty Calculator

Or manually calculate:

  • NYC LL144: $500-$1,500/day × days of violation
  • Colorado: $20,000 × number of violations
  • EU AI Act: €35M or 7% revenue (for prohibited/high-risk)
  • GDPR: €20M or 4% revenue
  • BIPA: $1,000-$5,000 × number of violations
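
The arithmetic is simple enough to sanity-check in a few lines. The figures below are copied from the list above; the revenue-based caps assume the "whichever is higher" convention, and none of this replaces a regulator's actual penalty calculation.

# Back-of-envelope penalty exposure using the figures listed above.
def nyc_ll144_exposure(days_in_violation, per_day=1_500):
    return days_in_violation * per_day           # $500-$1,500/day; uses the upper bound

def colorado_exposure(violations, per_violation=20_000):
    return violations * per_violation

def gdpr_exposure(global_annual_revenue_eur):
    return max(20_000_000, 0.04 * global_annual_revenue_eur)   # whichever is higher

def bipa_exposure(violations, per_violation=5_000):
    return violations * per_violation            # $1,000 negligent / $5,000 intentional

print(nyc_ll144_exposure(days_in_violation=90))              # 135000
print(gdpr_exposure(global_annual_revenue_eur=50_000_000))   # 20000000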

Step 4: Create Your Roadmap

Based on urgency:

Immediate (within 30 days):

  • [ ] If using AI in hiring (NYC) → Commission bias audit
  • [ ] If high-risk AI (Colorado) → Complete impact assessment (due Feb 1)
  • [ ] If biometric AI (Illinois) → Obtain consent, written policy
  • [ ] If processing EU data → Conduct DPIA

Short-term (within 90 days):

  • [ ] Implement logging and monitoring
  • [ ] Update privacy policies
  • [ ] Create AI governance documentation
  • [ ] Train team on compliance

Long-term (within 6-12 months):

  • [ ] Get SOC 2 Type II (if selling to enterprises)
  • [ ] Establish compliance program
  • [ ] Hire compliance expertise
  • [ ] Set up ongoing monitoring

Full guide: AI Compliance: 30-Day Action Plan


Frequently Asked Questions

What if laws conflict?

Follow the strictest requirement.

Example: You have users in NYC (LL144) and Colorado (AI Act).

  • NYC requires annual bias audits
  • Colorado requires impact assessments
  • Do both

Why: Each law applies independently. You can't pick one.


What if I'm not sure if a law applies?

When in doubt, assume it applies.

Why: Cost of compliance < cost of violation

Example: Not sure if your AI qualifies as "high-risk" under Colorado AI Act?

  • Cost of impact assessment: $5,000-$20,000
  • Cost of violation: Up to $20,000 per violation
  • Do the assessment

Get help: Book consultation - 30 min, free


Do I need a lawyer?

For legal review: Yes

For implementation: Not necessarily

What you need lawyers for:

  • Reviewing policies and contracts
  • Interpreting ambiguous regulations
  • Responding to enforcement actions
  • Advising on legal strategy

What you can do yourself:

  • AI inventory
  • Gap analysis
  • Control implementation
  • Documentation

Hybrid approach: DIY + legal review = most cost-effective


How often do I need to reassess?

Triggers for reassessment:

  • [ ] New AI system deployed
  • [ ] Material changes to existing AI
  • [ ] New geographic markets entered
  • [ ] New regulations take effect
  • [ ] Customer requirements change

Minimum frequency: Quarterly

Best practice: Ongoing monitoring with quarterly formal reviews


Next Steps

If you're just starting:

  1. Run Law Finder - Which laws apply (2 min)
  2. Run Self-Audit - Identify gaps (15 min)
  3. Read: What is AI Compliance - Understand requirements
  4. Download Checklist - 30-day action plan

If you know which laws apply:

  1. Run Penalty Calculator - Understand risk (5 min)
  2. Read: 30-Day Action Plan - Implementation guide
  3. Read: Common Pitfalls - Avoid mistakes
  4. Book Consultation - Get expert help (30 min, free)

If you need help:

  1. Schedule Demo - See HAIEC platform
  2. Talk to Sales - Enterprise compliance program
  3. Read: Why Compliance Matters - Business case

Disclaimer

This is educational content, not legal advice. AI compliance requirements vary by jurisdiction, industry, and specific use case. Consult qualified legal counsel for advice specific to your situation.

HAIEC provides compliance tools and educational resources but is not a law firm and does not provide legal advice.


Last Updated: January 23, 2026
Next Review: April 23, 2026
Regulatory Sources:

  • NYC Local Law 144 (2021)
  • Colorado SB24-205 (2024)
  • EU AI Act (Regulation 2024/1689)
  • GDPR (Regulation 2016/679)
  • Illinois BIPA (740 ILCS 14)
  • AI Executive Order 14110 (2023)

Questions? Contact us or book a free consultation.