
About the Author

Subodh KC is the founder of HAIEC and author of the Instruction Stack Audit Framework (ISAF). His research on AI accountability has been published on Zenodo (DOI: 10.5281/zenodo.14555643).

Last reviewed: January 2026

How Regulators Enforce AI Compliance Laws

Understanding enforcement mechanisms helps you assess real compliance risk—not just theoretical obligations.

NYC Local Law 144: Complaint-Driven + Proactive Audits

The NYC Department of Consumer and Worker Protection (DCWP) enforces Local Law 144 through multiple channels:

Complaint-Driven Investigations

  • Job candidates can file complaints if they suspect AEDT use without proper notice
  • Current employees can report missing bias audit disclosures
  • Complaints trigger formal investigations with document requests
  • DCWP has subpoena power to obtain employer records

Proactive Enforcement Actions

  • DCWP monitors public job postings for AI screening disclosures
  • Routine audits of employers in high-risk sectors (finance, tech, healthcare)
  • Cross-referencing bias audit publication requirements with careers pages
  • Checking for 10-day advance notice in application workflows

Real Enforcement Timeline
Enforcement began July 5, 2023 [1]. Initial focus was on education and warning letters. As of 2024, DCWP has shifted to penalty assessment for repeat violations.

Penalty Structure

  • First violation: $500 per day (often warning letter instead)
  • Subsequent violations: $1,500 per day
  • Separate violation streams for: missing audit, missing notice, missing publication
  • Penalties accrue in parallel—three violation types × 180 days × $500-$1,500/day = $270,000-$810,000 exposure
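The exposure figure above can be sketched as a quick estimate, assuming each violation type accrues independently at a flat daily rate (the function and constant names are illustrative, not from DCWP guidance):

```python
# Illustrative NYC LL144 exposure estimate: three parallel violation
# streams (missing audit, missing notice, missing publication), each
# accruing a flat per-day penalty.
FIRST_VIOLATION_RATE = 500       # dollars/day for a first violation
SUBSEQUENT_RATE = 1_500          # dollars/day for subsequent violations
VIOLATION_STREAMS = 3            # audit, notice, publication

def ll144_exposure(days: int, daily_rate: int,
                   streams: int = VIOLATION_STREAMS) -> int:
    """Total dollar exposure across all violation streams."""
    return streams * days * daily_rate

low = ll144_exposure(180, FIRST_VIOLATION_RATE)
high = ll144_exposure(180, SUBSEQUENT_RATE)
print(f"${low:,} - ${high:,}")  # $270,000 - $810,000
```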

Colorado AI Act: Attorney General Enforcement (Starting February 2026)

Colorado's AI Act enforcement mirrors the state's existing Consumer Protection Act framework:

Enforcement Triggers

  • Consumer complaints about algorithmic discrimination
  • Pattern-or-practice violations identified through market surveillance
  • Failure to respond to Attorney General inquiries
  • Missing or inadequate impact assessments

Penalty Framework

  • Up to $20,000 per violation
  • No daily accumulation (unlike NYC LL144)
  • Enhanced penalties for intentional violations
  • Injunctive relief requiring system changes

EU AI Act: Multi-Tier Enforcement (Phased 2025-2027)

Penalty Tiers

  • Prohibited AI systems: Up to €35M or 7% global revenue
  • High-risk non-compliance: Up to €15M or 3% global revenue
  • Documentation failures: Up to €7.5M or 1.5% global revenue
  • Incorrect information: Up to €7.5M or 1% global revenue
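These caps generally work as "the higher of a fixed amount or a share of worldwide turnover." A minimal sketch of that rule, using the tier figures above and a purely hypothetical revenue number:

```python
# Illustrative EU AI Act penalty ceiling: the higher of a fixed cap and a
# percentage of worldwide annual turnover (tier figures from the list
# above; the example revenue is hypothetical).
def penalty_ceiling(fixed_cap_eur: int, revenue_share: float,
                    global_revenue_eur: int) -> float:
    return max(fixed_cap_eur, revenue_share * global_revenue_eur)

# A deployer of a prohibited system with EUR 1B worldwide turnover:
print(penalty_ceiling(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed cap dominates: at EUR 100M turnover, 7% is only EUR 7M, so the EUR 35M cap governs.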

Common AI Compliance Violations We See

These patterns emerge across industries and jurisdictions:

1. Shadow AI: Unknown Systems Creating Exposure

The Problem: Employees adopt AI tools without IT/compliance approval. Marketing uses AI copywriting. Sales uses AI lead scoring. HR uses AI resume screening from their ATS vendor.

Why It's Risky:

  • You can't comply with laws you don't know apply
  • Vendor AI doesn't exempt you from deployer obligations
  • Shadow AI often lacks documentation, logging, or oversight

Real Example: A financial services firm discovered its recruiting team had enabled AI resume screening in its ATS. No bias audit. No candidate notice. Six months of violations at $500-$1,500 per day = $90,000-$270,000 exposure per violation type under NYC LL144.

2. Vendor Compliance Confusion

The Problem: "Our vendor says they're compliant, so we're covered."

Why It's Wrong:

  • NYC LL144: Employer must commission independent bias audit (vendor audit doesn't count)
  • Colorado AI Act: Deployer obligations exist even if developer is compliant
  • EU AI Act: Deployer must verify provider compliance and conduct own assessments

3. Missing Candidate/Consumer Notice

Violation Examples:

  • NYC LL144: No 10-day advance notice to job candidates
  • Colorado AI Act: No disclosure of AI use in consequential decisions
  • GDPR Article 13: No information about automated decision-making logic

Why It Matters: Each affected individual can be a separate violation. 1,000 candidates × $500/day = $500,000/day exposure.
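As a quick sanity check on that figure (treating each affected candidate as a separate daily violation, which is an assumption about how a regulator might count, not settled enforcement practice):

```python
# Illustrative per-individual exposure at NYC LL144's first-violation
# rate, assuming each affected candidate is a separate daily violation.
def daily_exposure(affected_individuals: int, rate_per_day: int) -> int:
    return affected_individuals * rate_per_day

print(f"${daily_exposure(1_000, 500):,}/day")  # $500,000/day
```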

Frequently Asked Questions

Do AI laws apply if I only use ChatGPT for internal productivity?

Generally no, but context matters. Most AI laws target automated decision-making that affects external parties (consumers, employees, job candidates). Internal productivity tools typically fall outside scope. However, if ChatGPT outputs influence hiring decisions, NYC LL144 may apply. If it processes customer data, GDPR applies. Document how internal AI is used and confirm it doesn't influence consequential decisions.

What if my AI vendor says they're compliant with all laws?

Vendor compliance doesn't transfer to you. NYC LL144 requires the employer to commission an independent bias audit—vendor audits don't satisfy this requirement. Colorado AI Act places distinct obligations on deployers that can't be outsourced. EU AI Act requires deployers to verify provider compliance and conduct their own conformity assessments. Request vendor compliance documentation, verify it covers your specific use case, and conduct your own deployer-level assessment.

How often do I need to update my AI compliance documentation?

Minimum frequencies: NYC LL144 requires annual bias audits. Colorado AI Act requires impact assessment updates when systems change materially. EU AI Act requires continuous technical documentation updates. GDPR requires DPIA updates when processing changes. Best practice: Review all compliance documentation annually, plus triggered reviews when new AI systems are deployed, existing systems are modified materially, new regulations take effect, or after any AI-related incident.

Can I be fined for AI I didn't know my company was using?

Yes. Ignorance is not a defense. Regulators hold the organization accountable, not individual employees. 'We didn't know' doesn't excuse non-compliance. Real example: A company's marketing team used AI content generation tools without IT approval, processing customer data without proper GDPR consent. €50,000 fine despite leadership being unaware. Prevent this by conducting AI inventory across all departments, implementing AI procurement approval workflow, and training employees on AI governance policies.
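The prevention steps above can be sketched as a minimal inventory check. All field and function names here are hypothetical illustrations, not drawn from any regulation or tool:

```python
# Hypothetical minimal AI-system inventory record plus a check that flags
# systems touching consequential decisions without compliance approval.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    department: str
    vendor: str
    influences_consequential_decisions: bool  # hiring, lending, etc.
    approved_by_compliance: bool

def unapproved_systems(inventory: list) -> list:
    """Names of systems that need approval but lack it."""
    return [r.name for r in inventory
            if r.influences_consequential_decisions
            and not r.approved_by_compliance]

inventory = [
    AISystemRecord("ATS resume screener", "HR", "VendorX", True, False),
    AISystemRecord("AI copywriter", "Marketing", "VendorY", False, False),
]
print(unapproved_systems(inventory))  # ['ATS resume screener']
```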

Do these laws apply to startups and small businesses?

It depends on the law. NYC LL144 applies to all employers using AEDT in NYC, regardless of size—no small business exemption. Colorado AI Act has a carve-out: deployers with <50 employees are exempt from some requirements but still prohibited from algorithmic discrimination. EU AI Act has SME-specific provisions for compliance support, but core obligations still apply. GDPR applies to all organizations processing EU personal data. Bottom line: Size may reduce some requirements, but core prohibitions (discrimination, transparency) apply universally.

What's the difference between a bias audit and an impact assessment?

Bias Audit (NYC LL144): Narrow focus on detecting discriminatory outcomes in hiring/promotion AI through statistical analysis of impact ratios. Conducted by independent auditor. Annual frequency. Published summary required. Impact Assessment (Colorado/EU AI Act): Comprehensive evaluation of all risks from high-risk AI systems including accuracy, security, transparency, human oversight. Can be conducted internally. Updated when systems change materially. Internal documentation, not publicly published.
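The impact-ratio statistic at the heart of an LL144 bias audit divides each category's selection rate by the highest category's selection rate. A minimal sketch, with hypothetical applicant counts:

```python
# Sketch of the LL144-style impact ratio: each group's selection rate
# divided by the highest group's selection rate (sample counts are
# hypothetical).
def impact_ratios(selected: dict, applicants: dict) -> dict:
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 60, "group_b": 40},
    applicants={"group_a": 100, "group_b": 100},
)
# group_a's 0.60 rate is highest -> ratio 1.0; group_b -> 0.40/0.60 ≈ 0.67
```

A ratio well below 1.0 for any group is the disparate-impact signal the published audit summary must disclose.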

If my bias audit shows disparate impact, can I still use the AI?

Yes under NYC LL144, but with caveats. A disparate-impact finding doesn't automatically prohibit AEDT use—you must publish the audit results (including the disparate impact findings) and can continue using the tool. However, other legal risks remain: disparate impact can trigger discrimination claims under Title VII (federal), state fair employment laws may prohibit discriminatory tools, and a published bias audit carries reputational risk. If the audit shows significant bias, consult employment counsel about Title VII exposure and consider remediation before continued use.