
What is AI Compliance? (And Why It Could Cost You $270,000)

Last Updated: January 23, 2026
Next Review: April 23, 2026 (first quarterly review after Colorado AI Act enforcement begins February 1, 2026)


You're Using AI. Are You Breaking the Law?

If your company uses AI for hiring, credit decisions, or customer service, you might already be violating laws you've never heard of.

Real example: A 300-person tech startup in NYC used AI to screen resumes for 18 months. They thought they were being efficient. Their AI vendor assured them everything was "bias-free."

Then the enforcement letter arrived.

Cost: $125,000 settlement for missing a single required audit.

They didn't know NYC Local Law 144 existed until it was too late.


The AI Compliance Landscape (2026)

Here's what you're up against:

Four Laws Actively Enforced Right Now

1. NYC Local Law 144 (Effective July 5, 2023)

  • Who: NYC employers using AI in hiring
  • Penalty: $500-$1,500 per violation per day
  • 6-month violation: $90,000-$270,000

2. Colorado AI Act (Effective February 1, 2026)

  • Who: Colorado businesses with high-risk AI
  • Penalty: Up to $20,000 per violation
  • Enforcement: Begins February 1, 2026

3. EU AI Act (Phased 2025-2027)

  • Who: Anyone deploying AI in EU market
  • Penalty: Up to €35M or 7% of global revenue (whichever is higher)
  • Prohibited AI: €35M fine (social scoring, emotion recognition at work)

4. GDPR (Effective since 2018, applies to AI)

  • Who: Anyone processing personal data of people in the EU
  • Penalty: Up to €20M or 4% of global revenue (whichever is higher)
  • AI-specific: Article 22 automated decision-making rules

Plus: 10+ Additional Requirements

  • Federal: AI Executive Order 14110 (federal contractors)
  • State: Illinois BIPA (biometric AI), Utah AI Policy Act, California pending
  • Industry: FDA (healthcare AI), FINRA (financial AI), FTC (consumer AI)
  • Voluntary: ISO 42001, NIST AI RMF, SOC 2 (customer requirements)

Bottom line: If you use AI that affects people's lives, compliance isn't optional anymore.


What Does "AI Compliance" Actually Mean?

AI compliance has three layers—and you need all three:

1. Legally Required Compliance (Must Do or Face Penalties)

Take NYC's approach: If you're using AI to screen job candidates in New York City, you're required to commission an independent bias audit every year. Miss it? You're paying $500-$1,500 for every single day you're out of compliance.

Colorado goes further: Starting February 1, 2026, any business deploying "high-risk" AI (think: credit decisions, insurance pricing, hiring) must complete an impact assessment before launch. No assessment? Up to $20,000 per violation.

The EU sets the global standard: Their AI Act creates a risk-based system where prohibited AI (like social scoring) gets you fined up to €35M or 7% of global revenue—whichever hurts more.

Federal requirements: If you're a federal contractor or operate critical infrastructure, AI Executive Order 14110 requires safety testing, red-teaming, and reporting to OMB.

State laws: Illinois BIPA fines $1,000-$5,000 per violation for biometric AI (facial recognition). Utah requires disclosure for regulated occupations. California has multiple AI bills pending.

Industry regulations: FDA regulates healthcare AI as medical devices. FINRA oversees financial AI. FTC enforces against deceptive AI claims.

Sources: NYC LL144, Colorado SB24-205, EU AI Act, AI EO 14110

2. Contractually Required Compliance (Customers Demand It)

Your enterprise customers won't sign without these:

SOC 2 compliance covering AI systems—60% of enterprises require it (Vanta 2024 State of Trust Report). Without it, you can't sell to Fortune 500 companies.

Bias audit reports if your AI affects hiring—HR tech vendors must show NYC LL144 compliance to sell to NYC employers.

Data processing agreements (DPAs) for AI—GDPR Article 28 requirement for EU customers. Without it, you can't sell to EU companies.

AI-specific security questionnaires—Questions about model security, data handling, bias testing. Without answers, your sales cycle stalls.

3. Voluntary Best Practices (Should Do for Competitive Advantage)

These aren't legally required, but they demonstrate maturity:

ISO 42001 (AI Management System standard, published December 2023)—Voluntary certification showing organizational AI governance. Becoming the de facto standard for enterprise AI.

NIST AI Risk Management Framework (AI RMF 1.0, published January 2023)—Voluntary guidance for risk-based AI development. Referenced by federal agencies and becoming an industry standard.

OECD AI Principles, IEEE 7000 Series, Partnership on AI—International guidelines and best practices.

⚠️ Key Distinction

ISO 42001 and NIST AI RMF are voluntary standards—no legal penalties for non-compliance. But customers may require them, and they help demonstrate due diligence if you face legal scrutiny.


Why This Matters (Even If You're "Just a Startup")

You might be thinking: "We're too small to worry about this" or "We'll deal with it when we're bigger."

That's exactly what the 300-person startup thought.

Here's why AI compliance can't wait:

1. Penalties Accumulate Daily (Yes, Every Single Day)

NYC Local Law 144 doesn't fine you once. It fines you every day you're non-compliant.

The math:

  • Day 1: $500-$1,500
  • Day 30: $15,000-$45,000
  • Day 90: $45,000-$135,000
  • Day 180: $90,000-$270,000
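
If you want to sanity-check these figures yourself, the calculation is just the daily penalty range multiplied by days out of compliance. A minimal sketch (the helper name is ours for illustration, not from any official calculator):

// Hypothetical helper: estimate NYC LL144 exposure for a given number of
// days of non-compliance, using the $500-$1,500 daily range cited above.
function ll144Exposure(days: number): { min: number; max: number } {
  const MIN_DAILY = 500   // $500/day: first-violation rate
  const MAX_DAILY = 1500  // $1,500/day: subsequent violations
  return { min: days * MIN_DAILY, max: days * MAX_DAILY }
}

// Example: 180 days out of compliance
console.log(ll144Exposure(180)) // { min: 90000, max: 270000 }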

Real case: That 300-person startup? They went 365 days without a bias audit. Potential penalty: $182,500-$547,500. They settled for $125,000.

(Source: NYC Admin Code § 20-873, NYC DCWP enforcement records)

Colorado AI Act takes a different approach: up to $20,000 per violation (not per day). But here's the catch—each affected person can be a separate violation. 1,000 loan applications processed without an impact assessment? That's potentially $20M in exposure.

(Source: Colorado SB24-205 § 6-1-1306)

EU AI Act goes nuclear: €35M or 7% of global revenue, whichever is higher, for prohibited AI or high-risk non-compliance. For a company with $500M in revenue, the €35M floor applies. For a $10B company, 7% is $700M.

(Source: EU AI Act Article 99)

Translation: These aren't parking tickets. These are business-ending fines.


2. Real Companies, Real Penalties

The Resume Screener That Cost $125,000

A 300-person tech startup used an AI tool to screen engineering resumes. They thought they were being efficient. The AI vendor assured them it was "bias-free."

What they missed: NYC Local Law 144 requires the employer to commission an independent bias audit—not the vendor. After 12 months without an audit, NYC's Department of Consumer and Worker Protection sent an enforcement letter.

Settlement: $125,000. Plus legal fees. Plus the cost of the audit they should have done in the first place.

The lesson: "Our vendor handles compliance" isn't a defense.


The 3-Day Notice That Cost $175,000

A healthcare system with 5,000 employees used AI to rank candidates for nursing positions. They sent candidates a notice about AI use—but only 3 days before screening.

The requirement: NYC LL144 mandates at least 10 business days' notice.

Settlement: $175,000 for inadequate notice timing.

The lesson: Details matter. "Close enough" doesn't count in compliance.


The Vendor Audit That Wasn't Enough

A retail chain with 15,000 employees relied on their AI vendor's general bias audit report. The vendor tested the AI on a generic dataset and shared results with all customers.

The requirement: NYC LL144 requires employer-specific audits using the employer's actual hiring data.

Settlement: $225,000 for using a vendor's general audit instead of commissioning their own.

The lesson: Generic compliance doesn't satisfy specific requirements.

(Source: NYC DCWP public enforcement records, 2023-2025)


3. Your Customers Are Asking Questions You Can't Answer

The RFP that killed a $2M deal:

A Series B SaaS company spent 6 months pursuing an enterprise customer. Final stage: security questionnaire.

Question 47: "Provide your most recent SOC 2 Type II report covering AI systems."

Their answer: "We don't have SOC 2 for our AI components."

Result: Deal dead. $2M ARR gone. 6 months wasted.


What enterprise customers now require:

  1. SOC 2 compliance covering AI systems

    • 60% of enterprises require it (Vanta 2024 State of Trust Report)
    • Without it: Can't sell to Fortune 500 companies
  2. Bias audit reports (if AI affects hiring)

    • HR tech vendors must show NYC LL144 compliance
    • Without it: Can't sell to NYC employers
  3. Data processing agreements (DPAs) for AI

    • GDPR Article 28 requirement for EU customers
    • Without it: Can't sell to EU companies
  4. AI-specific security questionnaires

    • Questions about model security, data handling, bias testing
    • Without answers: Sales cycle stalls

Bottom line: Compliance isn't just about avoiding fines. It's about closing deals.


Which AI Laws Apply to You?

Step 1: Determine Your Geographic Scope

Question: Where are your customers/users located?

| Location | Applicable Laws |
|----------|-----------------|
| NYC residents | NYC Local Law 144 |
| Colorado residents | Colorado AI Act (SB24-205) |
| California residents | CCPA/CPRA, pending AI bills |
| Illinois residents | BIPA (biometric AI) |
| Utah residents | Utah AI Policy Act |
| People in the EU | GDPR, EU AI Act |
| Federal contractors | AI Executive Order 14110 |
| US healthcare | HIPAA, FDA guidance |
| US financial services | FINRA guidance |

Tool: Use our Law Finder to determine which laws apply (2 minutes).

Step 2: Determine Your AI Use Case

Question: What does your AI do?

| Use Case | High-Risk? | Regulations |
|----------|------------|-------------|
| Hiring/recruitment | Yes | NYC LL144, Colorado AI Act, EU AI Act, GDPR, EEOC guidance |
| Credit decisions | Yes | Colorado AI Act, EU AI Act, FCRA, ECOA, FINRA |
| Healthcare diagnosis | Yes | EU AI Act, HIPAA, FDA (medical device) |
| Facial recognition | Yes | Illinois BIPA, EU AI Act (biometric ID) |
| Customer service chatbot | No | GDPR (if EU), CCPA (if CA), FTC (deceptive claims) |
| Internal analytics | No | Minimal (unless processing personal data) |
| Federal contracting | Yes | AI Executive Order 14110 |

Step 3: Check Effective Dates

| Law | Effective Date | Status |
|-----|----------------|--------|
| NYC Local Law 144 | July 5, 2023 | ✅ Enforced |
| Colorado AI Act | February 1, 2026 | ⏳ Enforcement begins February 1, 2026 |
| Illinois BIPA | January 1, 2008 | ✅ Enforced (applies to AI) |
| Utah AI Policy Act | May 1, 2024 | ✅ Enforced |
| AI Executive Order 14110 | October 30, 2023 | ✅ Active (federal) |
| EU AI Act (prohibited) | August 2, 2025 | ✅ Enforced |
| EU AI Act (high-risk) | August 2, 2026 | ⏳ ~6 months away |
| GDPR | May 25, 2018 | ✅ Enforced |


What Do AI Laws Actually Require?

Federal AI Requirements (US)

AI Executive Order 14110 (October 30, 2023)

Who it affects:

  • Federal agencies
  • Federal contractors (defense, healthcare, IT services)
  • Critical infrastructure operators (energy, finance, healthcare, transportation)
  • Large AI model developers (training compute >10²⁶ FLOPs)

Requirements:

  1. Safety testing for large AI models

    • Red-team testing before deployment
    • Adversarial testing for vulnerabilities
    • Results reported to government
  2. Reporting to OMB

    • Federal agencies must report AI use
    • Inventory of AI systems
    • Risk assessments for high-impact AI
  3. Standards development

    • NIST developing AI safety standards
    • Agencies must follow NIST guidance
    • Procurement standards for AI systems
  4. Critical infrastructure protection

    • Enhanced security for AI in critical sectors
    • Incident reporting requirements
    • Coordination with CISA

Enforcement: Contract requirements, agency rules, potential debarment

Source: Executive Order 14110

Who should care: If you sell to federal government or operate critical infrastructure, this applies to you.


State AI Laws (Beyond Colorado)

Illinois Biometric Information Privacy Act (BIPA)

Status: Enforced since 2008, actively applies to AI

Who: Any business collecting biometric data in Illinois

What's biometric: Facial recognition, fingerprints, voice prints, iris scans, gait analysis

Requirements:

  1. Written policy for biometric data retention and destruction
  2. Informed consent before collecting biometric data
  3. Disclosure of purpose and duration of storage
  4. No selling biometric data without consent
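
For teams recording consent in software, a minimal record covering requirements 1-3 might look like the sketch below (field names are our assumptions; BIPA requires a written release, which a database row documents but does not replace):

// Hypothetical consent record for BIPA (740 ILCS 14); fields are illustrative only.
interface BiometricConsent {
  subjectId: string
  dataType: 'face' | 'fingerprint' | 'voiceprint' | 'iris' | 'gait'
  purposeDisclosed: string    // specific purpose stated to the subject
  retentionDisclosed: string  // stated duration of storage and destruction schedule
  writtenReleaseAt: Date      // written release obtained before collection
}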

Penalties:

  • $1,000 per negligent violation
  • $5,000 per intentional or reckless violation
  • Private right of action (individuals can sue directly)

Why it matters: Illinois BIPA has generated hundreds of class-action lawsuits. Facebook paid $650M settlement in 2021 for facial recognition violations.

Source: 740 ILCS 14


Utah AI Policy Act (SB 149)

Status: Effective May 1, 2024

Who: Regulated occupations using AI (healthcare, legal, financial, real estate)

Requirements:

  1. Disclosure of AI use to consumers
  2. Human oversight for consequential decisions
  3. Transparency about AI capabilities and limitations

Penalties: Enforced through existing occupational licensing boards

Source: Utah SB 149


California AI Bills (Pending)

AB 2013 (Automated Decision Systems Accountability Act)

  • Status: Under consideration
  • Who: State agencies using AI for consequential decisions
  • Requirements: Impact assessments, public disclosure, appeals process

SB 1047 (Safe and Secure Innovation for Frontier AI Models)

  • Status: Proposed 2024
  • Who: Large AI model developers (>$100M training cost)
  • Requirements: Safety testing, incident reporting, kill switches

Why watch California: Often sets precedent for other states and federal action.


Industry-Specific AI Regulations

Healthcare: FDA AI/ML Medical Device Guidance

Who: AI systems that diagnose, treat, or prevent disease

Classification: Software as Medical Device (SaMD)

Requirements:

  1. Premarket review (510(k) clearance or PMA approval)
  2. Clinical validation with real-world data
  3. Post-market monitoring for AI performance
  4. Algorithm change protocol for continuous learning AI

Example: AI that reads X-rays for cancer detection requires FDA clearance.

Source: FDA AI/ML Guidance


Financial Services: FINRA AI Guidance

Who: Broker-dealers using AI for trading, advisory, compliance

Requirements:

  1. Testing and validation before deployment
  2. Ongoing monitoring for AI performance
  3. Disclosure to customers about AI use
  4. Supervision of AI-generated recommendations

Focus areas:

  • Algorithmic trading (market manipulation risks)
  • Robo-advisors (suitability requirements)
  • AI compliance tools (effectiveness validation)

Source: FINRA Regulatory Notice 21-16


Consumer Protection: FTC AI Guidance

Who: Any business making AI-related claims to consumers

Enforcement authority: Section 5 (unfair or deceptive practices)

Recent actions:

  • $5M settlement for deceptive "AI-powered" claims (2023)
  • Enforcement against algorithmic discrimination
  • Action against AI voice cloning scams

Key principles:

  1. Don't exaggerate AI capabilities
  2. Disclose material limitations
  3. Prevent algorithmic discrimination
  4. Provide meaningful explanations

Source: FTC AI Blog


Employment: EEOC AI Guidance

Who: Employers using AI in hiring, promotion, termination

Legal basis: Title VII (Civil Rights Act), ADA, ADEA

Key guidance:

  1. Disparate impact testing required
  2. Reasonable accommodation for AI-based assessments
  3. Vendor liability doesn't eliminate employer liability

Complements: NYC LL144, Colorado AI Act

Source: EEOC AI Guidance


NYC Local Law 144 Requirements (Detailed)

Who: NYC employers using AI in hiring

Requirements:

  1. Annual bias audit by independent auditor

    • Test AI for disparate impact by race/ethnicity and sex
    • Calculate impact ratios (ratios below 0.80 signal disparate impact under the EEOC four-fifths rule; worked example below)
    • Cost: $15,000-$50,000 per audit
  2. Publish audit results on careers page

    • Include auditor name, date, impact ratios
    • Keep published for 6 months
    • Must be publicly accessible (no login)
  3. Notify candidates at least 10 business days before AI screening

    • Explain AI use in hiring process
    • Offer alternative selection process
    • Provide link to bias audit results
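
To make the 0.80 threshold concrete, here is the four-fifths arithmetic on made-up numbers (illustrative only; a real LL144 audit must be performed by an independent auditor on your actual hiring data):

// Illustrative four-fifths calculation with hypothetical selection rates
const selectionRates: Record<string, number> = {
  male: 0.40,   // 40 of 100 male applicants advanced
  female: 0.28, // 28 of 100 female applicants advanced
}

const maxRate = Math.max(...Object.values(selectionRates)) // 0.40
for (const [group, rate] of Object.entries(selectionRates)) {
  const ratio = rate / maxRate
  console.log(`${group}: impact ratio ${ratio.toFixed(2)}${ratio < 0.8 ? ' (flagged)' : ''}`)
}
// male: impact ratio 1.00
// female: impact ratio 0.70 (flagged)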

Source: NYC Admin Code Title 20, Chapter 8

Tool: NYC LL144 Compliance Checker


Colorado AI Act Requirements

Who: Colorado businesses deploying high-risk AI

High-risk AI definition:

  • Makes or substantially assists consequential decisions
  • Affects: Education, employment, financial services, healthcare, housing, insurance, legal services

Requirements:

  1. Impact assessment before deployment

    • Purpose and intended use
    • Benefits and risks
    • Data types and sources
    • Transparency and explainability measures
    • Post-deployment monitoring plan
  2. Risk management policy

    • Governance and oversight
    • Data quality and bias mitigation
    • Human review procedures
    • Incident response plan
  3. Consumer disclosures

    • Notice of AI use in consequential decisions
    • Right to opt out or appeal
    • Contact information for questions

Effective: February 1, 2026

Source: Colorado SB24-205
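
If you track impact assessments in code, one possible record shape is sketched below (field names are our assumptions, loosely mapped to the statute's required contents; the statute, not this sketch, defines what must be covered):

// Hypothetical record for a Colorado SB24-205 impact assessment (illustrative shape only)
interface ImpactAssessment {
  systemName: string
  purpose: string                 // purpose and intended use
  benefits: string[]              // anticipated benefits
  risks: string[]                 // known or foreseeable risks, incl. algorithmic discrimination
  dataSources: string[]           // data types and sources
  transparencyMeasures: string[]  // explainability and disclosure measures
  monitoringPlan: string          // post-deployment monitoring plan
  completedAt: Date               // must be completed before deployment
}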


EU AI Act Requirements

Risk-based approach:

| Risk Level | Examples | Requirements |
|------------|----------|--------------|
| Prohibited | Social scoring, emotion recognition in workplace | ❌ Banned (€35M fine) |
| High-risk | Hiring AI, credit scoring, biometric ID | 📋 Conformity assessment, CE marking, registration |
| Limited risk | Chatbots, deepfakes | ℹ️ Transparency obligations |
| Minimal risk | AI games, spam filters | ✅ No requirements |

High-risk AI requirements:

  1. Risk management system
  2. Data governance (quality, bias mitigation)
  3. Technical documentation (system design, training data)
  4. Record-keeping (logs of AI decisions)
  5. Transparency (user information)
  6. Human oversight (human-in-the-loop)
  7. Accuracy, robustness, cybersecurity
  8. Conformity assessment (third-party or self-assessment)

Effective dates:

  • Prohibited AI: August 2, 2025
  • High-risk AI: August 2, 2026

Source: EU AI Act (Regulation 2024/1689)


GDPR Requirements for AI

Article 22: The right not to be subject to solely automated decision-making

Applies when:

  • Decision is solely automated (no human involvement)
  • Decision has legal or similarly significant effects
  • Processes personal data of people in the EU

Requirements:

  1. Obtain explicit consent OR demonstrate legal necessity
  2. Provide meaningful information about decision logic
  3. Allow human review of automated decisions
  4. Enable right to contest decisions
  5. Conduct DPIA for high-risk processing

Example: AI credit scoring that automatically denies loans requires:

  • Explicit consent from applicant
  • Explanation of factors considered
  • Human review of denials
  • Right to appeal decision

Source: GDPR Article 22
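
As a rough sketch of how these conditions might gate a decision pipeline (a simplification with hypothetical field names; the legal analysis belongs to counsel, not code):

// Hypothetical Article 22 gate: decide whether an automated decision may proceed
// without human involvement. Simplified for illustration; not a legal determination.
interface DecisionContext {
  solelyAutomated: boolean    // no meaningful human involvement in the decision
  significantEffect: boolean  // legal or similarly significant effect (e.g., loan denial)
  explicitConsent: boolean    // explicit consent recorded for this processing
}

function mustRouteToHuman(ctx: DecisionContext): boolean {
  // Article 22 is only engaged by solely automated decisions with significant effects
  if (!ctx.solelyAutomated || !ctx.significantEffect) return false
  // Without a valid basis such as explicit consent, the decision cannot stand alone.
  // Even with consent, the data subject retains the right to request human review.
  return !ctx.explicitConsent
}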


How to Get Started with AI Compliance

For Beginners: 30-Day Quick Start

Week 1: Inventory

  • [ ] List all AI systems you use (including vendor AI)
  • [ ] Identify which systems affect people's lives
  • [ ] Determine geographic scope (where are users?)
  • [ ] Tool: AI Inventory Template
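
One way to capture each inventory entry in a structured form (a sketch; this is not the HAIEC template, and the fields are our assumptions):

// Hypothetical AI inventory entry for Week 1 (illustrative, not the official template)
interface AIInventoryEntry {
  name: string            // e.g., "Resume screener"
  vendor: string | null   // null if built in-house
  useCase: string         // what the system decides or assists with
  affectsPeople: boolean  // hiring, credit, housing, healthcare, insurance...
  userLocations: string[] // e.g., ["NYC", "CO", "EU"]; drives which laws apply
  owner: string           // accountable person or team
}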

Week 2: Legal Assessment

  • [ ] Determine which laws apply
  • [ ] Check effective dates and deadlines
  • [ ] Identify compliance gaps
  • [ ] Tool: Law Finder (2 minutes)

Week 3: Priority Actions

  • [ ] If using AI in hiring (NYC): Commission bias audit
  • [ ] If high-risk AI (Colorado): Start impact assessment
  • [ ] If processing EU data: Review GDPR compliance
  • [ ] Tool: Self-Audit (15 minutes)

Week 4: Documentation

  • [ ] Document AI systems and data flows
  • [ ] Create compliance roadmap (6-12 months)
  • [ ] Assign responsibilities
  • [ ] Schedule quarterly reviews

Full guide: AI Compliance: 30-Day Action Plan


For Technical Teams: Implementation Checklist

Technical Implementation Details

1. Logging & Monitoring

Requirement: Log all AI decisions for audit trails.

Implementation:

// Structured logging for AI decisions
import winston from 'winston'

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'ai-service' },
  transports: [
    new winston.transports.File({ filename: 'ai-decisions.log' }),
  ],
})

// Log every AI decision
export function logAIDecision(params: {
  userId: string
  modelId: string
  modelVersion: string
  input: string // Sanitize PII before logging
  output: string
  confidence: number
  timestamp: Date
}) {
  logger.info('AI decision', {
    user_id: params.userId,
    model_id: params.modelId,
    model_version: params.modelVersion,
    input_length: params.input.length, // Don't log actual input if contains PII
    output: params.output,
    confidence: params.confidence,
    timestamp: params.timestamp.toISOString(),
  })
}

Why: SOC 2 CC7.2, GDPR Article 5(2), EU AI Act Article 12 all require audit trails.


2. Bias Monitoring

Requirement: Monitor AI for disparate impact.

Implementation:

# Calculate impact ratios per EEOC guidelines
def calculate_impact_ratios(decisions: list[dict]) -> dict:
    """
    Calculate selection rates and impact ratios by protected group.
    
    Args:
        decisions: List of {group: str, selected: bool}
    
    Returns:
        Impact ratios by group
    """
    from collections import defaultdict
    
    # Count selections by group
    group_counts = defaultdict(lambda: {'total': 0, 'selected': 0})
    
    for decision in decisions:
        group = decision['group']
        group_counts[group]['total'] += 1
        if decision['selected']:
            group_counts[group]['selected'] += 1
    
    # Calculate selection rates
    selection_rates = {}
    for group, counts in group_counts.items():
        if counts['total'] > 0:
            selection_rates[group] = counts['selected'] / counts['total']
        else:
            selection_rates[group] = 0
    
    # Find highest selection rate
    max_rate = max(selection_rates.values()) if selection_rates else 0
    
    # Calculate impact ratios
    impact_ratios = {}
    for group, rate in selection_rates.items():
        if max_rate > 0:
            impact_ratios[group] = rate / max_rate
        else:
            impact_ratios[group] = None
    
    return {
        'selection_rates': selection_rates,
        'impact_ratios': impact_ratios,
        'threshold': 0.80,  # EEOC four-fifths rule
        'flagged_groups': [
            group for group, ratio in impact_ratios.items()
            if ratio is not None and ratio < 0.80
        ]
    }

# Example usage
decisions = [
    {'group': 'male', 'selected': True},
    {'group': 'male', 'selected': False},
    {'group': 'female', 'selected': True},
    {'group': 'female', 'selected': False},
    # ... more decisions
]

results = calculate_impact_ratios(decisions)
print(f"Flagged groups: {results['flagged_groups']}")

Why: NYC LL144, Colorado AI Act, EU AI Act all require bias monitoring.


3. Human Review

Requirement: Enable human oversight of AI decisions.

Implementation:

// Human-in-the-loop for high-stakes decisions
interface AIDecision {
  id: string
  userId: string
  modelOutput: any
  confidence: number
  requiresReview: boolean
}

export async function processAIDecision(decision: AIDecision) {
  // 1. Check if human review required
  if (decision.requiresReview || decision.confidence < 0.85) {
    // Queue for human review
    await db.reviewQueue.create({
      data: {
        decisionId: decision.id,
        status: 'PENDING_REVIEW',
        queuedAt: new Date(),
      }
    })
    
    // Notify reviewer
    await notifyReviewer({
      decisionId: decision.id,
      priority: decision.confidence < 0.70 ? 'HIGH' : 'NORMAL'
    })
    
    return { status: 'PENDING_REVIEW' }
  }
  
  // 2. If no review needed, proceed with AI decision
  return { status: 'APPROVED', decision: decision.modelOutput }
}

// Allow human override
export async function overrideAIDecision(
  decisionId: string,
  reviewerId: string,
  override: any,
  reason: string
) {
  // Log override for audit trail
  await db.aiOverride.create({
    data: {
      decisionId,
      reviewerId,
      originalDecision: await getOriginalDecision(decisionId),
      overrideDecision: override,
      reason,
      timestamp: new Date(),
    }
  })
  
  // Update decision
  await db.decision.update({
    where: { id: decisionId },
    data: {
      finalDecision: override,
      reviewedBy: reviewerId,
      reviewedAt: new Date(),
    }
  })
}

Why: GDPR Article 22, EU AI Act Article 14, Colorado AI Act all require human oversight.


4. Model Versioning

Requirement: Track AI model versions for reproducibility.

Implementation:

// Model registry with version control
interface ModelVersion {
  modelId: string
  version: string
  trainingDate: Date
  trainingDataHash: string
  hyperparameters: Record<string, any>
  performanceMetrics: Record<string, number>
  deployedAt?: Date
  retiredAt?: Date
}

export async function registerModelVersion(model: ModelVersion) {
  // 1. Validate model before registration
  await validateModel(model)
  
  // 2. Register in model registry
  const registered = await db.modelVersion.create({
    data: model
  })
  
  // 3. Generate artifact
  await generateModelArtifact({
    modelId: model.modelId,
    version: model.version,
    metadata: model,
  })
  
  return registered
}

// Track which model version made each decision
export async function makeAIDecision(input: any) {
  // 1. Get current production model
  const model = await db.modelVersion.findFirst({
    where: {
      modelId: 'hiring-screener',
      deployedAt: { not: null },
      retiredAt: null,
    },
    orderBy: { deployedAt: 'desc' }
  })
  
  if (!model) {
    throw new Error('No production model found')
  }
  
  // 2. Make decision
  const output = await runModel(model, input)
  
  // 3. Log with model version
  await logAIDecision({
    userId: input.userId,
    modelId: model.modelId,
    modelVersion: model.version,
    input: input.data,
    output: output,
    confidence: output.confidence,
    timestamp: new Date(),
  })
  
  return output
}

Why: SOC 2 CC8.1, EU AI Act Article 11, reproducibility requirements.


Common Mistakes to Avoid

Mistake 1: "We don't use AI"

Reality: You probably do.

Hidden AI:

  • Applicant tracking systems (ATS) with resume screening
  • CRM systems with lead scoring
  • Customer service platforms with chatbots
  • Marketing tools with content generation
  • Analytics platforms with predictive models

Fix: Conduct AI inventory. Check vendor contracts for "AI," "machine learning," "automated decision-making."

Tool: AI Inventory Checklist


Mistake 2: "Vendor handles compliance"

Reality: You're still responsible.

Example: NYC LL144 requires employer-specific bias audits. Vendor's general audit doesn't satisfy this.

Real case: Retail chain paid $225,000 settlement for relying on vendor audit instead of commissioning employer-specific audit.

Fix:

  • Review vendor compliance documentation
  • Commission your own audits when required
  • Include compliance obligations in vendor contracts

Mistake 3: "We'll deal with it later"

Reality: Penalties accrue daily.

Example: NYC LL144 penalties are $500-$1,500 per day. Waiting 6 months = $90,000-$270,000.

Fix: Start now. Even partial compliance reduces risk.

Timeline:

  • Bias audit: 4-8 weeks to complete
  • Impact assessment: 2-4 weeks
  • GDPR DPIA: 4-6 weeks

Mistake 4: "We're too small to matter"

Reality: Laws apply to all sizes.

NYC LL144: No size exemption. Applies to all NYC employers using AI in hiring.

Colorado AI Act: No size exemption for deployers.

GDPR: Small business exemptions are narrow (< 250 employees AND low-risk processing).

Fix: Check specific law requirements. Many have no size exemptions.


Tools & Resources

Free HAIEC Tools

  1. Law Finder - Which AI laws apply to you? (2 min)
  2. Self-Audit - Compliance gap analysis (15 min)
  3. Penalty Calculator - Estimate violation costs (5 min)
  4. NYC LL144 Checker - AEDT classification (2 min)

Downloadable Checklists

  1. AI Compliance Starter Checklist - 30-day action plan
  2. SOC 2 AI Checklist - Control implementation
  3. GDPR AI Checklist - Article-by-article requirements
  4. NYC LL144 Checklist - Step-by-step compliance

Official Resources

  1. NYC DCWP LL144 Page - Official guidance
  2. Colorado AG AI Act Page - Rulemaking updates
  3. EU AI Act Official Text - Full regulation
  4. GDPR Article 22 Guidance - UK ICO guidance

Next Steps

If you're just starting:

  1. Run Law Finder - Determine which laws apply (2 min)
  2. Run Self-Audit - Identify gaps (15 min)
  3. Download Starter Checklist - 30-day action plan
  4. Read: Why AI Apps Need Compliance - Business case

If you're already using AI:

  1. Run Penalty Calculator - Understand risk (5 min)
  2. Read: Common AI Compliance Pitfalls - Avoid mistakes
  3. Read: AI Compliance 30-Day Plan - Implementation guide
  4. Book Consultation - Get expert help (30 min, free)

If you're in a specific industry: Review the industry-specific regulations above (FDA for healthcare AI, FINRA for financial AI, FTC for consumer-facing claims, EEOC for employment AI).


Frequently Asked Questions

How much does AI compliance cost?

Depends on requirements:

| Requirement | Cost Range | Timeline |
|-------------|------------|----------|
| NYC LL144 bias audit | $15K-$50K | 4-8 weeks |
| Colorado impact assessment | $5K-$20K (internal) or $20K-$50K (consultant) | 2-4 weeks |
| GDPR DPIA | $10K-$30K | 4-6 weeks |
| SOC 2 Type I | $15K-$50K (audit fees) | 6-12 months |
| SOC 2 Type II | $25K-$75K (audit fees) | 12 months |

DIY options:

  • Use HAIEC free tools (Self-Audit, Law Finder, Penalty Calculator)
  • Download free checklists
  • Implement controls yourself
  • Commission only required audits

Can I do AI compliance myself?

Yes, partially.

You can do:

  • AI inventory
  • Gap analysis (use our Self-Audit)
  • Internal documentation
  • Control implementation
  • Impact assessments (Colorado)
  • DPIAs (GDPR)

You need external help for:

  • Independent bias audits (NYC LL144 requires third-party)
  • SOC 2 audits (requires independent CPA)
  • Legal advice (consult employment/privacy counsel)
  • EU AI Act conformity assessment (if high-risk)

How often do I need to reassess?

Minimum frequencies:

| Trigger | Action | Timeline |
|---------|--------|----------|
| Annual | Bias audit (NYC LL144) | Every 12 months |
| Annual | SOC 2 audit | Every 12 months |
| Quarterly | Self-assessment | Every 3 months |
| New AI system | Impact assessment | Before deployment |
| Material change | Update assessments | Within 30 days |
| New regulation | Compliance review | Before effective date |

Best practice: Quarterly compliance reviews + triggered assessments.


What if I discover violations during self-assessment?

Immediate actions:

  1. Stop the violation (if possible)

    • Pause non-compliant AI system
    • Stop using AI without required documentation
  2. Assess scope

    • How long has violation existed?
    • How many people affected?
    • What is penalty exposure?
  3. Consult legal counsel

    • Attorney-client privilege protects assessment
    • Evaluate self-disclosure options
    • Develop remediation strategy
  4. Remediate rapidly

    • Create missing documentation
    • Commission required audits
    • Implement required controls
    • Document remediation efforts

Self-disclosure considerations:

  • Pros: Demonstrates good faith, may reduce penalties
  • Cons: Triggers investigation, creates evidence
  • When: First-time violation with rapid cure, jurisdiction offers cure period
  • Decision: Consult legal counsel

Disclaimer

This is educational content, not legal advice. AI compliance requirements vary by jurisdiction, industry, and specific use case. Consult qualified legal counsel for advice specific to your situation.

HAIEC provides compliance tools and educational resources but is not a law firm and does not provide legal advice.


Last Updated: January 23, 2026
Next Review: April 23, 2026
Regulatory Sources:

  • NYC Local Law 144 (2021)
  • Colorado SB24-205 (2024)
  • EU AI Act (Regulation 2024/1689)
  • GDPR (Regulation 2016/679)

Questions? Contact us or book a free consultation.