Updated March 2026

AI Fines and Lawsuits Are Already Happening

$17M (FTC) · $1M (FTC) · $500–$1,500/day (NYC) · $200k (Texas)

Most companies get fined or sued for predictable, preventable reasons.

9 Total Events · 4 Fines Issued · 2 Active Lawsuits · 3 Penalty Exposures

Enforcement Events

Cleo AI

Fine · Deception / Consumer Protection · 2025
$17.0M
Final

FTC fined Cleo AI for making deceptive claims about its AI-powered financial advice capabilities and misleading users about potential financial outcomes.

Trigger
Misleading users about financial outcomes and AI capabilities
HAIEC Prevention
PARTIAL
Prevention Engines
Compliance Wizard, Evidence Generation
FTC · United States

accessiBe

Fine · Deceptive AI Claims · 2025
$1.0M
Final

FTC enforcement for making unsubstantiated claims that AI could automatically ensure website accessibility compliance without proper validation or testing.

Trigger
Claimed AI could ensure WCAG compliance without validation
HAIEC Prevention
PARTIAL
Prevention Engines
Static Engine, Runtime Testing
FTC · United States

DoNotPay

Fine · Misrepresentation ("AI Lawyer" Claims) · 2025
$0.2M
Final

FTC action against DoNotPay for falsely claiming its AI could replace lawyers without validating the accuracy or legal soundness of its outputs.

Trigger
No validation of legal capability claims
HAIEC Prevention
YES
Prevention Engines
Static Engine, Runtime Testing, Compliance Wizard
FTC · United States

Replika (Luka Inc.)

Fine · Privacy / Data Processing / Age Safeguards · 2025
$5.3M
Final

Italian DPA fined Replika for unlawful processing of personal data in AI chatbot interactions and inadequate age verification mechanisms.

Trigger
Unlawful personal data use and weak age verification
HAIEC Prevention
PARTIAL
Prevention Engines
Static Engine, Data Leakage Detection
Italian DPA · Italy

Workday (Mobley v. Workday)

Lawsuit · AI Hiring Discrimination · 2025-2026
Pending
Ongoing

Active federal lawsuit alleging Workday's AI screening rejected candidates based on race, age, and disability. The court has allowed it to proceed as a collective action, making it a live test of federal liability for AI hiring discrimination.

Trigger
Automated filtering before human review, lack of explainability
HAIEC Prevention
YES
Prevention Engines
Static Engine, Runtime Testing, NYC LL144 Module, Audit Orchestrator
US District Court · United States

iTutorGroup (EEOC v. iTutorGroup)

Lawsuit · Age Discrimination in AI Hiring · 2023
$0.4M
Settled

EEOC enforcement action (settled) against iTutorGroup for an AI system that automatically rejected applicants over 55 years old, violating the Age Discrimination in Employment Act.

Trigger
AI system rejected older applicants automatically
HAIEC Prevention
YES
Prevention Engines
Static Engine, Runtime Testing, NYC LL144 Module
EEOC · United States

NYC Local Law 144

Exposure · AI Hiring Compliance · 2023-Present
$1,500/day
Active

NYC Local Law 144 penalty exposure: $500-$1,500 per day for using AI hiring tools without bias audit, public disclosure, or candidate notice.

Trigger
No bias audit, no public disclosure, no candidate notice
HAIEC Prevention
YES
Prevention Engines
NYC LL144 Module, Audit Orchestrator, Evidence Generation
NYC DCWP · New York City
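To show how per-day exposure compounds, here is a rough back-of-the-envelope sketch using the $500–$1,500/day figures quoted above. The split of $500 for a first violation and up to $1,500 for each subsequent day is an assumption for illustration only, not legal advice.

```python
# Worst-case cumulative NYC Local Law 144 exposure for one non-compliant
# AI hiring tool. Assumes (illustratively) $500 for the first violation and
# up to $1,500 for each subsequent one, with each day of use counting as a
# separate violation.
def ll144_worst_case(days: int, first: int = 500, subsequent: int = 1_500) -> int:
    """Worst-case cumulative penalty after `days` days of non-compliant use."""
    if days <= 0:
        return 0
    return first + (days - 1) * subsequent

print(ll144_worst_case(30))   # roughly one month of use: 44000
print(ll144_worst_case(365))  # one year of use: 546500
```

The point the sketch makes: because each day counts as a new violation, exposure scales with usage, not with a single fixed fine.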

NY Algorithmic Pricing Law

Exposure · Algorithmic Pricing Disclosure · 2024-Present
$1,000/violation
Active

New York State penalty exposure: $1,000 per violation for failure to disclose use of algorithmic pricing systems.

Trigger
No disclosure of algorithmic pricing
HAIEC Prevention
PARTIAL
Prevention Engines
Compliance Wizard
NY State Legislature · New York State

Texas AI Law

Exposure · Prohibited AI Practices · 2024-Present
$0.2M
Active

Texas AI law penalty exposure: $10k-$12k for curable violations, $80k-$200k for uncurable violations of prohibited AI practices.

Trigger
Prohibited AI practices
HAIEC Prevention
PARTIAL
Prevention Engines
Static Engine, Compliance Wizard
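The Texas tiers quoted above ($10k–$12k per curable violation, $80k–$200k per uncurable one) can be turned into a quick exposure range. Aggregating per violation count is an illustrative assumption, not legal advice.

```python
# Illustrative best/worst-case aggregation of the Texas AI law penalty tiers.
CURABLE = (10_000, 12_000)     # (low, high) per curable violation
UNCURABLE = (80_000, 200_000)  # (low, high) per uncurable violation

def texas_exposure(curable: int, uncurable: int) -> tuple[int, int]:
    """(best-case, worst-case) total exposure for the given violation counts."""
    low = curable * CURABLE[0] + uncurable * UNCURABLE[0]
    high = curable * CURABLE[1] + uncurable * UNCURABLE[1]
    return low, high

print(texas_exposure(3, 1))  # three curable + one uncurable: (110000, 236000)
```

Even one uncurable violation dominates the total, which is why the "curable vs. uncurable" distinction matters more than the violation count.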

Federal AI Hiring Risk

Federal law already applies to AI hiring: companies are being sued before any AI-specific fines even exist.

Key Statement

"AI does not reduce liability. It amplifies it."

The Workday and iTutorGroup cases demonstrate that AI hiring systems face immediate federal liability under Title VII and the Age Discrimination in Employment Act, without waiting for new AI-specific regulations.

Mobley v. Workday

Active federal lawsuit. Collective action allowed. Alleges AI screening discriminates by race, age, and disability.

EEOC v. iTutorGroup

Settled enforcement. AI automatically rejected applicants over 55. Violated the Age Discrimination in Employment Act.

Enforcement Patterns

  • Most AI enforcement is triggered by data misuse and unsupported claims, not model architecture
  • Federal lawsuits are emerging before regulatory fines exist
  • Hiring AI is the highest immediate legal risk surface
  • Per-violation penalties scale with usage (NYC LL144: $500–$1,500/day)
  • Small and mid-size companies are already being targeted (DoNotPay: $193k)

AI does not reduce liability. It amplifies it.

Why Companies Actually Get Fined or Sued

Each failure mode maps to real enforcement cases above.

No Audit Trail

AI systems deployed without evidence of testing, validation, or decision logging

Mapped Cases
Cleo AI, DoNotPay
HAIEC Prevention
Static Engine, Evidence Generation, Audit Orchestrator

No Bias Testing

Hiring AI used without bias audits or disparate impact analysis

Mapped Cases
Workday, iTutorGroup, NYC LL144
HAIEC Prevention
NYC LL144 Module, Runtime Testing, Static Engine

Automated Filtering Without Oversight

AI makes final decisions without human review or explainability

Mapped Cases
Workday, iTutorGroup
HAIEC Prevention
Static Engine (R2, R3), Runtime Testing, Decision Pipeline

Unsupported AI Claims

Marketing AI capabilities without validation or evidence

Mapped Cases
DoNotPay, accessiBe, Cleo AI
HAIEC Prevention
Compliance Wizard, Evidence Generation

No Disclosure

Failure to notify users/candidates that AI is being used

Mapped Cases
NYC LL144, NY Algorithmic Pricing
HAIEC Prevention
NYC LL144 Module, Compliance Wizard

Data Misuse

Unlawful collection, processing, or transfer of personal data

Mapped Cases
Replika
HAIEC Prevention
Static Engine (R7), Data Leakage Detection

HAIEC Prevention Layer

Static Engine

Prevents
  • No audit trail
  • Automated filtering
  • Data misuse
Rules: R1-R14

Runtime Testing

Prevents
  • No bias testing
  • Unsupported claims
  • Automated filtering
Rules: SP001-SP060

NYC LL144 Module

Prevents
  • No bias testing
  • No disclosure
  • Hiring discrimination
Rules: Bias audit + disclosure

Compliance Wizard

Prevents
  • Unsupported claims
  • No disclosure
Rules: Evidence generation

Audit Orchestrator

Prevents
  • No audit trail
  • Federal hiring risk
Rules: Unified evidence

Decision Pipeline

Prevents
  • Automated filtering
Rules: Human oversight
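The engine-to-failure-mode mapping above can be sketched as a small data model. The engine and failure-mode names come from this page; the dict structure and lookup helper are purely illustrative, not HAIEC's actual implementation.

```python
# Illustrative model of which prevention engine covers which failure mode,
# as listed in the "HAIEC Prevention Layer" section. Structure is a sketch.
PREVENTION = {
    "Static Engine":     {"no audit trail", "automated filtering", "data misuse"},
    "Runtime Testing":   {"no bias testing", "unsupported claims", "automated filtering"},
    "NYC LL144 Module":  {"no bias testing", "no disclosure", "hiring discrimination"},
    "Compliance Wizard": {"unsupported claims", "no disclosure"},
    "Audit Orchestrator": {"no audit trail", "federal hiring risk"},
    "Decision Pipeline": {"automated filtering"},
}

def engines_for(failure_mode: str) -> list[str]:
    """All engines that cover a given failure mode, alphabetically."""
    return sorted(e for e, modes in PREVENTION.items() if failure_mode in modes)

print(engines_for("automated filtering"))
# ['Decision Pipeline', 'Runtime Testing', 'Static Engine']
```

A lookup like this makes the overlap explicit: high-risk failure modes such as automated filtering are covered by multiple engines, while narrower ones map to a single engine.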

Don't Become the Next Case Study

Federal lawsuits and FTC enforcement are already happening. Check your exposure before regulators do.