AI Fines and Lawsuits Are Already Happening
Most companies that get fined or sued over AI fail for predictable, preventable reasons.
Enforcement Events
Cleo AI
The FTC fined Cleo AI $17 million for deceptive claims about its AI-powered cash advance service, including misleading users about how much money they could get and how fast.
accessiBe
The FTC ordered accessiBe to pay $1 million for unsubstantiated claims that its AI could automatically make any website compliant with accessibility guidelines, without validation or testing to back those claims.
DoNotPay
The FTC took action against DoNotPay, which settled for $193,000, for falsely claiming its AI "robot lawyer" could replace human lawyers without validating the accuracy or legal soundness of its outputs.
Replika (Luka Inc.)
Italy's data protection authority fined Replika's developer, Luka Inc., €5 million for unlawful processing of personal data in AI chatbot interactions and inadequate age verification mechanisms.
Workday (Mobley v. Workday)
Active federal lawsuit alleging that Workday's AI screening rejected candidates based on race, age, and disability. The court has allowed a collective action to proceed, making this live federal liability for AI hiring discrimination.
iTutorGroup (EEOC v. iTutorGroup)
Settled EEOC enforcement action against iTutorGroup, whose AI system automatically rejected female applicants aged 55 and over and male applicants aged 60 and over, violating the Age Discrimination in Employment Act; iTutorGroup paid $365,000.
NYC Local Law 144
NYC Local Law 144 penalty exposure: $500-$1,500 per day for using AI hiring tools without a bias audit, public disclosure of the results, or candidate notice.
NY Algorithmic Pricing Law
New York State penalty exposure: $1,000 per violation for failure to disclose use of algorithmic pricing systems.
Texas AI Law
Texas AI law penalty exposure: $10,000-$12,000 per curable violation and $80,000-$200,000 per uncurable violation of prohibited AI practices.
Federal AI Hiring Risk
Federal law already applies to AI hiring. Companies are being sued BEFORE AI-specific fines even exist.
Key Statement
"AI does not reduce liability. It amplifies it."
The Workday and iTutorGroup cases demonstrate that AI hiring systems face immediate federal liability under Title VII, the Americans with Disabilities Act, and the Age Discrimination in Employment Act, without waiting for new AI-specific regulations.
Mobley v. Workday
Active federal lawsuit. Collective action allowed. Alleges AI screening discriminates by race, age, and disability.
EEOC v. iTutorGroup
Settled EEOC enforcement. The AI automatically rejected female applicants 55 and over and male applicants 60 and over, violating the Age Discrimination in Employment Act.
Enforcement Patterns
Most AI enforcement is triggered by data misuse and unsupported claims, not model architecture
Federal lawsuits are emerging before regulatory fines exist
Hiring AI is the highest immediate legal risk surface
Per-violation penalties scale with usage (NYC LL144: $500-$1,500 per day; see the exposure sketch below)
Small and mid-size companies are already being targeted (DoNotPay: $193k)
AI does not reduce liability. It amplifies it.
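How fast does per-day exposure compound? A back-of-envelope sketch in Python, assuming LL144's published schedule ($500 for a first violation, then $500-$1,500 for each subsequent daily violation); how violations are actually counted and assessed can differ:

```python
# Back-of-envelope NYC LL144 exposure estimate. Assumes the published
# schedule: $500 for the first violation, then $500-$1,500 for each
# subsequent day of non-compliant AEDT use. Illustrative only.

def ll144_exposure(days: int) -> tuple[int, int]:
    """Return (min, max) dollar exposure for `days` of non-compliant use."""
    if days <= 0:
        return 0, 0
    return 500 + (days - 1) * 500, 500 + (days - 1) * 1500

low, high = ll144_exposure(90)  # one quarter of daily screening
print(f"90-day exposure: ${low:,} to ${high:,}")  # $45,000 to $134,000
```

A single non-compliant screening tool left running for one quarter already carries six-figure worst-case exposure.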
Why Companies Actually Get Fined or Sued
Each failure mode maps to real enforcement cases above.
No Audit Trail
AI systems deployed without evidence of testing, validation, or decision logging
No Bias Testing
Hiring AI used without bias audits or disparate impact analysis (see the impact-ratio sketch after this list)
Automated Filtering Without Oversight
AI makes final decisions without human review or explainability
Unsupported AI Claims
Marketing AI capabilities without validation or evidence
No Disclosure
Failure to notify users/candidates that AI is being used
Data Misuse
Unlawful collection, processing, or transfer of personal data
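The bias-testing gap is the most mechanical one to close. The sketch below shows the impact-ratio calculation at the core of an NYC LL144-style bias audit: per-group selection rates compared against the most-selected group, flagged against the EEOC's four-fifths (0.8) rule of thumb. The numbers are made up for illustration:

```python
# Illustrative disparate-impact check: the impact-ratio calculation at the
# heart of an NYC LL144 bias audit. Data below is made up for the example.

screened = {   # group -> (candidates screened, candidates advanced)
    "group_a": (1000, 220),
    "group_b": (800, 120),
}

rates = {g: advanced / total for g, (total, advanced) in screened.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    # EEOC four-fifths rule of thumb: ratios under 0.8 suggest adverse impact.
    status = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} [{status}]")
```

Real audits segment by sex, race/ethnicity, and their intersections, but the arithmetic is the same.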
HAIEC Prevention Layer
Static Engine
• No audit trail
• Automated filtering
• Data misuse
Runtime Testing
• No bias testing
• Unsupported claims
• Automated filtering
NYC LL144 Module
• No bias testing
• No disclosure
• Hiring discrimination
Compliance Wizard
• Unsupported claims
• No disclosure
Audit Orchestrator
• No audit trail (decision-logging sketch below)
• Federal hiring risk
Decision Pipeline
• Automated filtering
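As referenced above, here is what the decision logging behind the Audit Orchestrator's "no audit trail" mapping looks like in its simplest form. This is an illustrative pattern, not HAIEC's implementation; every name in it is hypothetical:

```python
# Minimal decision-logging sketch (illustrative pattern, not HAIEC's
# implementation). Each automated decision is appended to a JSONL log
# with its model version, hashed inputs, output, and review status.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 human_reviewed: bool, path: str = "decision_log.jsonl") -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash raw inputs so the log proves what was decided on
        # without storing personal data verbatim.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewed": human_reviewed,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a screening decision that a human later reviews.
log_decision("resume-screen-v3.2", {"candidate_id": 1017, "score": 0.41},
             "advance_to_interview", human_reviewed=True)
```

Hashing the inputs keeps personal data out of the log while still proving what each decision was based on.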
Don't Become the Next Case Study
Federal lawsuits and FTC enforcement are already happening. Check your exposure before regulators do.