AI Compliance in 2025: Every Law You Need to Know
EU AI Act, NYC LL144, Colorado AI Act — complete breakdown of requirements, deadlines, and how to prepare.
Major AI Regulations You Need to Know
AI regulation is accelerating globally. Here are the laws that matter most in 2025.
EU AI Act
Critical Priority
Key Requirements
- Risk classification of AI systems
- Prohibited practices (social scoring, emotion recognition at work)
- High-risk AI: conformity assessments, human oversight, transparency
- General-purpose AI: technical documentation, copyright compliance
Key Dates
- Feb 2, 2025: Prohibited AI practices ban takes effect
- Aug 2, 2025: GPAI model requirements apply
- Aug 2, 2026: High-risk AI system requirements apply
Who it affects: Any company deploying or developing AI systems used in the EU
NYC Local Law 144
High Priority
Key Requirements
- Annual bias audit by independent auditor (see the impact-ratio sketch below)
- Public posting of audit results
- Candidate notification at least 10 business days before use
- Alternative application process option
Key Dates
- Already Active: Full enforcement in effect
- Annual: Bias audits must be conducted yearly
Who it affects: Employers using AI/automated tools for hiring decisions in NYC
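To make the audit requirement concrete, here is a minimal sketch of the impact-ratio metric that LL144 bias audits are built around: a category's selection rate divided by the highest category's selection rate. The sample data and the 0.8 review threshold (the familiar four-fifths rule of thumb) are illustrative assumptions, not a substitute for an independent auditor's methodology.

```python
# Minimal sketch of the impact-ratio calculation used in LL144-style bias audits.
# Sample data and the 0.8 review threshold are illustrative assumptions only.
from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: iterable of (category, selected) pairs, e.g. ("group_a", True)."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for category, was_selected in outcomes:
        total[category] += 1
        selected[category] += int(was_selected)

    # Selection rate per category, then divide by the highest category's rate.
    rates = {c: selected[c] / total[c] for c in total}
    top_rate = max(rates.values())
    return {c: rate / top_rate for c, rate in rates.items()}

if __name__ == "__main__":
    sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
              + [("group_b", True)] * 25 + [("group_b", False)] * 75)
    for category, ratio in impact_ratios(sample).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{category}: impact ratio {ratio:.2f} ({flag})")
```

Note that LL144 requires the audit itself to be performed by an independent auditor and the results to be posted publicly; the sketch only shows the core metric.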
Colorado AI Act (SB24-205)
High Priority
Key Requirements
- Risk management policy and program
- Annual impact assessments (see the record sketch below)
- Consumer disclosure of AI use
- Human oversight for consequential decisions
Key Dates
- Feb 1, 2026: Law takes effect
- Ongoing: Annual impact assessments required
Who it affects: Deployers of high-risk AI systems affecting Colorado consumers
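To show what the annual assessment obligation might look like in practice, here is a minimal sketch of an impact-assessment record, assuming a deployer tracks assessments in code. The field names and the 365-day refresh check are illustrative assumptions drawn loosely from the topics SB24-205 asks deployers to document; they are not an official template.

```python
# A minimal sketch of an annual impact-assessment record for a high-risk AI system
# under the Colorado AI Act (SB24-205). Field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                      # intended use and deployment context
    consequential_decision: str       # e.g. employment, lending, housing
    data_categories: list[str]        # categories of data the system processes
    discrimination_risks: list[str]   # known or reasonably foreseeable risks
    mitigations: list[str]            # safeguards and human-oversight steps
    completed_on: date = field(default_factory=date.today)

    def review_due(self, today: date | None = None) -> bool:
        """Flag the assessment for refresh once it is roughly a year old."""
        today = today or date.today()
        return (today - self.completed_on).days >= 365
```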
ISO 42001
Medium Priority
Key Requirements
- AI management system (AIMS)
- AI risk assessment and treatment
- AI policy and objectives
- Continuous monitoring and improvement
Key Dates
- Now: Certification available
- 2025-2026: Expected to become enterprise requirement
Who it affects: Organizations developing, providing, or using AI systems
Illinois BIPA (AI Context)
High Priority
Key Requirements
- Written consent before collecting biometrics
- Retention and destruction policies
- No sale of biometric data
Key Dates
- Active: Applies to AI using biometric data
Who it affects: Companies using AI with facial recognition or biometric data
EU AI Act Risk Categories
The EU AI Act classifies AI systems into four risk categories. Your obligations depend on which category your AI falls into; a first-pass triage sketch follows the category lists below.
Prohibited AI (Banned)
These AI applications are completely banned under the EU AI Act
- Social scoring by governments
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions)
- Emotion recognition in workplace/education
- AI exploiting vulnerabilities of specific groups
- Predictive policing based on profiling
High-Risk AI
Requires conformity assessment, registration, and ongoing monitoring
- AI in hiring and recruitment
- Credit scoring and lending decisions
- Educational assessment and admissions
- Healthcare diagnosis assistance
- Law enforcement applications
- Border control and immigration
Limited Risk AI
Transparency obligations only
- Chatbots (must disclose AI interaction)
- Emotion recognition systems (outside the prohibited workplace/education contexts)
- Deepfake generators (must label content)
- Biometric categorization
Minimal Risk AI
No specific requirements (voluntary codes of conduct)
- AI-enabled video games
- Spam filters
- AI in manufacturing optimization
- Recommendation systems (non-manipulative)
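A first-pass triage of an AI inventory against these four categories might look like the sketch below. The keyword buckets simply mirror the examples listed above and are illustrative only; a real classification must be checked against the Act's annexes (e.g. Annex III for high-risk uses) with legal review.

```python
# Minimal first-pass triage of an AI system inventory against the four EU AI Act
# risk categories described above. Keyword buckets are illustrative assumptions.
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

PROHIBITED_TERMS = ("social scoring", "emotion recognition at work", "predictive policing")
HIGH_RISK_TERMS = ("hiring", "recruitment", "credit scoring", "admissions", "diagnosis")
LIMITED_RISK_TERMS = ("chatbot", "deepfake", "biometric categorization")

def triage(use_case: str) -> RiskCategory:
    """Return a first-pass risk category for a plain-language use-case description."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED_TERMS):
        return RiskCategory.PROHIBITED
    if any(term in text for term in HIGH_RISK_TERMS):
        return RiskCategory.HIGH
    if any(term in text for term in LIMITED_RISK_TERMS):
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL

if __name__ == "__main__":
    inventory = [
        "Resume-screening model used in hiring",
        "Customer-support chatbot",
        "Spam filter for internal email",
    ]
    for system in inventory:
        print(f"{system}: {triage(system).value}")
```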
Your 2025-2026 AI Compliance Timeline
- Conduct an AI system inventory
- Classify AI systems by risk level
- Implement the prohibited AI practices ban (EU AI Act, from Feb 2, 2025)
- Establish an AI governance framework
- Achieve GPAI model compliance (EU AI Act, from Aug 2, 2025)
- Prepare for the Colorado AI Act (effective Feb 1, 2026)
- Colorado AI Act takes effect (Feb 1, 2026)
- High-risk AI system requirements apply (EU AI Act, from Aug 2, 2026)
A small deadline-tracking sketch follows this timeline.
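The sketch below tracks the regulatory dates listed in this timeline and flags anything coming due. The dates are taken from the sections above; the 180-day alert window is an arbitrary illustrative choice.

```python
# Minimal deadline tracker for the regulatory dates listed in this article.
# The 180-day alert window is an illustrative assumption.
from datetime import date

DEADLINES = {
    date(2025, 2, 2): "EU AI Act: prohibited AI practices ban takes effect",
    date(2025, 8, 2): "EU AI Act: GPAI model requirements apply",
    date(2026, 2, 1): "Colorado AI Act takes effect",
    date(2026, 8, 2): "EU AI Act: high-risk AI system requirements apply",
}

def upcoming(today: date, window_days: int = 180) -> list[str]:
    """Return deadlines that fall within the next `window_days` days."""
    alerts = []
    for deadline, obligation in sorted(DEADLINES.items()):
        days_left = (deadline - today).days
        if 0 <= days_left <= window_days:
            alerts.append(f"{deadline.isoformat()} ({days_left} days): {obligation}")
    return alerts

if __name__ == "__main__":
    for line in upcoming(date.today()):
        print(line)
```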
Not Sure Which AI Laws Apply to You?
Take our free assessment to get a personalized AI compliance roadmap based on your specific situation.
Start ISO 42001 Assessment
Frequently Asked Questions
Does the EU AI Act apply to US companies?
Yes, if your AI systems are used in the EU or affect EU residents. Like GDPR, the EU AI Act has extraterritorial reach. If you deploy AI that impacts EU users, you must comply regardless of where your company is based.
What is a "high-risk" AI system?
High-risk AI systems are those used in sensitive areas like hiring, credit decisions, education, healthcare, law enforcement, and critical infrastructure. The EU AI Act provides a specific list in Annex III. These systems require conformity assessments, human oversight, and ongoing monitoring.
Do I need ISO 42001 certification?
ISO 42001 is voluntary, but it's becoming a de facto requirement for enterprise AI vendors. If you sell AI products to large enterprises or government, expect ISO 42001 to be requested in RFPs by 2025-2026. It also demonstrates due diligence for regulatory compliance.
What happens if I don't comply with AI regulations?
Penalties vary: EU AI Act fines reach up to €35M or 7% of global annual revenue, whichever is higher, for the most serious violations. NYC LL144 fines run $500-$1,500 per violation, with each day of non-compliant use counting as a separate violation. Beyond fines, non-compliance risks contract losses, reputational damage, and potential lawsuits from affected individuals.
How do I know if my AI system needs a bias audit?
If you use AI/automated tools for hiring decisions and have candidates in NYC, you need an annual bias audit under LL144. For the EU AI Act, high-risk AI systems in employment require conformity assessments that include bias testing.
What's the difference between EU AI Act and ISO 42001?
The EU AI Act is a law with mandatory requirements and penalties. ISO 42001 is a voluntary international standard for AI management systems. They're complementary: ISO 42001 provides a framework that helps demonstrate EU AI Act compliance, but certification alone doesn't guarantee legal compliance.
Get Ahead of AI Regulations
Don't wait for enforcement. Start your AI compliance journey today with our free assessment.
Ready to Get Compliant?
Start your compliance journey with HAIEC. Free assessment, automated evidence, audit-ready documentation.