SAMPLE DOCUMENT — This is a demonstration of HAIEC's Colorado AI Act Impact Assessment template. All data is fictional.

High-Risk AI System Impact Assessment

Colorado AI Act (SB24-205) — CRS §6-1-1703

Organization: Meridian Financial Services
AI System: TalentFlow AI v3.2
Assessment Date: January 15, 2026
Next Review: January 15, 2027
Role: Deployer
Industry: Financial Services

1. AI System Description

System Name & Version
TalentFlow AI (v3.2)
System Description
TalentFlow AI is a machine learning-based candidate screening and ranking system used in the hiring process for financial analyst, loan officer, and customer service positions. The system analyzes resumes, application responses, and assessment scores to generate candidate rankings and hiring recommendations.
Purpose
To improve hiring efficiency and consistency by providing data-driven candidate rankings that reduce time-to-hire from 45 days to 18 days while maintaining quality-of-hire metrics.
Intended Use
Used as a screening tool in the initial phase of hiring. Human recruiters review all AI-generated rankings before making interview decisions. The system does not make autonomous hiring decisions.
Intended Beneficiaries
Job applicants (faster response times, consistent evaluation criteria), hiring managers (reduced screening burden, data-driven insights), and the organization (improved hiring efficiency and reduced bias in initial screening).

2. Consequential Decision Categories

Employment or employment opportunity. TalentFlow AI screens and ranks job candidates, so it is a substantial factor in decisions affecting an employment opportunity, a consequential decision category under CRS §6-1-1701.

3. Training Data

Data Description
The model was trained on 847,000 anonymized hiring records from 2018–2024, including resume text, structured application data, assessment scores, and hiring outcomes. Data was sourced from 12 financial services organizations across the United States.

Data Sources

Anonymized hiring records (2018–2024) from 12 financial services organizations across the United States, comprising resume text, structured application data, assessment scores, and hiring outcomes.

4. Known Limitations & Biases

Known Limitations

Training data is predominantly from large financial institutions; the model may not generalize well to credit unions or community banks with different hiring patterns.

System performance degrades for roles with fewer than 50 historical hiring records in the training data.

Resume parsing accuracy drops to 78% for non-standard resume formats (e.g., creative layouts, non-English sections).

Known Biases

Historical hiring data reflects industry-wide underrepresentation of women in senior financial analyst roles (32% vs. a 50% population baseline). Mitigation is applied via reweighting (see the sketch below).

Name-based features were removed after proxy discrimination patterns were detected in the v2.1 audit. The current version uses anonymized candidate IDs during scoring.
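
A minimal sketch of how the reweighting noted above (and detailed under Mitigation Measures below) might be computed, assuming the training records sit in a pandas DataFrame with a gender column and a 50/50 target baseline; the column name, group labels, and target shares are illustrative, not the production pipeline:

import pandas as pd

def demographic_weights(df, group_col, target_share):
    # Observed share of each group in the training data.
    observed = df[group_col].value_counts(normalize=True)
    # Weight = target share / observed share: underrepresented groups are
    # weighted up, overrepresented groups are weighted down.
    return df[group_col].map(lambda g: target_share[g] / observed[g])

# Illustrative records: 32% women vs. a 50/50 target baseline.
records = pd.DataFrame({"gender": ["F"] * 32 + ["M"] * 68})
weights = demographic_weights(records, "gender", {"F": 0.5, "M": 0.5})
# weights: ~1.56 for "F" rows, ~0.74 for "M" rows; these would be passed to
# training as sample weights, e.g. model.fit(X, y, sample_weight=weights)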

Mitigation Measures

Demographic reweighting applied to training data to correct for historical underrepresentation in senior roles.

All personally identifiable information (name, address, age indicators) removed before model scoring. Candidates are evaluated on skills, experience, and assessment scores only.

Quarterly disparate impact testing using the 4/5ths rule across race, gender, age, and disability status (see the sketch after this list). Results are published internally and available upon request.

Human review is required for all candidates ranked in the bottom 20% before rejection, ensuring AI recommendations are not the sole basis for adverse decisions.
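
A minimal sketch of the quarterly 4/5ths-rule test referenced above; the group labels and counts are illustrative placeholders, and the production audit covers race, gender, age, and disability status as listed:

def four_fifths_check(selections, applicants, threshold=0.80):
    # Selection rate per group (selected / applied).
    rates = {g: selections[g] / applicants[g] for g in applicants}
    # Compare each group's rate to the highest group's rate.
    reference = max(rates.values())
    ratios = {g: rate / reference for g, rate in rates.items()}
    failures = {g: r for g, r in ratios.items() if r < threshold}
    return ratios, failures

# Illustrative counts only; quarterly figures come from the audit data.
ratios, failures = four_fifths_check(
    selections={"women": 120, "men": 150},
    applicants={"women": 400, "men": 430},
)
# ratios ≈ {"women": 0.86, "men": 1.0}; an empty failures dict means the
# 4/5ths threshold is met for every group tested.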

5. Performance Metrics

Metric | Value | Benchmark
Selection Rate Parity (Gender) | 0.87 | ≥ 0.80 (4/5ths rule)
Selection Rate Parity (Race) | 0.83 | ≥ 0.80 (4/5ths rule)
Selection Rate Parity (Age 40+) | 0.91 | ≥ 0.80 (4/5ths rule)
Quality of Hire Score | 4.2/5.0 | 3.8/5.0 (industry avg)
Time-to-Hire Reduction | 60% | 30% (industry avg)
False Negative Rate | 8.3% | ≤ 15%
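
As a sketch of how these benchmarks could be checked automatically, the values and thresholds below mirror the table; reading the industry-average quality benchmark as a minimum is an assumption about its direction:

# Each entry: (metric, observed value, benchmark, direction of compliance).
checks = [
    ("Selection Rate Parity (Gender)",  0.87,  0.80, "min"),
    ("Selection Rate Parity (Race)",    0.83,  0.80, "min"),
    ("Selection Rate Parity (Age 40+)", 0.91,  0.80, "min"),
    ("Quality of Hire Score",           4.2,   3.8,  "min"),
    ("False Negative Rate",             0.083, 0.15, "max"),
]

for name, value, benchmark, direction in checks:
    ok = value >= benchmark if direction == "min" else value <= benchmark
    print(f"{name}: {'PASS' if ok else 'FAIL'} ({value} vs {benchmark})")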

6. Affected Populations & Potential Harms

Affected Populations

Job applicants for financial analyst, loan officer, and customer service positions, including career changers, candidates with non-traditional backgrounds, and members of historically underrepresented groups.

Potential Harms

Qualified candidates may be ranked lower than warranted if their experience doesn't match patterns in training data (e.g., career changers, non-traditional backgrounds).

Candidates from underrepresented groups may face compounded disadvantage if historical bias in training data is not fully mitigated.

Over-reliance on AI rankings by hiring managers could reduce consideration of qualitative factors not captured by the model.

7. Human Oversight Measures

Human recruiters review all AI-generated rankings before interview decisions are made, and the system does not make autonomous hiring decisions. Candidates ranked in the bottom 20% receive mandatory human review before any rejection, so AI recommendations are never the sole basis for an adverse decision.

Assessment Certification

I certify that this impact assessment has been conducted in good faith and accurately represents the current state of the AI system described above, in accordance with CRS §6-1-1703.

Assessor Name: Sarah Chen, Ph.D.
Title: VP of AI Governance
Date: January 15, 2026