---
title: 'Bias Detection Tool'
description: 'Detect and measure algorithmic bias in AI hiring systems - NYC LL144 compliant bias audits'
---
Bias Detection Tool
Detect and measure algorithmic bias in your AI hiring systems with HAIEC's automated bias detection tool. Generate NYC LL144-compliant bias audit reports.
What is Bias Detection?
Bias detection analyzes AI hiring systems to identify disparate impact - when an AI system selects candidates from different demographic groups at significantly different rates.
Why It Matters
Legal Requirements:
- NYC Local Law 144 requires annual bias audits for AI hiring tools
- EEOC Guidelines use the "four-fifths rule" (80% threshold)
- Title VII prohibits employment discrimination
Business Impact:
- Avoid penalties ($500 for a first violation, up to $1,500 for each subsequent violation; each day of noncompliant use can count as a separate violation)
- Reduce discrimination lawsuits
- Build diverse, high-performing teams
- Improve employer brand
Understanding Bias Metrics
Impact Ratio
The impact ratio compares selection rates between demographic groups.
Formula:
Impact Ratio = (Selection Rate for Group A) / (Selection Rate for Group B)
Where:
- Selection Rate = (Selected Candidates / Total Candidates) × 100%
Example:
Female candidates:
- Applied: 200
- Selected: 30
- Selection rate: 30/200 = 15%
Male candidates:
- Applied: 200
- Selected: 40
- Selection rate: 40/200 = 20%
Impact Ratio for females: 15% / 20% = 0.75
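The calculation above can be sketched in a few lines of Python; the group names and counts are the illustrative figures from the example, not real data:

```python
def selection_rate(selected: int, applied: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applied

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the reference group's rate."""
    return group_rate / reference_rate

female_rate = selection_rate(30, 200)  # 0.15
male_rate = selection_rate(40, 200)    # 0.20
print(round(impact_ratio(female_rate, male_rate), 2))  # → 0.75
```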
The 80% Rule (Four-Fifths Rule)
EEOC Standard:
- Impact ratio < 0.80 suggests disparate impact
- Not automatically illegal, but requires justification
- Triggers closer scrutiny
Interpretation:
| Impact Ratio | Interpretation | Action Required |
|--------------|----------------|-----------------|
| ≥ 0.80 | No disparate impact | Continue monitoring |
| 0.70-0.79 | Borderline | Investigate and monitor |
| 0.60-0.69 | Significant bias | Remediation recommended |
| < 0.60 | Severe bias | Immediate action required |
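The tiers in the table map directly to a small lookup; this is a minimal sketch of that mapping, with the action strings taken from the table:

```python
def classify_impact_ratio(ratio: float) -> str:
    """Map an impact ratio to the action tier in the interpretation table."""
    if ratio >= 0.80:
        return "Continue monitoring"
    if ratio >= 0.70:
        return "Investigate and monitor"
    if ratio >= 0.60:
        return "Remediation recommended"
    return "Immediate action required"

print(classify_impact_ratio(0.75))  # → Investigate and monitor
```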
Protected Categories
NYC LL144 Requirements
Must test for bias across:
1. Sex/Gender
- Male
- Female
- Non-binary (if sufficient data)
2. Race/Ethnicity (EEOC Categories)
- White
- Black or African American
- Hispanic or Latino
- Asian
- American Indian or Alaska Native
- Native Hawaiian or Other Pacific Islander
- Two or More Races
3. Intersectional Categories
- Sex × Race/Ethnicity combinations
- Example: Black Female, Asian Male, etc.
Data Requirements
Minimum Sample Size:
- At least 100 individuals per category (if available)
- If fewer than 100, use all available data
- Can use test data if insufficient historical data
Data Recency:
- Most recent 12 months of AI system use
- Or test data from pre-deployment testing
- Cannot use data older than 12 months
Using the Bias Detection Tool
Step 1: Prepare Your Data
Upload candidate data in CSV format with required fields:
Required Columns:
candidate_id,gender,race_ethnicity,ai_score,selected
001,Female,Black or African American,75,1
002,Male,White,82,1
003,Female,Asian,68,0
004,Male,Hispanic or Latino,79,1
Field Definitions:
- candidate_id - Unique identifier (anonymized)
- gender - Male, Female, Non-binary
- race_ethnicity - EEOC category
- ai_score - AI system output (0-100)
- selected - 1 if selected, 0 if not
Data Privacy
- Remove all PII (names, emails, addresses)
- Use anonymized candidate IDs
- Data is encrypted in transit and at rest
- Automatically deleted after analysis
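Before uploading, it can help to sanity-check the file against the required layout. This is a hedged pre-upload validation sketch using only the standard library; the column names follow the Required Columns list above and the sample row is illustrative:

```python
import csv
import io

REQUIRED_COLUMNS = {"candidate_id", "gender", "race_ethnicity", "ai_score", "selected"}

def validate_rows(csv_text: str) -> list[str]:
    """Return a list of problems found; an empty list means the file looks usable."""
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    problems = []
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        if not 0 <= int(row["ai_score"]) <= 100:
            problems.append(f"row {i}: ai_score out of range")
        if row["selected"] not in {"0", "1"}:
            problems.append(f"row {i}: selected must be 0 or 1")
    return problems

sample = "candidate_id,gender,race_ethnicity,ai_score,selected\n001,Female,Asian,68,0\n"
print(validate_rows(sample))  # → []
```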
Step 2: Configure Analysis
Select Analysis Type:
1. NYC LL144 Audit (Recommended)
   - Full compliance with NYC Local Law 144
   - All required metrics and categories
   - Publication-ready report
2. EEOC Compliance Check
   - Federal four-fifths rule analysis
   - Title VII compliance assessment
3. Custom Analysis
   - Select specific categories
   - Custom thresholds
   - Exploratory analysis
Set Parameters:
analysis:
  type: nyc_ll144
  threshold: 0.80          # Four-fifths rule
  confidence_level: 0.95   # Statistical significance
  include_intersectional: true
  min_sample_size: 100
Step 3: Run Analysis
Via Web Interface:
1. Upload CSV file
2. Select analysis type
3. Click "Run Bias Detection"
4. Wait 30-60 seconds for results

Via API:
curl -X POST https://api.haiec.com/v1/bias-detection \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@candidates.csv" \
  -F "analysis_type=nyc_ll144"
Step 4: Review Results
Summary Dashboard:
- Overall bias score (0-100)
- Impact ratios by category
- Statistical significance
- Compliance status
Detailed Findings:
- Category-by-category breakdown
- Intersectional analysis
- Trend analysis (if historical data)
- Recommendations
Interpreting Results
Example Report
=== BIAS DETECTION REPORT ===
Analysis Date: February 3, 2026
Analysis Type: NYC Local Law 144 Audit
Total Candidates: 1,000
Selected Candidates: 200
--- OVERALL RESULTS ---
✅ No significant disparate impact detected
Compliance Status: PASS
--- GENDER ANALYSIS ---
Male:
- Applied: 500
- Selected: 110
- Selection Rate: 22.0%
Female:
- Applied: 500
- Selected: 90
- Selection Rate: 18.0%
Impact Ratio: 0.82 ✅ (Above 0.80 threshold)
--- RACE/ETHNICITY ANALYSIS ---
White:
- Applied: 400
- Selected: 92
- Selection Rate: 23.0%
Black or African American:
- Applied: 250
- Selected: 48
- Selection Rate: 19.2%
Impact Ratio: 0.83 ✅ (Above 0.80 threshold)
Hispanic or Latino:
- Applied: 200
- Selected: 36
- Selection Rate: 18.0%
Impact Ratio: 0.78 ⚠️ (Below 0.80 threshold)
Asian:
- Applied: 150
- Selected: 24
- Selection Rate: 16.0%
Impact Ratio: 0.70 ❌ (Significant bias detected)
--- INTERSECTIONAL ANALYSIS ---
Asian Female:
- Applied: 75
- Selected: 9
- Selection Rate: 12.0%
Impact Ratio: 0.52 ❌ (Severe bias detected)
--- RECOMMENDATIONS ---
1. CRITICAL: Address severe bias against Asian Female candidates
- Impact ratio: 0.52 (well below 0.80 threshold)
- Investigate AI model for bias against this group
- Consider retraining with more diverse data
2. HIGH: Monitor Hispanic or Latino candidates
- Impact ratio: 0.78 (borderline)
- Track trend over time
- Investigate if ratio continues to decline
3. MEDIUM: Overall gender balance acceptable
- Impact ratio: 0.82 (above threshold)
- Continue monitoring
--- COMPLIANCE ASSESSMENT ---
NYC LL144: ⚠️ CONDITIONAL PASS
- Bias detected but publication still required
- Recommend remediation before continued use
- Document business necessity if continuing
EEOC Title VII: ⚠️ RISK IDENTIFIED
- Disparate impact detected for Asian candidates
- May face discrimination claims
- Consult employment counsel
What to Do with Results
✅ No Bias Detected (All ratios ≥ 0.80)
Actions:
- ✅ Publish bias audit summary (NYC LL144 requirement)
- ✅ Continue using AI system
- ✅ Schedule next annual audit
- ✅ Monitor ongoing performance
Sample Publication:
## Bias Audit Summary
**Audit Date:** February 3, 2026
**Auditor:** HAIEC Bias Detection Tool
**AEDT Vendor:** [Your Vendor]
**Results:**
- Gender impact ratio: 0.82 (No disparate impact)
- Race/ethnicity impact ratios: 0.81-0.95 (No disparate impact)
**Conclusion:** No significant bias detected. System meets EEOC standards.
**Next Audit:** February 2027
⚠️ Borderline Bias (0.70-0.79)
Actions:
- ⚠️ Publish results (required)
- ⚠️ Investigate root cause
- ⚠️ Monitor closely
- ⚠️ Consider remediation
Investigation Steps:
- Review AI model features
- Check training data diversity
- Analyze decision thresholds
- Test alternative configurations
❌ Significant Bias (< 0.70)
Actions:
- ❌ Publish results (required)
- ❌ Immediate remediation required
- ❌ Consult employment counsel
- ❌ Consider suspending AI system
Remediation Options:
- Retrain model with diverse data
- Remove biased features
- Add fairness constraints
- Increase human oversight
- Switch to different AI system
Remediation Strategies
1. Data-Level Fixes
Problem: Training data lacks diversity
Solutions:
- Collect more diverse training data
- Oversample underrepresented groups
- Use synthetic data augmentation
- Balance historical data
Example:
# Before: Imbalanced training data
training_data = {
    'white_male': 1000,
    'asian_female': 50  # Underrepresented
}

# After: Balanced training data
balanced_data = {
    'white_male': 1000,
    'asian_female': 1000  # Oversampled
}
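One way to produce those balanced counts is duplication-based oversampling. This is a minimal sketch under that assumption (the group size of 50 mirrors the example above); production pipelines would more often use stratified resampling or synthetic augmentation:

```python
import random

def oversample(rows: list, target: int, seed: int = 0) -> list:
    """Duplicate randomly chosen rows until the group reaches `target` examples."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = list(rows)
    while len(out) < target:
        out.append(rng.choice(rows))
    return out

# Hypothetical underrepresented group with 50 training examples
asian_female_rows = [{"group": "asian_female", "id": i} for i in range(50)]
balanced = oversample(asian_female_rows, target=1000)
print(len(balanced))  # → 1000
```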
2. Model-Level Fixes
Problem: Model learned biased patterns
Solutions:
- Remove proxy features (zip code, school names)
- Add fairness constraints
- Use bias mitigation algorithms
- Ensemble multiple models
Example:
# Add a fairness constraint via fairlearn's reductions approach.
# DemographicParity is a constraint object, not a mitigator, so it is
# paired with a reduction such as ExponentiatedGradient and a base estimator.
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.linear_model import LogisticRegression

mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X_train, y_train, sensitive_features=gender)
3. Process-Level Fixes
Problem: AI used inappropriately
Solutions:
- Add human review for borderline cases
- Use AI for screening only, not final decisions
- Implement appeal process
- Provide alternative application path
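The "human review for borderline cases" fix above can be sketched as a simple routing rule; the 80-point threshold and 5-point band here are illustrative assumptions, not recommended values:

```python
def route_candidate(ai_score: float, threshold: float = 80, band: float = 5) -> str:
    """Auto-advance clear passes, auto-screen clear fails, and send
    anything within `band` points of the threshold to a human reviewer."""
    if ai_score >= threshold + band:
        return "advance"
    if ai_score < threshold - band:
        return "screen_out"
    return "human_review"

print(route_candidate(82))  # → human_review
```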
4. Threshold Adjustments
Problem: Decision threshold creates bias
Solutions:
- Use group-specific thresholds
- Implement score banding
- Add randomization for ties
Example:
# Before: Single threshold
selected = candidates[candidates['ai_score'] >= 80]

# After: Group-specific thresholds
# (assumes a hypothetical 'group' column labeling each candidate)
thresholds = {
    'underrepresented': 75,  # Lower threshold
    'well_represented': 80
}
selected = candidates[candidates['ai_score'] >= candidates['group'].map(thresholds)]
NYC LL144 Compliance
Publication Requirements
What to Publish:
- Summary of bias audit results
- Impact ratios for all categories
- Audit date
- Auditor information
- AEDT vendor information
Where to Publish:
- Company careers page
- Publicly accessible (no login required)
- Available for 6+ months
Sample Publication:
# Automated Employment Decision Tool - Bias Audit Summary
**Company:** [Your Company]
**Audit Date:** February 3, 2026
**Auditor:** HAIEC Bias Detection Tool
**AEDT:** [Vendor Name] Resume Screening AI
## Results
### Gender
- Male selection rate: 22.0%
- Female selection rate: 18.0%
- Impact ratio: 0.82
### Race/Ethnicity
[Full results table]
## Auditor Information
- Name: HAIEC Bias Detection Tool
- Methodology: EEOC four-fifths rule
- Sample size: 1,000 candidates
- Data period: February 2025 - January 2026
## Next Audit
Scheduled for February 2027
Independent Auditor Requirement
Is HAIEC an "Independent Auditor"?
Yes, HAIEC qualifies as an independent auditor under NYC LL144 because:
- ✅ No financial interest in your company
- ✅ No financial interest in AEDT vendor
- ✅ No ongoing advisory relationship
- ✅ Automated, objective analysis
However, some employers prefer human auditors for additional assurance. HAIEC can complement or replace traditional audits.
API Integration
Automated Bias Monitoring
import { HAIECClient } from '@haiec/sdk';

const client = new HAIECClient({
  apiKey: process.env.HAIEC_API_KEY
});

// Run monthly bias check
async function monthlyBiasCheck() {
  // Get last month's candidate data
  const candidates = await getCandidateData();

  // Run bias detection
  const analysis = await client.biasDetection.analyze({
    data: candidates,
    analysisType: 'nyc_ll144',
    threshold: 0.80
  });

  // Alert if bias detected
  if (analysis.hasDisparateImpact) {
    await sendAlert({
      severity: 'high',
      message: `Bias detected: ${analysis.affectedGroups.join(', ')}`,
      impactRatios: analysis.impactRatios
    });
  }

  return analysis;
}
Best Practices
1. Test Before Deployment
Run bias detection on test data before deploying AI system.
2. Monitor Continuously
Don't wait for annual audit - check monthly or quarterly.
3. Document Everything
Keep records of:
- All bias audits
- Remediation efforts
- Business necessity justifications
- Consultation with counsel
4. Be Transparent
Publish results even if they show bias. Transparency builds trust.
5. Act on Findings
Don't just publish and ignore. Fix identified bias.
Pricing
| Plan | Price | Features |
|------|-------|----------|
| Single Audit | $99 | One-time bias detection |
| Annual | $499/year | 12 monthly checks + annual audit |
| Enterprise | Custom | Unlimited checks + API access |
Support
- 📚 Documentation: Bias Detection Guide
- 💬 Support: support@haiec.com
- 📞 Consultation: Schedule with employment law experts
Related Resources
- NYC LL144 Overview
- Penalty Calculator - Estimate violation costs
- Self-Audit Tool - Compliance readiness check