
---
title: 'Bias Detection Tool'
description: 'Detect and measure algorithmic bias in AI hiring systems - NYC LL144 compliant bias audits'
---

Bias Detection Tool

Detect and measure algorithmic bias in your AI hiring systems with HAIEC's automated bias detection tool. Generate NYC LL144-compliant bias audit reports.


What is Bias Detection?

Bias detection analyzes AI hiring systems to identify disparate impact: cases where an AI system selects candidates from different demographic groups at significantly different rates.

Why It Matters

Legal Requirements:

  • NYC Local Law 144 requires annual, independent bias audits of automated employment decision tools (AEDTs), with a summary of results posted publicly
  • Under Title VII, the EEOC treats large selection-rate gaps (the four-fifths rule) as evidence of disparate impact, exposing employers to discrimination claims

Business Impact:

  • Undetected bias narrows your talent pool and invites litigation and reputational damage
  • Documented, transparent audits build trust with candidates and regulators


Understanding Bias Metrics

Impact Ratio

The impact ratio compares selection rates between demographic groups.

Formula:

Impact Ratio = (Selection Rate for Group A) / (Selection Rate for Group B)

Where:
- Selection Rate = (Selected Candidates / Total Candidates) × 100%

Example:

Female candidates:
- Applied: 200
- Selected: 30
- Selection rate: 30/200 = 15%

Male candidates:
- Applied: 200
- Selected: 40
- Selection rate: 40/200 = 20%

Impact Ratio for females: 15% / 20% = 0.75
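
The calculation above can be reproduced in a few lines of Python (the function names here are illustrative, not part of the HAIEC tool):

```python
def selection_rate(selected: int, applied: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applied

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """A group's selection rate relative to the reference group's rate."""
    return group_rate / reference_rate

female_rate = selection_rate(30, 200)         # 0.15
male_rate = selection_rate(40, 200)           # 0.20
ratio = impact_ratio(female_rate, male_rate)  # 0.75
```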

The 80% Rule (Four-Fifths Rule)

EEOC Standard:

Under the EEOC's four-fifths rule, a group's selection rate should be at least 80% of the rate for the most-selected group. Ratios below 0.80 are treated as evidence of disparate impact.

Interpretation:

| Impact Ratio | Interpretation | Action Required |
|--------------|----------------|-----------------|
| ≥ 0.80 | No disparate impact | Continue monitoring |
| 0.70-0.79 | Borderline | Investigate and monitor |
| 0.60-0.69 | Significant bias | Remediation recommended |
| < 0.60 | Severe bias | Immediate action required |
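
The interpretation bands map naturally onto a small helper. This is a sketch (the thresholds mirror the table; the function name is our own):

```python
def classify_impact_ratio(ratio: float) -> str:
    """Map an impact ratio onto the four-fifths-rule interpretation bands."""
    if ratio >= 0.80:
        return "No disparate impact"
    if ratio >= 0.70:
        return "Borderline"
    if ratio >= 0.60:
        return "Significant bias"
    return "Severe bias"
```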


Protected Categories

NYC LL144 Requirements

Must test for bias across:

1. Sex/Gender (e.g. male and female, plus any additional categories you collect)

2. Race/Ethnicity (EEOC Categories): White, Black or African American, Hispanic or Latino, Asian, Native Hawaiian or Other Pacific Islander, American Indian or Alaska Native, Two or More Races

3. Intersectional Categories (each sex/gender combined with each race/ethnicity, e.g. Asian Female)

Data Requirements

Minimum Sample Size: at least 100 candidates per analyzed group is recommended; smaller groups produce statistically unreliable ratios (see the `min_sample_size` parameter in Step 2).

Data Recency: use the most recent 12 months of candidate data, and re-audit at least annually as NYC LL144 requires.


Using the Bias Detection Tool

Step 1: Prepare Your Data

Upload candidate data in CSV format with required fields:

Required Columns:

candidate_id,gender,race_ethnicity,ai_score,selected
001,Female,Black or African American,75,1
002,Male,White,82,1
003,Female,Asian,68,0
004,Male,Hispanic or Latino,79,1

Field Definitions:

  • candidate_id - anonymized identifier (no PII)
  • gender - self-reported sex/gender category
  • race_ethnicity - EEOC race/ethnicity category
  • ai_score - the score the AI system assigned to the candidate
  • selected - 1 if the candidate advanced, 0 if not

Data Privacy

  • Remove all PII (names, emails, addresses)
  • Use anonymized candidate IDs
  • Data is encrypted in transit and at rest
  • Automatically deleted after analysis
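
Before uploading, it can be worth sanity-checking the file locally. A minimal sketch using only the Python standard library (the inline sample mirrors the CSV above):

```python
import csv
import io

REQUIRED = {"candidate_id", "gender", "race_ethnicity", "ai_score", "selected"}

sample = """candidate_id,gender,race_ethnicity,ai_score,selected
001,Female,Black or African American,75,1
002,Male,White,82,1
003,Female,Asian,68,0
004,Male,Hispanic or Latino,79,1"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Verify all required columns are present
missing = REQUIRED - set(rows[0].keys())
assert not missing, f"missing columns: {missing}"

# Preview selection rate per gender before uploading
by_gender = {}
for row in rows:
    applied, selected = by_gender.get(row["gender"], (0, 0))
    by_gender[row["gender"]] = (applied + 1, selected + int(row["selected"]))

rates = {g: sel / app for g, (app, sel) in by_gender.items()}
```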

Step 2: Configure Analysis

Select Analysis Type:

  1. NYC LL144 Audit (Recommended)

    • Full compliance with NYC Local Law 144
    • All required metrics and categories
    • Publication-ready report
  2. EEOC Compliance Check

    • Federal four-fifths rule analysis
    • Title VII compliance assessment
  3. Custom Analysis

    • Select specific categories
    • Custom thresholds
    • Exploratory analysis

Set Parameters:

analysis:
  type: nyc_ll144
  threshold: 0.80  # Four-fifths rule
  confidence_level: 0.95  # Statistical significance
  include_intersectional: true
  min_sample_size: 100

Step 3: Run Analysis

# Via Web Interface
1. Upload CSV file
2. Select analysis type
3. Click "Run Bias Detection"
4. Wait 30-60 seconds for results

# Via API
curl -X POST https://api.haiec.com/v1/bias-detection \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@candidates.csv" \
  -F "analysis_type=nyc_ll144"
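
The same call can be made from Python. The sketch below only assembles the request; the endpoint and field names are copied from the curl example, `build_bias_request` is our own helper, and actually sending it needs the third-party `requests` package plus a real API key:

```python
import os

API_URL = "https://api.haiec.com/v1/bias-detection"

def build_bias_request(api_key: str, csv_path: str,
                       analysis_type: str = "nyc_ll144"):
    """Assemble the URL, headers, and form fields for the multipart POST."""
    headers = {"Authorization": f"Bearer {api_key}"}
    form = {"analysis_type": analysis_type}
    filename = os.path.basename(csv_path)
    return API_URL, headers, form, filename

# To actually send it:
# import requests
# url, headers, form, _ = build_bias_request(API_KEY, "candidates.csv")
# with open("candidates.csv", "rb") as f:
#     resp = requests.post(url, headers=headers, data=form, files={"file": f})
```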

Step 4: Review Results

Summary Dashboard: overall compliance status, impact ratios per category, and flagged groups at a glance.

Detailed Findings: per-group applicant counts, selection rates, impact ratios, and prioritized recommendations (see the example report below).


Interpreting Results

Example Report

=== BIAS DETECTION REPORT ===
Analysis Date: February 3, 2026
Analysis Type: NYC Local Law 144 Audit
Total Candidates: 1,000
Selected Candidates: 200

--- OVERALL RESULTS ---
⚠️ Disparate impact detected in some categories (details below)
Compliance Status: CONDITIONAL PASS

--- GENDER ANALYSIS ---
Male:
  - Applied: 500
  - Selected: 110
  - Selection Rate: 22.0%
  
Female:
  - Applied: 500
  - Selected: 90
  - Selection Rate: 18.0%
  
Impact Ratio: 0.82 ✅ (Above 0.80 threshold)

--- RACE/ETHNICITY ANALYSIS ---
White:
  - Applied: 400
  - Selected: 92
  - Selection Rate: 23.0%

Black or African American:
  - Applied: 250
  - Selected: 48
  - Selection Rate: 19.2%
  
Impact Ratio: 0.83 ✅ (Above 0.80 threshold)

Hispanic or Latino:
  - Applied: 200
  - Selected: 36
  - Selection Rate: 18.0%
  
Impact Ratio: 0.78 ⚠️ (Below 0.80 threshold)

Asian:
  - Applied: 150
  - Selected: 24
  - Selection Rate: 16.0%
  
Impact Ratio: 0.70 ❌ (0.696 unrounded; significant bias detected)

--- INTERSECTIONAL ANALYSIS ---
Asian Female:
  - Applied: 75
  - Selected: 9
  - Selection Rate: 12.0%
  
Impact Ratio: 0.52 ❌ (Severe bias detected)

--- RECOMMENDATIONS ---
1. CRITICAL: Address severe bias against Asian Female candidates
   - Impact ratio: 0.52 (well below 0.80 threshold)
   - Investigate AI model for bias against this group
   - Consider retraining with more diverse data

2. HIGH: Monitor Hispanic or Latino candidates
   - Impact ratio: 0.78 (borderline)
   - Track trend over time
   - Investigate if ratio continues to decline

3. MEDIUM: Overall gender balance acceptable
   - Impact ratio: 0.82 (above threshold)
   - Continue monitoring

--- COMPLIANCE ASSESSMENT ---
NYC LL144: ⚠️ CONDITIONAL PASS
- Bias detected but publication still required
- Recommend remediation before continued use
- Document business necessity if continuing

EEOC Title VII: ⚠️ RISK IDENTIFIED
- Disparate impact detected for Asian candidates
- May face discrimination claims
- Consult employment counsel

What to Do with Results

✅ No Bias Detected (All ratios ≥ 0.80)

Actions:

  1. ✅ Publish bias audit summary (NYC LL144 requirement)
  2. ✅ Continue using AI system
  3. ✅ Schedule next annual audit
  4. ✅ Monitor ongoing performance

Sample Publication:

## Bias Audit Summary

**Audit Date:** February 3, 2026
**Auditor:** HAIEC Bias Detection Tool
**AEDT Vendor:** [Your Vendor]

**Results:**
- Gender impact ratio: 0.82 (No disparate impact)
- Race/ethnicity impact ratios: 0.81-0.95 (No disparate impact)

**Conclusion:** No significant bias detected. System meets EEOC standards.

**Next Audit:** February 2027

⚠️ Borderline Bias (0.70-0.79)

Actions:

  1. ⚠️ Publish results (required)
  2. ⚠️ Investigate root cause
  3. ⚠️ Monitor closely
  4. ⚠️ Consider remediation

Investigation Steps:

  1. Verify the data: sample sizes, labeling errors, missing demographics
  2. Check which features drive scores for the affected group (proxy variables)
  3. Compare AI outcomes against prior human decisions for the same roles
  4. Re-run the analysis on a fresh data window to confirm the trend

❌ Significant Bias (< 0.70)

Actions:

  1. ❌ Publish results (required)
  2. ❌ Immediate remediation required
  3. ❌ Consult employment counsel
  4. ❌ Consider suspending AI system

Remediation Options: data-level, model-level, process-level, and threshold fixes, detailed in the next section.


Remediation Strategies

1. Data-Level Fixes

Problem: Training data lacks diversity

Solutions:

  • Collect more training examples from underrepresented groups
  • Oversample or reweight underrepresented groups during training
  • Audit historical labels for encoded human bias

Example:

# Before: Imbalanced training data
training_data = {
    'white_male': 1000,
    'asian_female': 50  # Underrepresented
}

# After: Balanced training data
balanced_data = {
    'white_male': 1000,
    'asian_female': 1000  # Oversampled
}
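
In practice the rebalancing is done on the actual feature rows rather than on counts. A plain-Python sketch of oversampling with replacement (the group names and record shape are illustrative):

```python
import random

random.seed(42)  # reproducible resampling

majority_rows = [{"group": "white_male", "id": i} for i in range(1000)]
minority_rows = [{"group": "asian_female", "id": i} for i in range(50)]

# Draw extra minority rows with replacement until the groups match in size
extra = random.choices(minority_rows,
                       k=len(majority_rows) - len(minority_rows))
balanced_minority = minority_rows + extra

counts = {"white_male": len(majority_rows),
          "asian_female": len(balanced_minority)}
```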

2. Model-Level Fixes

Problem: Model learned biased patterns

Solutions:

  • Train with explicit fairness constraints (e.g. demographic parity)
  • Remove features that act as proxies for protected attributes
  • Retrain and re-test until impact ratios clear the 0.80 threshold

Example:

# Mitigate with a fairness constraint via fairlearn's reductions API:
# DemographicParity is the constraint; ExponentiatedGradient wraps an estimator
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.linear_model import LogisticRegression

mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X_train, y_train, sensitive_features=gender)
fair_predictions = mitigator.predict(X_test)

3. Process-Level Fixes

Problem: AI used inappropriately

Solutions:

  • Use the AI for initial screening only, with human review of final decisions
  • Provide candidates the advance notice NYC LL144 requires before an AEDT is used
  • Add a structured appeal or re-review path for rejected candidates

4. Threshold Adjustments

Problem: Decision threshold creates bias

Solutions:

  • Analyze score distributions by group to see where the cutoff bites
  • Calibrate thresholds so selection rates equalize across groups
  • Consult employment counsel first; group-specific thresholds can raise disparate-treatment concerns

Example:

# Before: Single threshold
selected = candidates[candidates['ai_score'] >= 80]

# After: Group-specific thresholds
thresholds = {
    'underrepresented': 75,  # Lower threshold
    'well_represented': 80
}
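
One way to derive group-specific cutoffs is to target a common selection rate per group. A sketch under that assumption (`equalizing_thresholds` is our own helper; review legality with employment counsel before using group-specific thresholds):

```python
def equalizing_thresholds(scores_by_group: dict, target_rate: float) -> dict:
    """For each group, find the score cutoff that selects roughly
    target_rate of that group's candidates."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores, reverse=True)
        k = max(1, round(len(ranked) * target_rate))
        thresholds[group] = ranked[k - 1]  # lowest score still selected
    return thresholds

scores = {
    "well_represented": [90, 85, 80, 76, 72, 70],
    "underrepresented": [84, 79, 75, 71, 68, 64],
}
# Selecting the top half of each group yields per-group cutoffs
cutoffs = equalizing_thresholds(scores, target_rate=0.5)
```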

NYC LL144 Compliance

Publication Requirements

What to Publish:

  • Selection rates and impact ratios for every required category (sex, race/ethnicity, and intersectional)
  • The date of the audit and the distribution date of the AEDT

Where to Publish:

  • The employment section of your public website, before the AEDT is used and for as long as it remains in use

Sample Publication:

# Automated Employment Decision Tool - Bias Audit Summary

**Company:** [Your Company]
**Audit Date:** February 3, 2026
**Auditor:** HAIEC Bias Detection Tool
**AEDT:** [Vendor Name] Resume Screening AI

## Results

### Gender
- Male selection rate: 22.0%
- Female selection rate: 18.0%
- Impact ratio: 0.82

### Race/Ethnicity
[Full results table]

## Auditor Information
- Name: HAIEC Bias Detection Tool
- Methodology: EEOC four-fifths rule
- Sample size: 1,000 candidates
- Data period: February 2025 - January 2026

## Next Audit
Scheduled for February 2027

Independent Auditor Requirement

Is HAIEC an "Independent Auditor"?

Yes, HAIEC qualifies as an independent auditor under NYC LL144 because:

  • ✅ No financial interest in your company
  • ✅ No financial interest in AEDT vendor
  • ✅ No ongoing advisory relationship
  • ✅ Automated, objective analysis

However, some employers prefer human auditors for additional assurance. HAIEC can complement or replace traditional audits.


API Integration

Automated Bias Monitoring

import { HAIECClient } from '@haiec/sdk';

const client = new HAIECClient({
  apiKey: process.env.HAIEC_API_KEY
});

// Run monthly bias check
async function monthlyBiasCheck() {
  // Get last month's candidate data
  const candidates = await getCandidateData();
  
  // Run bias detection
  const analysis = await client.biasDetection.analyze({
    data: candidates,
    analysisType: 'nyc_ll144',
    threshold: 0.80
  });
  
  // Alert if bias detected
  if (analysis.hasDisparateImpact) {
    await sendAlert({
      severity: 'high',
      message: `Bias detected: ${analysis.affectedGroups.join(', ')}`,
      impactRatios: analysis.impactRatios
    });
  }
  
  return analysis;
}

Best Practices

1. Test Before Deployment

Run bias detection on test data before deploying AI system.

2. Monitor Continuously

Don't wait for annual audit - check monthly or quarterly.

3. Document Everything

Keep records of:

  • Every audit report and its underlying data sample
  • Remediation actions taken and their dates
  • Publication dates and the published summaries

4. Be Transparent

Publish results even if they show bias. Transparency builds trust.

5. Act on Findings

Don't just publish and ignore. Fix identified bias.


Pricing

| Plan | Price | Features |
|------|-------|----------|
| Single Audit | $99 | One-time bias detection |
| Annual | $499/year | 12 monthly checks + annual audit |
| Enterprise | Custom | Unlimited checks + API access |


Support


Related Resources