Skip to main content
System Architecture

How HAIEC Works

HAIEC uses 7 deterministic engines and 8 platform tools to assess AI risk, generate compliance evidence, and enforce governance in production systems. Every result is reproducible, verifiable, and mapped to real regulations.

System Overview

HAIEC operates with 7 core engines:

1

GitHub Control Signals

Lightweight metadata collection from repository configuration

2

AI Security Static Engine

Source code analysis using deterministic pattern matching

3

AI Security Runtime Engine

Live endpoint testing with attack payloads

4

Compliance Evidence Wizard

Framework-specific evidence collection and artifact generation

5

CSM6 Maturity Assessment

AI governance maturity scoring across 6 layers

6

ISAF Audit Logger

Instruction Stack Audit Framework for ML training lineage

7

Kill Switch Management

Emergency AI execution stop with audit-grade evidence

Engine 1: GitHub Control Signals

The GitHub App collects repository metadata to assess governance readiness:

CI/CD Detection

Identifies GitHub Actions, CircleCI, Jenkins workflows

Branch Protection

Verifies required reviews, status checks, restrictions

Policy Files

Detects SECURITY.md, CODE_OF_CONDUCT.md, LICENSE

Dependency Scanning

Checks for Dependabot, Snyk, or other security tools

No Code Access: The GitHub App only reads repository metadata. It does not clone or analyze source code.

Engine 2: AI Security Static Scanner

The Static Scanner analyzes source code using deterministic pattern matching (Semgrep):

78 Security Rules

Covers OWASP LLM Top 10, prompt injection, tool abuse, data leakage, access control

Compliance Mappings

Every rule maps to SOC 2, ISO 27001, GDPR, HIPAA, OWASP, CWE, NIST AI RMF

Deterministic Results

Same code + same rules = same results. No AI, no probabilistic scoring.

Ephemeral Scanning: Code is cloned to `/tmp`, scanned, and immediately deleted. No code is stored or transmitted.

Engine 3: AI Security Runtime Scanner

The Runtime Scanner tests live AI endpoints with attack payloads:

Behavioral Testing

Sends prompt injection, jailbreak, and misuse attacks to live endpoints

No Code Required

Only needs endpoint URL and authorization. No repository access needed.

Real-World Attacks

Tests actual AI behavior, not just code patterns. Detects runtime vulnerabilities.

No Artifact: Runtime findings are shown in UI only. No badge or artifact is generated (behavioral testing is not audit evidence).

Engine 4: Compliance Evidence Wizard

The Compliance Wizard collects framework-specific evidence:

SOC 2 Type II

Trust Service Criteria evidence collection

ISO 27001/42001

Information security and AI management controls

GDPR & HIPAA

Data protection and healthcare privacy evidence

NYC LL144

Bias audit readiness documentation

Questionnaire-Based: Wizard asks framework-specific questions. Responses are mapped to controls and evidence requirements.

Artifacts & Badges

HAIEC generates cryptographically verifiable artifacts and public badges:

AI Security Attestation

Generated by Static Scanner. Contains scan results, rule violations, compliance mappings.

SHA-256 evidence hash • Tamper-proof timestamps • Public verification URL

Compliance Evidence Mark

Generated by Compliance Wizard. Framework-specific evidence and control mappings.

Cryptographic signature • Evidence lineage • Audit-defensible proof

GitHub Control Signals Badge

Generated by GitHub App. Repository governance readiness indicator.

Metadata only • No code analysis • Public badge URL

Patent Pending: HAIEC's cryptographic artifact generation and evidence lineage system is patent pending, providing unique verifiability for compliance evidence.

Engine 5: CSM6 Maturity Assessment

CSM6 (Compliance Stack Maturity Model) assesses AI governance maturity across 6 layers:

Layer 1: Infrastructure

Cloud security, access control, network isolation

Layer 2: Data

Data governance, privacy, retention policies

Layer 3: Model

Training data provenance, model versioning, bias testing

Layer 4: Application

Prompt injection defense, output validation, rate limiting

Layer 5: Monitoring

Logging, alerting, incident response, drift detection

Layer 6: Governance

Policies, risk assessments, compliance documentation

Maturity Scoring: CSM6 provides a 0-100 score per layer, identifying gaps and prioritizing remediation.

Engine 6: ISAF Audit Logger

ISAF (Instruction Stack Audit Framework) captures ML training lineage for compliance:

Training Data Lineage

Automatically logs data sources, versions, transformations with cryptographic verification

Objective Tracking

Records training objectives, loss functions, fairness constraints for audit trails

Compliance Export

Generates audit-ready reports for EU AI Act, NIST AI RMF, ISO 42001, Colorado AI Act

Python Package: ISAF integrates with PyTorch, TensorFlow, scikit-learn via decorators. Minimal code changes required.

Engine 7: Kill Switch Management

Kill Switch provides emergency AI execution stop with audit-grade evidence:

Application-Level Control

Stops AI execution at application layer before reaching LLM API. No infrastructure changes needed.

RBAC & Audit Logging

Role-based access control for kill switch triggers. Every activation logged with cryptographic proof.

Real-Time Dashboard

Monitor AI execution status, view kill switch history, configure automated triggers.

Production Ready: Kill Switch SDK is production-ready with TypeScript support. Integrates with existing auth systems.

8 Platform Tools

Beyond the 7 engines, HAIEC provides specialized tools for specific compliance needs:

1. AI Inventory

Track API usage, costs, and AI system inventory with SDK wrappers for OpenAI and Anthropic

2. NYC LL144 Bias Audit

Regulator-defensible bias analysis for AI hiring tools (NYC Local Law 144 compliance)

3. Policy Generator

Generate compliance policies for SOC2, ISO27001, HIPAA, GDPR with framework-specific templates

4. AI Vendor Risk Scanner

Assess third-party AI vendor risk by analyzing public claims and regulatory exposure

5. MARPP Governance

Database-level evidence immutability with PostgreSQL triggers (append-only audit logs)

6. Compliance Twin

Alert engine, anomaly detector, and compliance simulation for continuous monitoring

7. Bootstrap Packages

Compliance starter kits with pre-configured policies, templates, and evidence checklists

8. LLMverify

AI output verification library for risk detection (open-source npm package)

Deterministic Guarantees

Reproducible Results

Same inputs always produce same outputs. No randomness, no AI guessing.

Explainable Logic

Every finding traces to a specific rule. Every rule maps to a regulation.

Verifiable Evidence

Cryptographic hashes prove evidence integrity. Public URLs enable third-party verification.

Ready to Explore?

Explore our engines • Learn about our approach • No commitment required