GDPR & HIPAA for AI Systems
AI systems create new data protection obligations — automated decisions, PII in prompts, PHI in training data. We check the specific GDPR articles and HIPAA safeguards that apply.
Scope note: GDPR has 99 articles; HIPAA has 174+ pages of regulations. We test the AI-relevant subset — primarily data security (Art. 32), automated decisions (Art. 22), and PHI handling. Full compliance requires a qualified Data Protection Officer and legal review.
Why AI Creates New GDPR/HIPAA Obligations
Traditional data protection controls weren't designed for systems that process personal data through language models, make automated decisions, or ingest documents containing PHI.
PII in Prompts
User messages often contain names, emails, account numbers, or health data. Standard DLP tools typically don't inspect LLM prompt payloads. We do.
GDPR Art. 5, Art. 32 · HIPAA § 164.312
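As a sketch of what prompt-payload inspection can look like, the check below flags a few common PII shapes before a prompt leaves the application boundary. The pattern set and the `scan_prompt` helper are illustrative assumptions, not the assessment's actual tooling:

```python
import re

# Illustrative patterns only -- real DLP coverage needs far more classes
# (names, addresses, national IDs) and context-aware matching.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> dict[str, list[str]]:
    """Return PII matches found in an outbound LLM prompt payload."""
    return {
        label: matches
        for label, pattern in PII_PATTERNS.items()
        if (matches := pattern.findall(prompt))
    }

hits = scan_prompt("Contact jane.doe@example.com, SSN 123-45-6789")
# hits contains entries for "email" and "ssn"
```

A gate like this would sit in front of the LLM client, either redacting matches (minimisation, Art. 5) or blocking the request outright (security of processing, Art. 32).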
Automated Decisions
AI systems making credit, hiring, medical, or legal decisions trigger GDPR Art. 22 obligations — right to explanation, human review option, and explicit consent.
GDPR Art. 22, Art. 35
Training Data & PHI
Fine-tuning models on customer data or medical records without proper de-identification violates HIPAA § 164.514. We flag raw PHI patterns in training pipelines.
HIPAA § 164.514(b)
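The de-identification gate described above can be sketched as a pre-training filter. Only three of the eighteen § 164.514(b)(2) Safe Harbor identifier classes are shown, and the `reject_raw_phi` helper and its patterns are hypothetical simplifications:

```python
import re

# Sketch of a pre-training gate for HIPAA Safe Harbor de-identification.
# A production pipeline needs all 18 identifier classes (or an expert
# determination) plus NER-based detection for free-text names.
PHI_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def reject_raw_phi(records: list[str]) -> list[tuple[int, str]]:
    """Return (record_index, identifier_type) pairs that block training."""
    violations = []
    for i, text in enumerate(records):
        for label, pattern in PHI_PATTERNS.items():
            if pattern.search(text):
                violations.append((i, label))
    return violations

violations = reject_raw_phi([
    "Patient seen 03/14/2024, callback 555-867-5309",
    "de-identified summary note",
])
# only the first record is flagged (date and phone patterns)
```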
GDPR Articles We Assess for AI
We focus on the articles most frequently relevant to AI systems processing EU personal data.
Data Minimisation & Accuracy (Art. 5)
AI prompts must not include unnecessary personal data. We scan for PII patterns routed to LLM prompts without minimisation.
Rules: R3, R5
Automated Decision-Making (Art. 22)
Solely automated decisions with significant effects require safeguards. We flag AI pipelines making binding decisions without human oversight.
Rules: R2, R9, R9.1–R9.6
Data Protection by Design (Art. 25)
AI systems must be designed to process minimum data. We check for missing input validation and unfiltered data ingestion.
Rules: R3, R9.6
Security of Processing (Art. 32)
Technical measures must prevent unauthorised access. We test prompt injection defenses, credential exposure, and API key hardcoding.
Rules: R1, R5
Data Protection Impact Assessment (Art. 35)
High-risk AI processing (profiling, biometrics, systematic monitoring) requires a DPIA. We identify AI processing likely to trigger this requirement.
Rules: Assessment checklist
Penalties (Art. 83)
Up to €20M or 4% of global annual revenue, whichever is higher, for serious violations. AI data breaches via prompt injection or data exfiltration are a direct exposure.
Rules: R1, R7, R8
HIPAA Safeguards for AI Healthcare Systems
If your AI system touches Protected Health Information (PHI) — patient records, diagnostic data, billing — these safeguards apply.
Security Standards: General Rules (§ 164.306)
AI systems handling PHI must implement reasonable safeguards. We scan for missing authentication, encryption, and access control in AI pipelines.
Rules: R1, R5, R9
Administrative Safeguards (§ 164.308)
AI vendors accessing PHI must have a designated security officer, a documented risk analysis, and workforce training. We flag AI systems lacking audit controls.
Rules: R9.7, R9.8
Access Control (§ 164.312(a))
Unique user identification and emergency access procedures are required. We test for prompt injection bypassing access controls in AI medical systems.
Rules: R1, R9.4
Audit Controls (§ 164.312(b))
Mechanisms to record and examine activity in systems containing PHI are required. We check for missing audit logging in AI inference calls and tool executions.
Rules: R9.8, R-RT09
De-identification for AI Training (§ 164.514)
Training AI on PHI requires removing the 18 Safe Harbor identifiers or obtaining an expert determination. We scan for raw PHI patterns in AI training code and data pipelines.
Rules: R3
What the Assessment Produces
Privacy Impact Assessment
Article-by-article gap analysis showing which AI data processing activities need additional controls.
DPIA Trigger Checklist
Structured checklist to determine if a Data Protection Impact Assessment is required for your AI system.
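A trigger checklist of this kind can be encoded as a conservative gate: if any known high-risk condition applies, a DPIA is assumed to be required. The trigger names and the any-single-trigger threshold below are illustrative assumptions, not legal criteria:

```python
# Hypothetical encoding of common GDPR Art. 35 DPIA triggers.
# Trigger names are assumptions for illustration, not legal advice.
DPIA_TRIGGERS = {
    "profiling_with_legal_effect": "Scoring/profiling with legal or similar effect",
    "large_scale_special_category": "Large-scale processing of special-category data",
    "systematic_monitoring": "Systematic monitoring of publicly accessible areas",
    "biometric_identification": "Biometric identification of natural persons",
}

def dpia_required(system_flags: set[str]) -> bool:
    """Conservatively require a DPIA if any known trigger applies."""
    return bool(system_flags & DPIA_TRIGGERS.keys())

# An AI triage system that monitors patients systematically would trip the gate.
needs_dpia = dpia_required({"systematic_monitoring"})
```

Erring on the side of requiring a DPIA is deliberate: under-assessing high-risk processing carries the regulatory exposure, while an unnecessary DPIA costs only effort.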
BAA Requirement Analysis
Identifies which AI vendors and sub-processors require Business Associate Agreements under HIPAA.
Prioritised Remediation Roadmap
Ranked list of gaps to close: what's critical now vs. what can be addressed in the next sprint.
Who This Assessment Is For
Assess Your AI Data Protection Posture
~10 minutes. Covers GDPR Articles 5, 22, 25, 32, 35 and HIPAA Security Rule safeguards relevant to AI systems.
No signup required • Free • Not legal advice