GDPR + HIPAA DATA PRIVACY

GDPR & HIPAA for AI Systems

AI systems create new data protection obligations — automated decisions, PII in prompts, PHI in training data. We check the specific GDPR articles and HIPAA safeguards that apply.

Scope note: GDPR has 99 articles; HIPAA has 174+ pages of regulations. We test the AI-relevant subset — primarily data security (Art. 32), automated decisions (Art. 22), and PHI handling. Full compliance requires a qualified Data Protection Officer and legal review.

6 GDPR AI articles assessed · 5 HIPAA safeguard categories · ~10 minutes

Why AI Creates New GDPR/HIPAA Obligations

Traditional data protection controls weren't designed for systems that process personal data through language models, make automated decisions, or ingest documents containing PHI.

PII in Prompts

User messages often contain names, emails, account numbers, or health data. Most standard DLP tools don't inspect LLM prompt payloads. We do.

GDPR Art. 5, Art. 32 · HIPAA § 164.312
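A minimal sketch of what a prompt-payload PII check looks like. The regex patterns and the `scan_prompt` helper are illustrative assumptions for this example, not our actual rule set; production scanners use far richer pattern libraries plus context-aware detection.

```python
import re

# Illustrative PII patterns only (assumption: not the real rule set).
# Real detection also needs name/address models beyond regex.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the PII categories found in an outbound LLM prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]
```

Running this against each prompt before it leaves your perimeter gives you the Art. 5 minimisation signal: any non-empty result means personal data is being routed to the model without review.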

Automated Decisions

AI systems making credit, hiring, medical, or legal decisions trigger GDPR Art. 22 obligations — the right to human intervention, the right to contest the decision, and meaningful information about the logic involved.

GDPR Art. 22, Art. 35

Training Data & PHI

Fine-tuning models on customer data or medical records without proper de-identification violates HIPAA § 164.514. We flag raw PHI patterns in training pipelines.

HIPAA § 164.514(b)

GDPR Articles We Assess for AI

We focus on the articles most frequently relevant to AI systems processing EU personal data.

Art. 5

Data Minimisation & Accuracy

AI prompts must not include unnecessary personal data. We scan for PII patterns routed to LLM prompts without minimisation.

Rules: R3, R5
Art. 22

Automated Decision-Making

Solely automated decisions with significant effects require safeguards. We flag AI pipelines making binding decisions without human oversight.

Rules: R2, R9, R9.1–R9.6
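One way to enforce the Art. 22 safeguard in code is a gate that refuses to finalise a significant decision without a human in the loop. The `Decision` class and `finalise` function below are a hypothetical sketch, not a prescribed design:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # e.g. "approve" / "deny"
    significant: bool   # legal or similarly significant effect (Art. 22)

def finalise(decision: Decision, human_reviewed: bool) -> str:
    """Refuse to emit a significant decision that was made solely automatically."""
    if decision.significant and not human_reviewed:
        # Art. 22 safeguard: route to human intervention instead of acting.
        return "escalate_to_human"
    return decision.outcome
```

The point of the pattern is that the escalation path is structural — the pipeline cannot emit a binding outcome while `human_reviewed` is false.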
Art. 25

Data Protection by Design

AI systems must be designed to process minimum data. We check for missing input validation and unfiltered data ingestion.

Rules: R3, R9.6
Art. 32

Security of Processing

Technical measures must prevent unauthorised access. We test prompt injection defenses, credential exposure, and API key hardcoding.

Rules: R1, R5
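A hardcoded-credential scan of the kind referenced above can be sketched in a few lines. These three patterns are illustrative assumptions (real scanners combine many signatures with entropy analysis):

```python
import re

# Illustrative secret signatures (assumption: not an exhaustive rule set).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                          # OpenAI-style key
    re.compile(r"AKIA[0-9A-Z]{16}"),                             # AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),  # generic assignment
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return literal matches of likely hardcoded credentials in source code."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits
```

Any hit is an Art. 32 finding: a credential in source survives version control history and is recoverable by anyone with repository access.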
Art. 35

Data Protection Impact Assessment

High-risk AI processing (profiling, biometrics, systematic monitoring) requires a DPIA. We identify AI processing likely to trigger this requirement.

Rules: Assessment checklist
Art. 83

Penalties

Up to €20M or 4% of global annual turnover, whichever is higher, for serious violations. AI data breaches via prompt injection or data exfiltration are a direct exposure.

Rules: R1, R7, R8

HIPAA Safeguards for AI Healthcare Systems

If your AI system touches Protected Health Information (PHI) — patient records, diagnostic data, billing — these safeguards apply.

§ 164.306

Security Standards (General)

AI systems handling PHI must implement reasonable safeguards. We scan for missing authentication, encryption, and access control in AI pipelines.

Rules: R1, R5, R9
§ 164.308

Administrative Safeguards

AI vendors accessing PHI must have a designated security officer, a documented risk analysis, and workforce training. We flag AI systems lacking audit controls.

Rules: R9.7, R9.8
§ 164.312(a)

Access Control

Unique user identification and emergency access procedures required. We test for prompt injection bypassing access controls in AI medical systems.

Rules: R1, R9.4
§ 164.312(b)

Audit Controls

Must implement mechanisms to record and examine activity. We check for missing audit logging in AI inference calls and tool executions.

Rules: R9.8, R-RT09
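A minimal audit-logging wrapper around an inference call might look like the sketch below. The `audited_inference` function and field names are illustrative assumptions; note it logs payload sizes rather than raw content, so the audit trail itself doesn't become a PHI store:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

def audited_inference(model_call, prompt: str, user_id: str) -> str:
    """Wrap an LLM call so every request/response is recorded (§ 164.312(b))."""
    request_id = str(uuid.uuid4())
    start = time.time()
    response = model_call(prompt)
    audit_log.info(json.dumps({
        "request_id": request_id,
        "user_id": user_id,            # unique user identification (§ 164.312(a))
        "prompt_chars": len(prompt),   # log sizes, not raw PHI
        "response_chars": len(response),
        "latency_s": round(time.time() - start, 3),
    }))
    return response
```

The same wrapper pattern extends to tool executions: any side-effecting call goes through one choke point that emits a structured record.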
§ 164.514(b)

De-identification for AI Training

Training AI on PHI requires de-identification — removal of the 18 Safe Harbor identifiers, or expert determination. We scan for raw PHI patterns in AI training code and data pipelines.

Rules: R3
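A training-pipeline PHI check can be sketched as a record-level scan for a few of the Safe Harbor identifier classes. The three patterns and the `flag_training_records` helper are illustrative assumptions covering only a fraction of the 18 identifiers:

```python
import re

# A few of the 18 Safe Harbor identifier classes (§ 164.514(b)(2));
# patterns are illustrative assumptions, not exhaustive.
PHI_PATTERNS = {
    "phone": re.compile(r"\b\(?\d{3}\)?[ -.]?\d{3}[-.]\d{4}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.IGNORECASE),
}

def flag_training_records(records: list[str]) -> dict[int, list[str]]:
    """Map record index -> PHI categories found, for de-identification review."""
    flagged = {}
    for i, record in enumerate(records):
        hits = [name for name, p in PHI_PATTERNS.items() if p.search(record)]
        if hits:
            flagged[i] = hits
    return flagged
```

Wired into the pipeline as a pre-ingestion gate, a non-empty result blocks the batch until the flagged records are de-identified or removed.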
Note: HIPAA applies to Covered Entities (healthcare providers, health plans) and their Business Associates (AI vendors processing PHI on their behalf). Business Associate Agreements (BAAs) are required before accessing PHI — our assessment includes a BAA checklist.

What the Assessment Produces

Privacy Impact Assessment

Article-by-article gap analysis showing which AI data processing activities need additional controls.

DPIA Trigger Checklist

Structured checklist to determine if a Data Protection Impact Assessment is required for your AI system.

BAA Requirement Analysis

Identifies which AI vendors and sub-processors require Business Associate Agreements under HIPAA.

Prioritised Remediation Roadmap

Ranked list of gaps to close: what's critical now vs. what can be addressed in the next sprint.

Who This Assessment Is For

SaaS companies with EU customers using AI features
Healthcare AI vendors building on patient data
HR tech companies using AI for hiring decisions
Fintech platforms with automated credit decisions
Any company fine-tuning models on customer data
AI companies needing GDPR Article 35 DPIA documentation

Assess Your AI Data Protection Posture

~10 minutes. Covers GDPR Articles 5, 22, 25, 32, 35 and HIPAA Security Rule safeguards relevant to AI systems.

No signup required • Free • Not legal advice