HIPAA Compliance for AI
AI systems introduce new HIPAA risk vectors — PHI in LLM prompts, AI agents querying medical records, models trained on patient data. We test the Security Rule safeguards that apply.
Scope note: HIPAA has three principal rules (Privacy, Security, Breach Notification). We focus on the Security Rule technical safeguards most relevant to AI systems. Full HIPAA compliance requires legal counsel, a covered entity relationship analysis, and executed Business Associate Agreements with every AI vendor handling ePHI.
Why AI Creates New HIPAA Exposure
Traditional HIPAA controls were designed for EHR systems with known data flows. AI systems create novel pathways for PHI to travel, be processed, or be inadvertently disclosed.
PHI in AI Prompts
Patient names, diagnoses, and insurance IDs often appear in AI prompts. Most LLM providers explicitly disclaim HIPAA coverage unless a BAA is in place — and even then, data handling must be verified.
HIPAA § 164.312(a) · § 164.504(e)
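One mitigation is a pre-flight gate that refuses to send a prompt containing obvious PHI patterns to an external LLM API. The sketch below is illustrative only: the two regexes (SSN and a hypothetical MRN format) cover a tiny fraction of real PHI, and the function name is our own, not part of any library.

```python
import re

# Illustrative patterns only -- a production PHI detector needs far broader
# coverage (names, dates, geographic data, account numbers) than these two.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE),
}

def assert_no_phi(prompt: str) -> None:
    """Raise before the prompt leaves the process if a PHI pattern matches."""
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            raise ValueError(f"possible PHI ({label}) detected; blocking LLM call")

# Clean prompts pass through silently; flagged prompts never reach the API.
assert_no_phi("Summarise the latest oncology treatment guidelines.")
```

A gate like this is a defence-in-depth measure, not a substitute for a BAA: it reduces accidental disclosure but cannot catch free-text identifiers such as names.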
AI Training on Patient Data
Fine-tuning on EHR data, clinical notes, or billing records without de-identification falls outside the § 164.514(b) safe harbor. Under the Safe Harbor method, the 18 HIPAA identifiers must be removed (Expert Determination is the alternative) before the data can be treated as de-identified in an ML pipeline.
HIPAA § 164.514(b) · § 164.308
AI Agents Accessing Medical Records
Agentic AI systems that query patient databases through tool calls must log every access. Missing audit trails in AI agent frameworks are one of the most common HIPAA violations in healthcare AI deployments.
HIPAA § 164.312(b)
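The access-logging requirement can be enforced mechanically by wrapping every agent tool so that no call runs without emitting an audit record. This is a minimal sketch with hypothetical names (`audited_tool`, `lookup_patient`); it logs call metadata only, never raw PHI, and assumes the caller supplies a unique user identifier.

```python
import functools
import json
import logging
import time
import uuid

audit_log = logging.getLogger("phi.audit")

def audited_tool(tool_fn):
    """Wrap an agent tool so every invocation emits a 164.312(b)-style record."""
    @functools.wraps(tool_fn)
    def wrapper(*args, user_id, **kwargs):  # user_id is keyword-only and required
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "user_id": user_id,        # unique user identification
            "tool": tool_fn.__name__,
            "args_redacted": True,     # log metadata, never raw PHI values
        }
        audit_log.info(json.dumps(record))
        return tool_fn(*args, **kwargs)
    return wrapper

@audited_tool
def lookup_patient(patient_id: str) -> dict:
    # Stand-in for a real EHR query behind proper access controls.
    return {"patient_id": patient_id}

lookup_patient("p-001", user_id="clinician-42")
```

Making `user_id` a required keyword argument means a tool call with no attributable user fails loudly instead of producing an anonymous, unauditable access.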
HIPAA Security Rule Safeguards for AI
We assess the Security Rule provisions most likely to be triggered by AI healthcare systems. Each is mapped to specific HAIEC scan rules.
Security Standards (General Requirements)
AI systems that create, receive, maintain, or transmit ePHI must implement reasonable and appropriate safeguards. We scan AI pipelines for missing authentication, encryption gaps, and unprotected inference endpoints that handle patient data.
Rules: R1, R5, R9
Administrative Safeguards
AI vendors accessing PHI must have a designated security officer, a documented risk analysis, and workforce training in place. We flag AI systems that log PHI without audit controls, and identify missing security management process documentation.
Rules: R9.7, R9.8
Physical Safeguards
AI model servers and GPU infrastructure must have appropriate facility access controls. We identify cloud AI deployments lacking workload isolation and flag AI inference endpoints exposed beyond the secure perimeter.
Rules: R9.4, R9.5
Technical Access Controls
Unique user identification and emergency access procedures are required for all ePHI systems. We test for prompt injection that bypasses access controls in medical AI systems, and flag missing authentication on AI endpoints.
Rules: R1, R9.4
Audit Controls
Must implement mechanisms to record and examine activity in systems containing ePHI. We check for missing audit logging in AI inference calls, tool executions by AI agents, and unlogged access to medical data retrieval systems.
Rules: R9.8, R-RT09
Transmission Security
ePHI transmitted over open networks must be encrypted. We flag AI pipelines that send patient data to third-party LLM APIs without explicit encryption confirmation, and detect unencrypted PHI in AI webhook payloads.
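A simple enforcement point is to validate the transport scheme of every outbound AI endpoint before any ePHI is serialised. The helper below is a sketch under the assumption that endpoints are configured as URLs; the function name and example hosts are ours. TLS at the transport layer is necessary but not sufficient — the BAA and the provider's handling of data at rest still matter.

```python
from urllib.parse import urlparse

def require_encrypted_endpoint(url: str) -> str:
    """Reject any AI API endpoint that is not HTTPS before ePHI leaves the process."""
    scheme = urlparse(url).scheme
    if scheme != "https":
        raise ValueError(f"refusing to send ePHI over {scheme!r}; TLS is required")
    return url

# Hypothetical endpoint; only the scheme is checked here.
require_encrypted_endpoint("https://llm-api.example.com/v1/chat")
```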
Rules: R5
De-identification for AI Training
Fine-tuning models on patient data requires removing all 18 HIPAA identifiers. We scan AI training pipelines and RAG document stores for raw PHI patterns — names, dates, geographic data, account numbers, device identifiers.
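In the simplest case, Safe Harbor-style redaction can be expressed as a pass of pattern substitutions over training text. The sketch below covers only two of the 18 identifier classes (dates and SSNs) as illustrative regexes; a real de-identification pipeline must handle all 18, including free-text names and geographic data, which regexes alone cannot reliably catch.

```python
import re

# Two of the 18 Safe Harbor identifier classes, as illustrative patterns only.
REDACTIONS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),   # dates (one common format)
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),    # social security numbers
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before training."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

redact("Seen on 03/14/2024, SSN 123-45-6789")
# -> 'Seen on [DATE], SSN [SSN]'
```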
Rules: R3
Business Associate Agreements
AI vendors accessing PHI must have a signed BAA with the covered entity. We generate a checklist of AI sub-processors in your stack (LLM APIs, vector DBs, AI platforms) that require BAAs and flag those that disclaim HIPAA compliance.
Rules: BAA checklist
The BAA Problem in AI Stacks
Most AI stacks include multiple sub-processors that may handle ePHI. Each requires analysis and potentially a BAA. Many popular AI platforms explicitly disclaim HIPAA coverage.
LLM API providers
OpenAI, Anthropic, Google AI — BAAs are available only on specific enterprise tiers
Vector databases
Pinecone, Weaviate, Chroma — if RAG retrieves patient records, BAA required
AI observability tools
LangSmith, Helicone, Braintrust — if prompt logs contain PHI, BAA needed
Self-hosted models
Ollama, vLLM, Hugging Face Inference Endpoints — BAA burden shifts to deployment host
What the Assessment Produces
Technical Safeguard Gap Report
§ 164.312 access controls, audit controls, and transmission security assessed for your AI pipeline. Pass/fail per safeguard with evidence snippets.
BAA Sub-Processor Checklist
List of AI vendors in your stack that require Business Associate Agreements — with notes on which have HIPAA-eligible offerings and which disclaim coverage.
PHI Pattern Detection Report
Scan of AI code and pipelines for the 18 HIPAA identifiers — names, dates, geographic data, device identifiers — routed through AI without de-identification.
Audit Logging Assessment
Check for missing audit trails in AI inference calls, agent tool executions, and medical data retrieval operations. Prioritised remediation by violation severity.
Who This Assessment Is For
Assess Your AI Healthcare Compliance
~10 minutes. Covers HIPAA Security Rule technical safeguards for AI systems with BAA sub-processor checklist.
No signup required • Free • Not legal advice • BAA required separately