COMPLETE RULE LIBRARY

AI Security Rules
Deterministic Pattern Library

Every rule documented with examples.

Complete library of deterministic security rules with code examples, severity levels, and remediation guidance. No AI guessing—just facts.

15 Core Rules
5 Categories
6 Frameworks
100% Deterministic

Showing 15 of 15 core AI security rules. These are the deterministic patterns used by the scanner engine.

Rule engine version: v2026.1.0-ea (Early Access). Rules are continuously updated.

R1

User Input Reaches System Prompt

Prompt Injection

CRITICAL

Detects when user-controlled input flows into system or developer prompts without sanitization. This is the primary vector for prompt injection attacks.

Code Example:

// Detected Pattern
const systemPrompt = `You are a helpful assistant. User context: ${userInput}`;
await openai.chat.completions.create({
  messages: [{ role: 'system', content: systemPrompt }]
});
// CRITICAL: User input in system prompt

Remediation:

Never interpolate user input into system prompts. Use separate user message roles. Implement input validation and content filtering before AI processing.
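A minimal sketch of this remediation, assuming a chat-style messages API; `buildMessages` and `sanitizeInput` are illustrative helpers, not part of any SDK:

```javascript
// Keep the system prompt static; user data only ever enters a user-role message.
function sanitizeInput(input) {
  // Illustrative hygiene: collapse newlines (often used to fake role boundaries)
  // and cap length before the input reaches the model.
  return input.replace(/[\r\n]+/g, ' ').slice(0, 2000);
}

function buildMessages(userInput) {
  return [
    { role: 'system', content: 'You are a helpful assistant.' }, // no interpolation
    { role: 'user', content: sanitizeInput(userInput) },
  ];
}
```

The messages array is then passed to the completion call unchanged, so the system prompt never contains user-controlled text.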

Frameworks: SOC 2, EU AI Act, ISO 27001

R2

Model Output Triggers Privileged Action

Tool Abuse

CRITICAL

Detects when AI model output directly triggers privileged operations (database writes, API calls, file operations) without human approval or validation.

Code Example:

// Detected Pattern
const action = await model.generateAction(userRequest);
await executePrivilegedAction(action); // No validation!
// CRITICAL: AI output directly executes

Remediation:

Implement human-in-the-loop for privileged actions. Add output validation before execution. Use allowlists for permitted actions.
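One way to gate model-chosen actions, sketched with an illustrative allowlist and approval flag (the action names and shape are assumptions, not from any framework):

```javascript
const ALLOWED_ACTIONS = new Set(['create_ticket', 'send_summary']);
const REQUIRES_APPROVAL = new Set(['send_summary']);

function validateAction(action) {
  if (!ALLOWED_ACTIONS.has(action.name)) {
    throw new Error(`Action not on allowlist: ${action.name}`);
  }
  if (REQUIRES_APPROVAL.has(action.name) && !action.humanApproved) {
    throw new Error(`Action requires human approval: ${action.name}`);
  }
  return action; // only now safe to hand to the privileged executor
}
```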

Frameworks: SOC 2, EU AI Act

R3

Tool Arguments Without Validation

Tool Abuse

CRITICAL

AI function calling passes arguments to tools without schema validation. Attackers can manipulate AI to pass malicious arguments.

Code Example:

// Detected Pattern
const toolCall = response.tool_calls[0];
await tools[toolCall.name](toolCall.arguments);
// CRITICAL: No argument validation

Remediation:

Validate all tool arguments against strict schemas (Zod, JSON Schema). Implement argument sanitization. Use typed function signatures.
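A dependency-free sketch of per-tool argument schemas; in practice a library such as Zod or a JSON Schema validator does this, and `get_weather` is a made-up tool:

```javascript
const toolSchemas = {
  get_weather: { city: 'string', days: 'number' },
};

function validateToolArgs(name, args) {
  const schema = toolSchemas[name];
  if (!schema) throw new Error(`Unknown tool: ${name}`);
  for (const [key, type] of Object.entries(schema)) {
    if (typeof args[key] !== type) {
      throw new Error(`Argument ${key} must be ${type}`);
    }
  }
  for (const key of Object.keys(args)) {
    // Reject extra keys the model may have been manipulated into adding.
    if (!(key in schema)) throw new Error(`Unexpected argument: ${key}`);
  }
  return args;
}
```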

Frameworks: SOC 2, EU AI Act

R4

RAG Poisoning / Indirect Injection

Prompt Injection

HIGH

Retrieved documents from RAG systems flow to AI context without content boundary markers. Attackers can poison documents with hidden instructions.

Code Example:

// Detected Pattern
const docs = await vectorStore.similaritySearch(query);
const context = docs.map(d => d.content).join('\n');
// HIGH: No boundary markers on retrieved content

Remediation:

Add clear boundary markers around retrieved content. Implement content filtering on RAG results. Use separate context windows for trusted vs untrusted content.
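A sketch of boundary markers around retrieved chunks. The tag format is an assumption (any unambiguous delimiter works), and the system prompt must tell the model that content inside the markers is data, not instructions:

```javascript
function wrapRetrieved(docs) {
  return docs
    .map((d, i) => `<retrieved-doc index="${i}">\n${d.content}\n</retrieved-doc>`)
    .join('\n');
}
```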

Frameworks: EU AI Act, SOC 2

R5

Missing Authentication on AI Endpoint

Access Control

HIGH

AI API endpoints lack authentication middleware. Unauthenticated users can access AI features, leading to abuse and cost amplification.

Code Example:

// Detected Pattern
export async function POST(req: Request) {
  const { prompt } = await req.json();
  // HIGH: No auth check before AI call
  return await callAI(prompt);
}

Remediation:

Add authentication middleware to all AI endpoints. Verify session/token before processing. Implement rate limiting per user.
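A framework-agnostic sketch with the session check injected as a dependency; `verifySession` and `callAI` are placeholders for your auth layer and AI client:

```javascript
async function handleAIPost(req, { verifySession, callAI }) {
  // Authenticate before any model call is made or any tokens are spent.
  const session = await verifySession(req.headers.authorization);
  if (!session) return { status: 401, body: 'Unauthorized' };
  const { prompt } = req.body;
  return { status: 200, body: await callAI(prompt, session.userId) };
}
```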

Frameworks: SOC 2, ISO 27001, HIPAA

R6

Missing Tenant Scoping on AI Data

Access Control

HIGH

AI-accessible data queries lack tenant isolation. One user's AI queries could access another tenant's data.

Code Example:

// Detected Pattern
const data = await db.query(aiGeneratedQuery);
// HIGH: No tenant filter on AI query

Remediation:

Always include tenant ID in data queries. Use row-level security. Validate AI-generated queries include proper scoping.
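A sketch of enforcing tenant scope at a query-builder boundary; the `where`-clause shape is illustrative:

```javascript
function scopeToTenant(query, tenantId) {
  if (!tenantId) throw new Error('tenantId is required for AI data access');
  // The tenant filter is applied last so AI-generated conditions cannot override it.
  return { ...query, where: { ...(query.where || {}), tenantId } };
}
```

Row-level security in the database provides defense-in-depth if this application-level check is ever bypassed.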

Frameworks: SOC 2, GDPR, HIPAA

R6-Agent

Agent Loops Without Guardrails

Stability

HIGH

Autonomous AI agent execution lacks termination conditions, bounded loops, or human-in-the-loop controls. Can lead to runaway execution.

Code Example:

// Detected Pattern
while (!task.complete) {
  await agent.executeStep();
  // HIGH: No max iterations or timeout
}

Remediation:

Implement max iteration limits. Add timeout controls. Require human approval for long-running agents. Log all agent actions.
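A bounded-loop sketch; `agent.executeStep` and the task shape are assumptions standing in for your agent framework:

```javascript
async function runAgent(agent, task, { maxSteps = 10, timeoutMs = 30000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (let step = 1; step <= maxSteps; step++) {
    if (Date.now() > deadline) throw new Error('agent run timed out');
    await agent.executeStep(task); // each step should also be written to an audit log
    if (task.complete) return { steps: step };
  }
  throw new Error(`agent exceeded ${maxSteps} steps without completing`);
}
```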

Frameworks: EU AI Act, SOC 2

R7

Secrets Flow to AI or Logs

Data Leakage

HIGH

API keys, passwords, or tokens flow into AI prompts or are logged. Secrets could be leaked through AI responses or log files.

Code Example:

// Detected Pattern
const prompt = `Connect to DB with password: ${dbPassword}`;
console.log('Processing:', prompt);
// HIGH: Secret in prompt and logs

Remediation:

Never include secrets in AI prompts. Implement secret detection in logging. Use secret managers with runtime injection.
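A minimal redaction pass before text reaches a prompt or a log line; the two patterns are illustrative and should be backed by a real secret scanner:

```javascript
const SECRET_PATTERNS = [
  /sk-[A-Za-z0-9]{20,}/g,     // OpenAI-style API keys (illustrative pattern)
  /password\s*[:=]\s*\S+/gi,  // inline password assignments
];

function redactSecrets(text) {
  return SECRET_PATTERNS.reduce((t, re) => t.replace(re, '[REDACTED]'), text);
}
```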

Frameworks: SOC 2, ISO 27001, PCI DSS

R8

LLM-Chosen URL / SSRF

Data Leakage

HIGH

AI model output is used to construct URLs for external requests. Attackers can manipulate AI to access internal services (SSRF).

Code Example:

// Detected Pattern
const url = await model.generateUrl(userRequest);
const response = await fetch(url);
// HIGH: AI-controlled URL fetch

Remediation:

Implement URL allowlists. Validate AI-generated URLs against permitted domains. Block internal network access from AI-triggered requests.
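An allowlist check using the WHATWG `URL` parser built into Node; the permitted hosts are placeholders:

```javascript
const ALLOWED_HOSTS = new Set(['api.example.com', 'docs.example.com']);

function validateAIUrl(raw) {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== 'https:') throw new Error(`Blocked protocol: ${url.protocol}`);
  if (!ALLOWED_HOSTS.has(url.hostname)) throw new Error(`Host not allowlisted: ${url.hostname}`);
  return url.toString();
}
```

Note that hostname checks alone do not stop DNS rebinding; hardened setups also resolve the name and block internal IP ranges at fetch time.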

Frameworks: SOC 2, ISO 27001

R9

Non-Deterministic AI on Privileged Path

Stability

MEDIUM

AI operations on security-sensitive paths use high temperature or variable configurations. Non-deterministic outputs make compliance validation difficult.

Code Example:

// Detected Pattern
await openai.chat.completions.create({
  model: 'gpt-4',
  temperature: 0.9, // HIGH temperature
  messages: securityDecisionPrompt
});

Remediation:

Use temperature=0 for security-critical AI operations. Set fixed seeds for reproducibility. Document AI configuration for audit.
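A sketch of pinning the configuration for security-critical calls; the model name and seed value are illustrative:

```javascript
// Frozen so runtime code cannot drift the settings used for audited decisions.
const SECURITY_AI_CONFIG = Object.freeze({
  model: 'gpt-4',
  temperature: 0, // deterministic sampling
  seed: 42,       // fixed seed for reproducibility, where the API supports it
});

function securityCompletionParams(messages) {
  return { ...SECURITY_AI_CONFIG, messages };
}
```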

Frameworks: SOC 2, EU AI Act

R10

Privileged AI Operation Lacks Authorization

Access Control

HIGH

AI operations that perform privileged actions do not verify that the user has the required permissions. Attackers can escalate privileges through AI.

Code Example:

// Detected Pattern
async function aiAdminAction(userId, action) {
  await executeAdminAction(action);
  // HIGH: No permission check
}

Remediation:

Implement RBAC checks before AI operations. Verify user permissions match required action level. Audit all privileged AI actions.
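An RBAC gate sketched with a static role-to-permission map; the roles, actions, and helper names are placeholders:

```javascript
const ROLE_PERMISSIONS = {
  admin: ['view_user', 'delete_user'],
  viewer: ['view_user'],
};

function authorize(role, action) {
  const allowed = ROLE_PERMISSIONS[role] || [];
  if (!allowed.includes(action)) {
    throw new Error(`Role ${role} is not permitted to ${action}`);
  }
  // In production, also write an audit record here.
}

async function aiAdminAction(user, action, execute) {
  authorize(user.role, action); // check the human's permissions, not the AI's
  return execute(action);
}
```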

Frameworks: SOC 2, ISO 27001

R11

AI Output Flows to Dangerous Sink

Stability

HIGH

AI-generated content flows to code execution, database writes, or external APIs without output validation. Enables XSS, SQL injection, command injection.

Code Example:

// Detected Pattern
const html = await model.generateHTML(userRequest);
element.innerHTML = html;
// HIGH: AI output to innerHTML

Remediation:

Validate and sanitize all AI output before use. Use schema validation for structured output. Implement content security policies.
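The narrowest fix for the HTML sink shown above is escaping before insertion. This sketch covers only the five characters HTML escaping requires; a sanitizer library (e.g. DOMPurify) is needed if AI-generated markup must be preserved:

```javascript
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
  }[c]));
}
// element.textContent = aiOutput;            // safest: never parse AI output as HTML
// element.innerHTML = escapeHtml(aiOutput);  // if an HTML sink is unavoidable
```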

Frameworks: SOC 2, EU AI Act

R12

AI Endpoint Lacks Rate Limiting

Stability

MEDIUM

AI endpoints lack rate-limiting protection. Attackers can cause resource exhaustion or drive up provider costs (denial-of-wallet attacks).

Code Example:

// Detected Pattern
export async function POST(req: Request) {
  // No rate limit check
  return await expensiveAICall(req);
}

Remediation:

Implement rate limiting per user/IP. Add request quotas. Monitor for abuse patterns. Implement circuit breakers.
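A fixed-window, in-memory limiter sketch. This is single-process only; production deployments typically back the counter with Redis or enforce limits at an API gateway:

```javascript
function createRateLimiter({ limit, windowMs }) {
  const windows = new Map(); // key -> { start, count }
  return function allow(key) {
    const now = Date.now();
    const w = windows.get(key);
    if (!w || now - w.start >= windowMs) {
      windows.set(key, { start: now, count: 1 });
      return true;
    }
    w.count += 1;
    return w.count <= limit;
  };
}
```

Call `allow(userId)` before the expensive AI call and return HTTP 429 when it yields `false`.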

Frameworks: SOC 2

R13

No Context Window Overflow Protection

Stability

MEDIUM

User input is not truncated or validated for length before AI context. Attackers can exhaust context window, truncating important instructions.

Code Example:

// Detected Pattern
const messages = [
  { role: 'system', content: systemPrompt },
  { role: 'user', content: userInput } // Unbounded!
];

Remediation:

Validate input length before AI context. Implement token counting. Truncate user input to safe limits. Reserve space for system prompts.
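A sketch using a characters-per-token heuristic; the constants are assumptions, and a real tokenizer (e.g. tiktoken) gives exact counts:

```javascript
const MAX_USER_TOKENS = 1000;     // budget reserved for user input
const APPROX_CHARS_PER_TOKEN = 4; // rough heuristic for English text

function truncateUserInput(input) {
  const maxChars = MAX_USER_TOKENS * APPROX_CHARS_PER_TOKEN;
  return input.length > maxChars ? input.slice(0, maxChars) : input;
}
```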

Frameworks: EU AI Act

R14

User-Controlled Model Selection

Stability

LOW

User input can influence which AI model is used. Attackers can switch to cheaper/weaker models without safety filters.

Code Example:

// Detected Pattern
const model = req.query.model || 'gpt-4';
await openai.chat.completions.create({ model });
// LOW: User controls model selection

Remediation:

Use fixed model selection or strict allowlist. Do not expose model choice to users. Document model selection rationale.
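A sketch mapping user-facing aliases to a fixed allowlist, so raw model IDs never come from the request; the alias and model names are placeholders:

```javascript
const MODEL_ALLOWLIST = {
  fast: 'gpt-4o-mini',
  accurate: 'gpt-4o',
};

function resolveModel(alias = 'accurate') {
  const model = MODEL_ALLOWLIST[alias];
  if (!model) throw new Error(`Unknown model alias: ${alias}`);
  return model;
}
```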

Frameworks: EU AI Act

Scan Your AI Code

See all 15 rules in action on your codebase.