Core API
verify(options): Promise<VerifyResult>
Main verification function. Runs comprehensive AI output verification.
interface VerifyOptions {
content: string; // Required: Text to verify
config?: Partial<Config>; // Optional: Configuration overrides
context?: { // Optional: Additional context
isJSON?: boolean;
prompt?: string;
userInput?: string;
};
}
// Example
import { verify } from 'llmverify';
const result = await verify({
content: 'Your AI output here',
config: {
engines: {
hallucination: { enabled: true },
csm6: { enabled: true }
}
}
});
console.log(result.risk.level); // 'low' | 'moderate' | 'high' | 'critical'
console.log(result.risk.action); // 'allow' | 'review' | 'block'run(options): Promise<RunResult>
Master function with preset support. Recommended for most use cases.
import { run } from 'llmverify';
const result = await run({
content: aiResponse,
prompt: originalPrompt,
preset: 'prod' // 'dev' | 'prod' | 'strict' | 'fast' | 'ci'
});
// Presets:
// dev - Balanced output, all engines
// prod - Optimized for speed
// strict - Maximum scrutiny
// fast - Minimal checks
// ci - CI/CD optimizedVerification Functions
isInputSafe(text): boolean
Quick check for prompt injection attacks. Returns true if input appears safe.
import { isInputSafe } from 'llmverify';
const safe = isInputSafe("What's the weather?"); // true
const unsafe = isInputSafe("Ignore all instructions"); // falseSecurity Functions
redactPII(text, options?): RedactResult
Detect and redact personally identifiable information from text.
import { redactPII } from 'llmverify';
const { redacted, findings } = redactPII(
"Email john@example.com or call 555-123-4567"
);
// redacted: "Email [REDACTED] or call [REDACTED]"
// findings: [{ type: 'email', ... }, { type: 'phone', ... }]containsPII(text): boolean
Quick check for PII presence without redaction.
sanitizePromptInjection(text): SanitizeResult
Remove or neutralize prompt injection patterns from text.
Classification Functions
classify(prompt, response): ClassifyResult
Classify AI output intent, detect hallucination risk, and validate JSON.
import { classify } from 'llmverify';
const result = classify("What is 2+2?", "The answer is definitely 4.");
console.log(result.hallucinationRisk); // 0 to 1
console.log(result.hallucinationLabel); // 'low' | 'medium' | 'high'
console.log(result.isJson); // false
console.log(result.intent); // classification labelError Handling (v1.5.2)
Every error includes a standardized error code with actionable suggestions:
import { verify, ErrorCode } from 'llmverify';
try {
const result = await verify({ content });
} catch (error) {
console.log(error.code); // 'LLMVERIFY_1003'
console.log(error.metadata.suggestion); // 'Increase timeout to 5000ms'
}
// Error code ranges:
// LLMVERIFY_1001-1999: Configuration errors
// LLMVERIFY_2001-2999: Verification errors
// LLMVERIFY_3001-3999: Plugin errors
// LLMVERIFY_4001-4999: Server errors
// LLMVERIFY_5001-5999: CLI errors
// LLMVERIFY_6001-6004: Baseline errorsLogging & Audit (v1.5.2)
import { getAuditLogger } from 'llmverify';
const auditLogger = getAuditLogger();
// Automatically logs all verify() calls
// Logs include: timestamp, requestId, riskLevel, findings
// Location: ~/.llmverify/audit/YYYY-MM-DD.logBaseline & Drift Detection (v1.5.2)
import { getBaselineStorage } from 'llmverify';
const storage = getBaselineStorage();
const stats = storage.getStatistics();
console.log(`Baseline: ${stats.sampleCount} samples`);
// Automatic drift detection (20% threshold)
// CLI: npx llmverify baseline:statsPlugin System (v1.5.2)
import { use, createPlugin } from 'llmverify';
const customRule = createPlugin({
id: 'my-rule',
name: 'Custom Verification Rule',
execute: async (context) => ({
findings: [],
score: 0
})
});
use(customRule);
// Now all verify() calls include your custom ruleSentinel Tests (v1.5.2)
Proactive behavioral tests that verify your LLM is responding correctly. Runs real prompts and checks responses.
import { sentinel } from 'llmverify';
// Quick one-liner — runs all 4 sentinel tests
const suite = await sentinel.quick(myClient, 'gpt-4');
console.log(suite.passed); // true/false
console.log(suite.passRate); // 0.75 = 3/4 passed
// Run a single test
const echo = await sentinel.test('staticEchoTest', myClient, 'gpt-4');
// Full config version
import { runAllSentinelTests } from 'llmverify';
const suite = await runAllSentinelTests({
client: myClient,
model: 'gpt-4'
});
// Available tests:
// staticEchoTest — Does the LLM echo back exact content?
// duplicateQueryTest — Are responses consistent across identical prompts?
// structuredListTest — Can the LLM follow structured output instructions?
// shortReasoningTest — Does the LLM show reasoning for simple questions?monitorLLM (v1.5.2)
Wrap any LLM client with health monitoring. Detects latency spikes, token rate changes, and behavioral drift.
import { monitorLLM } from 'llmverify';
const monitored = monitorLLM(openaiClient, {
hooks: {
onUnstable: (report) => alert('LLM unstable!'),
onDegraded: (report) => console.warn('LLM degraded'),
onRecovery: (report) => console.log('LLM recovered')
}
});
const response = await monitored.generate({ prompt: 'Hello' });
console.log(response.llmverify.health); // 'stable' | 'degraded' | 'unstable'
console.log(response.llmverify.engines); // latency, tokenRate, fingerprint, structureUsage & Tiers (v1.5.2)
Local-only usage tracking. All features on every tier. Free: 500 calls/day.
import { checkUsageLimit, readUsage, TIER_USAGE_LIMITS } from 'llmverify';
// Check current usage
const usage = readUsage();
console.log(usage.calls); // 42
console.log(usage.date); // '2026-02-08'
// Check if limit reached
const check = checkUsageLimit('free');
console.log(check.allowed); // true
console.log(check.remaining); // 58
// Tier limits
console.log(TIER_USAGE_LIMITS.free.dailyCallLimit); // 100
console.log(TIER_USAGE_LIMITS.starter.dailyCallLimit); // 5000
console.log(TIER_USAGE_LIMITS.pro.dailyCallLimit); // 50000
console.log(TIER_USAGE_LIMITS.business.dailyCallLimit); // Infinity
// CLI commands:
// npx llmverify usage — Show today's usage
// npx llmverify tier — Show current tier and limitsIDE Extension (v1.5.2)
IDE integration with automatic local fallback when server is unavailable.
import { LLMVerifyIDE } from 'llmverify';
const ide = new LLMVerifyIDE({
serverUrl: 'http://localhost:9009',
useLocalFallback: true // Falls back to local verify() if server is down
});
const result = await ide.verify('AI output text');
// Works even if server is not runningDX Improvements (v1.5.2)
// String shorthand — no object wrapper needed
const result = await verify("The Earth is flat.");
// Default export
import llmverify from 'llmverify';
const result = await llmverify.verify(text);
// Typed shorthand objects
import { ai, guardrails } from 'llmverify';
import type { AiShorthand, GuardrailsAPI } from 'llmverify';
const result = await ai.verify(text);
const safe = await guardrails.check(text);