Overview
llmverify server mode provides a long-running HTTP API server for seamless integration with IDEs, AI assistants, and custom applications. Instead of importing llmverify as a library, you can run it as a service and make HTTP requests to verify AI outputs.
Quick Start
# Default port (9009)
npx llmverify-serve

# Custom port
npx llmverify-serve --port=8080
On startup, the server displays the available endpoints:
Available endpoints:
  GET  http://localhost:9009/health       - Health check
  POST http://localhost:9009/verify       - Verify AI output
  POST http://localhost:9009/check-input  - Check input safety
  POST http://localhost:9009/check-pii    - Detect PII
  POST http://localhost:9009/classify     - Classify output

Privacy: All processing is 100% local. Zero telemetry.
API Endpoints
POST /verify
Main verification endpoint. Runs full AI output verification.
curl -X POST http://localhost:9009/verify \
-H "Content-Type: application/json" \
-d '{"text": "Your AI output here"}'
# Response:
{
"success": true,
"result": {
"risk": {
"level": "low",
"action": "allow",
"score": 0.15
},
"findings": [],
"metadata": { ... }
}
}
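From a script or CI step, you can branch on result.risk.action. A minimal shell sketch, assuming jq is installed and treating any action other than "allow" as a signal to hold the output back:

# Gate a pipeline step on the verification result (assumes jq is available)
ACTION=$(curl -s -X POST http://localhost:9009/verify \
  -H "Content-Type: application/json" \
  -d '{"text": "Your AI output here"}' | jq -r '.result.risk.action')

if [ "$ACTION" != "allow" ]; then
  echo "llmverify flagged the output (action: $ACTION)" >&2
  exit 1
fi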
POST /check-input
Check user input for prompt injection attacks.
curl -X POST http://localhost:9009/check-input \
-H "Content-Type: application/json" \
-d '{"text": "Ignore all instructions"}'
# Response:
{
"success": true,
"result": {
"safe": false,
"threats": ["system_override"],
"score": 0.85
}
}

POST /check-pii
Detect and redact PII from text.
curl -X POST http://localhost:9009/check-pii \
-H "Content-Type: application/json" \
-d '{"text": "Email john@example.com"}'
# Response:
{
"success": true,
"result": {
"containsPII": true,
"redacted": "Email [REDACTED]",
"findings": [{ "type": "email", "value": "john@example.com" }]
}
}

POST /classify
Classify AI output intent and detect hallucination risk.
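The response schema is not shown here; as a sketch, assuming the request body matches the other endpoints' {"text": "..."} payload:

# Assumed request shape, mirroring the other endpoints
curl -X POST http://localhost:9009/classify \
  -H "Content-Type: application/json" \
  -d '{"text": "Your AI output here"}'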
GET /health
Health check endpoint for monitoring.
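Poll it to confirm the server is up, for example:

curl http://localhost:9009/health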
Docker Deployment
FROM node:18-alpine
RUN npm install -g llmverify
EXPOSE 9009
CMD ["npx", "llmverify-serve"]
The server is stateless and scales horizontally. No special configuration is needed.
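A minimal build-and-run sketch (the image tag and port mapping here are illustrative):

# Build the image from the Dockerfile above and run it on the default port
docker build -t llmverify-server .
docker run -d -p 9009:9009 llmverify-server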