Getting started
PromptGuard sits between your app and LLM providers, scanning every request and response for prompt injection, data leaks, jailbreaks, and other threats before they reach your models. Add one line of code to secure every LLM call in your application:Quick Start (5 min)
Add PromptGuard to an existing app in 5 minutes.
Tutorial (20 min)
Build a protected chatbot from scratch with step-by-step guidance.
Core features
Detection, redaction, and policy enforcement for AI applications. Add protection without rewriting your code.Real-time Protection
Six-layer detection pipeline with ~150ms typical latency. Covers prompt injection, jailbreaks, PII, toxicity, and more.
Zero Setup Required
Works with any LLM provider — OpenAI, Anthropic, Google, Ollama, vLLM, Bedrock
Team & Compliance Features
Organizations, RBAC, SSO (OIDC), audit logs, and SOC 2 / GDPR compliance
Developer Friendly
Auto-instrumentation, Guard API, SDKs, CLI, and MCP server for AI editors
Security capabilities
14 specialized detectors across a six-layer detection architecture: normalization, regex patterns, ML ensemble, content safety classification, multi-turn intent drift analysis, and policy evaluation.Prompt Injection & Jailbreak Defense
Multi-model ML ensemble plus LLM-powered jailbreak analysis across 7 attack categories
Content Safety & Multi-Turn Detection
LLM-based harmful intent classification and crescendo attack detection via semantic drift analysis
PII & Secret Detection
39+ PII entity types across 10+ countries with checksum validation, plus secret key detection with entropy analysis
Agentic & Output Safety
Tool injection detection for agentic workflows (OpenClaw, LangChain, CrewAI), streaming output guardrails, and MCP server security
Custom Policies & LLM Guard
Natural-language business rules, topic filtering, entity blocklists, and YAML policy-as-code
Real-time Monitoring
OWASP LLM Top 10 mapping, threat intelligence, token-level explainability, and compliance reports
How it works
Every request flows through PromptGuard’s six-layer security pipeline:F1 = 0.887
Evaluated across 2,369 samples from 7 peer-reviewed datasets
99.1% Precision
Low false-positive rate validated in production workloads
< 200ms P95
Full ML detection pipeline latency overhead
Next steps
Quick Start
Add PromptGuard to an existing application in 5 minutes.
Detection Architecture
Read the whitepaper on the detection pipeline, benchmarks, and evaluation methodology.