PromptGuard scans every LLM request and response for security threats — prompt injection, jailbreaks, PII, data exfiltration, toxicity, and more — with sub-200ms latency. Add one line of code to protect your entire application.Documentation Index
Fetch the complete documentation index at: https://docs.promptguard.co/llms.txt
Use this file to discover all available pages before exploring further.
- Tab Title
- Tab Title
import promptguard
promptguard.init() # All OpenAI, Anthropic, Google, Cohere, Bedrock calls are now protected
Quickstart
Get protected in 5 minutes
API Reference
REST API with interactive playground
MCP Server
Connect to Cursor, Claude, VS Code
How it works
Three ways to integrate:| Method | Code | Best for |
|---|---|---|
| Auto-instrumentation | promptguard.init() | Most apps — patches SDK calls automatically |
| Guard API | POST /api/v1/guard | Custom workflows, framework callbacks |
| HTTP Proxy | Change base_url | Drop-in, no SDK needed |
What we detect
Prompt Injection and Jailbreaks
ML ensemble plus LLM-powered analysis across 7 attack categories, including multi-turn escalation.
PII and Secrets
39+ entity types with checksum validation. API keys, tokens, and credentials with entropy analysis.
Content Safety
Toxicity, multi-turn intent drift, streaming output guardrails, and MCP tool security.
AI Agent Traps
21 attack vectors from DeepMind’s framework: steganography, RAG poisoning, sub-agent spawning, and more.