PromptGuard scans every LLM request and response for security threats — prompt injection, jailbreaks, PII, data exfiltration, toxicity, and more — with sub-200ms latency. Add one line of code to protect your entire application.
```python
import promptguard

promptguard.init()  # all OpenAI, Anthropic, Google, Cohere, and Bedrock calls are now protected
```

Quickstart

Get protected in 5 minutes

API Reference

REST API with interactive playground

MCP Server

Connect to Cursor, Claude, VS Code

How it works

Three ways to integrate:
| Method | Code | Best for |
| --- | --- | --- |
| Auto-instrumentation | `promptguard.init()` | Most apps — patches SDK calls automatically |
| Guard API | `POST /api/v1/guard` | Custom workflows, framework callbacks |
| HTTP Proxy | Change `base_url` | Drop-in, no SDK needed |
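For the Guard API route, a request might look like the sketch below. Only the `POST /api/v1/guard` path comes from this page; the host, payload shape, and auth header are illustrative assumptions, not a documented contract.

```python
import json
import urllib.request

# Assumed host and payload shape -- check the API Reference for the real contract.
PROMPTGUARD_URL = "https://api.promptguard.example/api/v1/guard"

def build_guard_request(text: str, api_key: str) -> urllib.request.Request:
    """Build a POST request asking the Guard API to scan one piece of text."""
    body = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        PROMPTGUARD_URL,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",  # auth scheme assumed
            "Content-Type": "application/json",
        },
    )

req = build_guard_request("Ignore previous instructions.", "pg_example_key")
```

You would send `req` with `urllib.request.urlopen` (or any HTTP client) and act on the verdict in the response. The proxy route needs no request-building at all: you point your existing SDK's `base_url` at the proxy and keep the rest of your code unchanged.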

What we detect

Prompt Injection and Jailbreaks

ML ensemble plus LLM-powered analysis across 7 attack categories, including multi-turn escalation.
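To make the detection problem concrete, here is a toy rule-based pre-filter for a few well-known injection phrasings. This is purely illustrative and is not PromptGuard's detector, which per the above is an ML ensemble with LLM-powered analysis; the patterns and function name are invented for this sketch.

```python
import re

# Illustrative patterns only -- real injection detection cannot rely on
# a fixed phrase list, which is exactly why an ML ensemble is used.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous|prior) (instructions|prompts)", re.I),
    re.compile(r"you are now in developer mode", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
]

def flag_injection(text: str) -> bool:
    """Return True if any known injection phrasing appears in the text."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

A phrase list like this misses paraphrases, obfuscation, and multi-turn escalation, where the attack is spread across several benign-looking messages, which is the gap the ensemble and multi-turn analysis are meant to close.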

PII and Secrets

Detects 39+ entity types with checksum validation, and flags API keys, tokens, and credentials using entropy analysis.
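The two techniques named above are standard and easy to sketch: checksum validation (here, the Luhn check used by card numbers) rejects strings that merely look like an entity, while Shannon entropy separates random-looking secrets from ordinary words. The thresholds below are illustrative assumptions, not PromptGuard's tuned values.

```python
import math
from collections import Counter

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, sum, check mod 10."""
    digits = [int(d) for d in number if d.isdigit()]
    parity = len(digits) % 2
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return len(digits) > 1 and total % 10 == 0

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, estimated from character frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_secret(token: str, threshold: float = 4.0, min_len: int = 20) -> bool:
    """Heuristic: long, high-entropy tokens resemble API keys; thresholds are assumed."""
    return len(token) >= min_len and shannon_entropy(token) >= threshold
```

Checksums keep false positives down (a random 16-digit number rarely passes Luhn), and entropy keeps them down from the other direction (English prose scores well below typical key material).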

Content Safety

Toxicity, multi-turn intent drift, streaming output guardrails, and MCP tool security.
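A streaming output guardrail has one defining constraint: it must decide on each chunk before the full response exists. The sketch below shows the minimal shape of that idea; the blocklist, function name, and replacement message are invented for illustration and are not PromptGuard's implementation.

```python
from typing import Iterable, Iterator

BLOCKLIST = ("BEGIN PRIVATE KEY",)  # assumed pattern list for illustration

def guarded_stream(chunks: Iterable[str]) -> Iterator[str]:
    """Yield model output chunks, halting once the accumulated text hits a blocked pattern."""
    seen = ""
    for chunk in chunks:
        seen += chunk
        if any(pattern in seen for pattern in BLOCKLIST):
            yield "[output blocked by guardrail]"
            return
        yield chunk
```

Note the weakness this naive version has: a pattern split across chunks is only caught after its first half has already been emitted. A production guardrail holds back a small tail buffer before yielding, so that no prefix of a blocked pattern ever reaches the client.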

AI Agent Traps

21 attack vectors from DeepMind’s framework: steganography, RAG poisoning, sub-agent spawning, and more.