Getting started

Protect your AI applications in minutes with enterprise-grade security. PromptGuard sits between your app and LLM providers, automatically blocking threats before they reach your models. Add one line of code to secure every LLM call in your application:
import promptguard
promptguard.init()  # That's it — all LLM calls are now protected

Quick Start Guide

Follow our step-by-step setup guide and start protecting your AI applications today.

Core features

Everything you need to secure your AI applications at scale. Get enterprise-grade protection without rewriting your code.

Real-time Protection

Six-layer detection pipeline with ~150ms typical latency and 99.1% precision

Zero Setup Required

Works with any LLM provider — OpenAI, Anthropic, Google, Ollama, vLLM, Bedrock

Enterprise Ready

Organizations, RBAC, SSO (OIDC), audit logs, and SOC 2 / GDPR compliance

Developer Friendly

Auto-instrumentation, Guard API, SDKs, CLI, and MCP server for AI editors

Security capabilities

Comprehensive protection with 13+ specialized detectors and a six-layer detection architecture: normalization, regex patterns, ML ensemble, content safety classification, multi-turn intent drift analysis, and policy evaluation.

Prompt Injection & Jailbreak Defense

Multi-model ML ensemble plus LLM-powered jailbreak analysis across 7 attack categories

Content Safety & Multi-Turn Detection

LLM-based harmful intent classification and crescendo attack detection via semantic drift analysis
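PromptGuard's actual drift detector isn't documented on this page; the snippet below is only a minimal sketch of the general idea behind semantic drift analysis, assuming you already have an embedding vector per conversation turn (the function names and threshold are illustrative assumptions, not part of the product's API).

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def drift_scores(turn_embeddings):
    """Drift of each turn from the first turn's topic (0 = on-topic)."""
    anchor = turn_embeddings[0]
    return [1 - cosine(anchor, e) for e in turn_embeddings]

def flag_crescendo(turn_embeddings, threshold=0.5):
    # A crescendo attack steers the conversation gradually, so we flag
    # drift that is both monotonically increasing and large by the end.
    scores = drift_scores(turn_embeddings)
    rising = all(b >= a for a, b in zip(scores, scores[1:]))
    return rising and scores[-1] > threshold
```

A real detector would embed each turn with a sentence-encoder model and combine drift with per-turn intent classification; this sketch only shows the drift half.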

PII & Secret Detection

39+ PII entity types across 10+ countries with checksum validation, plus secret key detection with entropy analysis
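Checksum validation and entropy analysis are standard techniques; as a concrete illustration (not PromptGuard's implementation), the Luhn checksum distinguishes real card-like numbers from random digit runs, and Shannon entropy separates random API keys from ordinary words. The length and entropy thresholds below are illustrative assumptions.

```python
import math
from collections import Counter

def luhn_valid(number: str) -> bool:
    """Checksum validation for card-like numbers (Luhn algorithm)."""
    digits = [int(c) for c in number if c.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random secrets score high."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_secret(token: str, min_len=20, min_entropy=4.0) -> bool:
    # Long, high-entropy tokens are likely API keys rather than words.
    return len(token) >= min_len and shannon_entropy(token) >= min_entropy
```

Checksum validation is what lets a detector report a 16-digit string as a credit card number with high precision instead of flagging every 16-digit sequence.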

Agentic & Output Safety

Tool injection detection for agentic workflows (OpenClaw, LangChain, CrewAI), streaming output guardrails, and MCP server security

Custom Policies & LLM Guard

Natural-language business rules, topic filtering, entity blocklists, and YAML policy-as-code
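The policy schema isn't shown on this page; the fragment below is a hypothetical sketch of what YAML policy-as-code for these features could look like. Every field name here is an illustrative assumption, not PromptGuard's documented schema.

```yaml
# Hypothetical policy file -- field names are illustrative, not the real schema.
policy:
  name: customer-support-bot
  rules:
    - description: "Never discuss competitor pricing"   # natural-language rule
      action: block
    - topic_filter:
        deny: [medical-advice, legal-advice]
        action: redirect
    - entity_blocklist:
        entities: [CREDIT_CARD, SSN]
        action: redact
```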

Real-time Monitoring

OWASP LLM Top 10 mapping, threat intelligence, token-level explainability, and compliance reports

How it works

Every request flows through PromptGuard’s six-layer security pipeline:
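The individual layers aren't expanded on this page; the sketch below shows only the general shape of a layered pipeline that short-circuits on the first confident verdict, which is how cheap layers keep typical latency low. All names and patterns are illustrative assumptions, not PromptGuard's API.

```python
# Conceptual sketch of a layered detection pipeline (illustrative only).

def normalize(text: str) -> str:
    # Layer 1: canonicalize -- strip zero-width chars, lowercase, collapse whitespace.
    return " ".join(text.replace("\u200b", "").lower().split())

def regex_layer(text: str) -> bool:
    # Layer 2: cheap pattern matches for well-known injection phrasings.
    banned = ("ignore previous instructions", "disregard all prior")
    return any(p in text for p in banned)

def run_pipeline(prompt: str) -> dict:
    text = normalize(prompt)
    if regex_layer(text):
        # Short-circuit: no need to run the expensive ML layers.
        return {"action": "block", "layer": "regex"}
    # Layers 3-6 (ML ensemble, content safety, intent drift, policy
    # evaluation) would run here in a real system; stubbed as "allow".
    return {"action": "allow"}
```

Ordering layers from cheapest to most expensive means only ambiguous requests pay the full ML cost.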

F1 = 0.887

Across 2,369 samples from 7 peer-reviewed datasets

99.1% Precision

Near-zero false positives in production

< 200ms P95

Full ML detection with minimal latency overhead

Ready to get started?

Start protecting your AI applications

Sign up free — 10,000 requests/month with full ML detection on every tier.

Read the whitepaper

Deep dive into the detection architecture, benchmarks, and evaluation methodology.