This guide walks you through making your first API request to PromptGuard. You’ll send a chat completion request using OpenAI or Anthropic models and see how PromptGuard protects your AI application.
Prerequisites
- A PromptGuard API key (get one here)
- A terminal or code editor
- 2 minutes
Basic Chat Completion
Let’s start with a simple “Hello, world!” request:Expected Response
Success! If you see a response like this, PromptGuard is working correctly and your AI requests are now protected.
Testing Security Protection
PromptGuard automatically protects against threats. Try a potentially malicious prompt to see it in action:Response Headers
PromptGuard adds helpful headers to every response:| Header | Description |
|---|---|
X-PromptGuard-Event-ID | Unique identifier for tracking this request |
X-PromptGuard-Decision | allow, block, or redact |
Supported Models
PromptGuard supports all major LLM providers. See Supported LLM Providers for the complete list of models.PromptGuard automatically forwards your requests to the appropriate provider using your API keys. You don’t need to change model names or parameters.
Next Steps
Integration Guides
Language and framework-specific guides
Security Configuration
Customize protection for your use case
Monitoring & Alerts
Set up notifications and tracking
Migration Guide
Move existing applications to PromptGuard