By the end of this guide, you’ll have PromptGuard protecting your AI applications against prompt injection, data leaks, and other security threats.
What you’ll accomplish
- Set up PromptGuard as a drop-in replacement for OpenAI
- Protect your first AI request
- View security analytics in the dashboard
- Configure basic security policies
Prerequisites
- An existing OpenAI API integration
- 5 minutes of your time
Step 1: Create your PromptGuard account
- Sign up at app.promptguard.co
- Get your project - A “Production” project is automatically created for you
- Get your API key:
- Navigate to API Keys in your project dashboard
- Click “Create API Key”
- Give it a name (e.g., “Production API”)
- Copy the key (starts with
pg_live_)
What are Projects? Projects help you organize different environments or applications (e.g., “Production”, “Staging”, “Development”). Each project has its own API keys, usage tracking, and security settings. You can create multiple projects from the Projects page.
Step 2: Configure your environment
Add both your PromptGuard API key and your LLM provider API key to your environment variables:.env
Important: PromptGuard uses a pass-through model. You provide your own OpenAI/Anthropic/Groq API keys, and PromptGuard secures requests before forwarding them. You’re only charged for PromptGuard’s security services, not for LLM usage.
Step 3: Update your code
PromptGuard is a drop-in replacement for OpenAI and Anthropic. Just change the base URL and API key:What to change:
- Add
baseURLparameter: Point tohttps://api.promptguard.co/api/v1 - Use PromptGuard API key: Change from
OPENAI_API_KEYtoPROMPTGUARD_API_KEY - Keep your LLM provider key: Still use
OPENAI_API_KEYorANTHROPIC_API_KEYenvironment variable - Keep everything else the same: Your existing code, models, and parameters work unchanged
How dual-auth works: The SDK sends your PromptGuard key as
X-API-Key header and your LLM provider key in Authorization header. PromptGuard verifies your subscription, runs security checks, then forwards your LLM key to OpenAI/Anthropic. This pass-through model means you only pay PromptGuard for security—LLM costs go directly to your provider.OpenAI Integration
Instructions: Find where you initialize the OpenAI client in your code and add thebaseURL parameter:
- Node.js
- Python
- cURL
Anthropic Integration
Instructions: Find where you initialize the Anthropic client in your code and add thebaseURL parameter:
- Node.js
- Python
- cURL
Step 4: Test your integration
Now run your application as normal. PromptGuard will automatically protect all requests. To verify it’s working, try making a request with a potentially malicious prompt:Step 5: View your security dashboard
- Open app.promptguard.co
- Navigate to your project dashboard
- See real-time security events and analytics
🚀 Dashboard Access: Visit app.promptguard.co to view your project dashboard with real-time security events and analytics.
Step 6: Configure security policies
PromptGuard comes with smart defaults, but you can customize protection:- Go to Projects > [Your Project] > Overview in your dashboard
- Choose from use-case-specific presets:
- Default (recommended): Balanced security for general AI applications
- Support Bot: Optimized for customer support chatbots
- Code Assistant: Enhanced protection for coding tools
- RAG System: Maximum security for document-based AI
- Data Analysis: Strict PII protection for data processing
- Creative Writing: Nuanced content filtering for creative applications
What’s happening under the hood?
Every request now flows through PromptGuard’s security engine: PromptGuard automatically protects against:- Prompt injection attacks (“ignore previous instructions…”)
- Data exfiltration attempts (trying to extract system prompts)
- PII leakage (credit cards, SSNs, emails automatically redacted)
- Toxic content generation
- Jailbreak attempts
Performance impact
- Latency: <40ms p95 overhead
- Availability: 99.9% uptime SLA
- Reliability: Fails open (requests proceed if PromptGuard is down)
Next steps
Integration Guides
Detailed setup for Node.js, Python, React, and more
Security Policies
Configure advanced protection for your use case
Migration Guide
Migrate existing OpenAI integrations step-by-step
Monitoring
Set up alerts and track security metrics
Need help?
Email Support
Reach out to our support team
Documentation
Explore our complete API documentation
Troubleshooting
Common issues and solutions
Examples
See real-world integration examples
Community Discord and personalized demos coming soon! For now, reach out via email for any questions.