By the end of this guide, you’ll have PromptGuard protecting your AI applications against prompt injection, data leaks, and other security threats.
What you’ll accomplish
- Secure your AI application with one line of code
- Protect your first AI request
- View security analytics in the dashboard
- Configure basic security policies
Prerequisites
- An existing AI/LLM integration (OpenAI, Anthropic, Google, etc.)
- 5 minutes of your time
Step 1: Create your PromptGuard account
- Sign up at app.promptguard.co
- Find your project - a “Production” project is created for you automatically
- Get your API key:
- Navigate to API Keys in your project dashboard
- Click “Create API Key”
- Give it a name (e.g., “Production API”)
- Copy the key (store it securely - it won’t be shown again)
What are Projects? Projects help you organize different environments or applications (e.g., “Production”, “Staging”, “Development”). Each project has its own API keys, usage tracking, and security settings. You can create multiple projects from the Projects page.
Step 2: Configure your environment
Set your PromptGuard API key as an environment variable.
Important: PromptGuard uses a pass-through model. You provide your own LLM provider API keys, and PromptGuard only charges for security services; LLM costs go directly to your provider.
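For example, in a POSIX shell (the variable name `PROMPTGUARD_API_KEY` is an assumption; use whatever name your dashboard or SDK docs specify):

```shell
# Replace the placeholder with the key you copied in Step 1.
# The variable name is an assumption -- check the SDK docs for the exact name.
export PROMPTGUARD_API_KEY="your-api-key-here"
```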
Step 3: Add PromptGuard to your code
Choose the integration method that works best for you:
Option A: Auto-Instrumentation (Recommended)
Add one line to your application startup. All LLM calls are secured automatically, and it works with any framework (LangChain, CrewAI, Vercel AI SDK, etc.). SDK snippets are available for Python and Node.js.
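The SDK's exact entry point isn't shown in this guide, but conceptually auto-instrumentation wraps every outgoing LLM call with a security check before the prompt leaves your process. A stdlib-only sketch of the idea (this is not the real PromptGuard SDK):

```python
import functools

def guard(check):
    """Wrap an LLM-call function so every prompt is screened before it is sent."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt, *args, **kwargs):
            check(prompt)  # raise (or redact) before the provider ever sees it
            return fn(prompt, *args, **kwargs)
        return wrapper
    return decorator

def naive_injection_check(prompt: str) -> None:
    # Toy heuristic for illustration only; the real engine is far more sophisticated.
    if "ignore previous instructions" in prompt.lower():
        raise ValueError("blocked: possible prompt injection")

@guard(naive_injection_check)
def call_llm(prompt: str) -> str:
    return f"(provider response to: {prompt})"  # stand-in for a real API call
```

The SDK applies the same pattern by patching the provider client libraries at startup, which is why no per-call changes are needed.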
Option B: HTTP Proxy (Drop-in URL Swap)
Change your LLM base URL to PromptGuard. No SDK needed.
OpenAI Integration
Instructions: Find where you initialize the OpenAI client in your code and add the baseURL parameter. Snippets are available for Node.js, Python, and cURL.
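If it helps to see the raw HTTP shape, the swap amounts to pointing an OpenAI-style chat-completions request at the PromptGuard host instead of api.openai.com. The base URL and the PromptGuard auth header below are assumptions for illustration; copy the real values from your dashboard:

```python
import json
import urllib.request

# Assumed proxy base URL -- use the one shown in your PromptGuard dashboard.
PROMPTGUARD_OPENAI_BASE = "https://api.promptguard.co/v1/openai"

def build_chat_request(openai_key: str, promptguard_key: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request routed through the proxy (constructed, not sent)."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{PROMPTGUARD_OPENAI_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {openai_key}",  # your own OpenAI key (pass-through)
            "X-PromptGuard-Key": promptguard_key,     # assumed header name for the PromptGuard key
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Only the host changes; the request body stays exactly what the OpenAI API expects, which is why existing client code keeps working.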
Anthropic Integration
Instructions: Find where you initialize the Anthropic client in your code and add the baseURL parameter. Snippets are available for Node.js, Python, and cURL.
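The Anthropic version is the same idea with Anthropic's request shape and headers. Again, the proxy base URL and the PromptGuard header name are assumptions; the rest follows the standard Anthropic Messages API:

```python
import json
import urllib.request

# Assumed proxy base URL -- use the one shown in your PromptGuard dashboard.
PROMPTGUARD_ANTHROPIC_BASE = "https://api.promptguard.co/anthropic"

def build_messages_request(anthropic_key: str, promptguard_key: str, prompt: str) -> urllib.request.Request:
    """Build an Anthropic-style messages request routed through the proxy (constructed, not sent)."""
    body = json.dumps({
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{PROMPTGUARD_ANTHROPIC_BASE}/v1/messages",
        data=body,
        headers={
            "x-api-key": anthropic_key,            # your own Anthropic key (pass-through)
            "anthropic-version": "2023-06-01",
            "X-PromptGuard-Key": promptguard_key,  # assumed header name for the PromptGuard key
            "Content-Type": "application/json",
        },
        method="POST",
    )
```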
Step 4: Test your integration
Now run your application as normal; PromptGuard automatically protects all requests. To verify it's working, try making a request with a potentially malicious prompt (for example, one containing “ignore previous instructions”) and check for the resulting security event in your dashboard.
Step 5: View your security dashboard
- Open app.promptguard.co
- Navigate to your project dashboard
- See real-time security events and analytics
Step 6: Configure security policies
PromptGuard comes with smart defaults, but you can customize protection:
- Go to Projects > [Your Project] > Overview in your dashboard
- Choose from use-case-specific presets:
- Default (recommended): Balanced security for general AI applications
- Support Bot: Optimized for customer support chatbots
- Code Assistant: Enhanced protection for coding tools
- RAG System: Maximum security for document-based AI
- Data Analysis: Strict PII protection for data processing
- Creative Writing: Nuanced content filtering for creative applications
What’s happening under the hood?
Every request now flows through PromptGuard’s security engine, which automatically protects against:
- Prompt injection attacks (“ignore previous instructions…”)
- Data exfiltration attempts (trying to extract system prompts)
- PII leakage (credit cards, SSNs, emails automatically redacted)
- Toxic content generation
- Jailbreak attempts
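As a rough picture of what PII redaction means in practice, here is a deliberately simplistic sketch (the actual detection engine is not shown in this guide and is far more thorough than two regexes):

```python
import re

# Illustrative patterns only -- real PII detection covers many more formats.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def redact_pii(text: str) -> str:
    """Replace SSN- and email-shaped substrings before text reaches the model."""
    text = SSN_RE.sub("[REDACTED-SSN]", text)
    return EMAIL_RE.sub("[REDACTED-EMAIL]", text)
```

Redaction like this runs on both directions of traffic, so sensitive values never reach the provider and never come back in completions.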
Performance impact
- Latency: ~150 ms typical overhead (P95 < 200 ms)
- Availability: 99.9% uptime SLA
- Reliability: Fails open (requests proceed if PromptGuard is down)
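The fail-open behavior can be pictured as a wrapper that treats the security check as best-effort: if the check service is unreachable, the request still proceeds. A sketch of the idea, not the SDK's actual implementation:

```python
def guarded_call(check, call, prompt):
    """Run the security check first, but fail open if the service is unreachable."""
    try:
        check(prompt)  # e.g. an HTTP call to the security engine
    except ConnectionError:
        pass           # fail open: engine down, let the request proceed
    return call(prompt)

def engine_down(prompt):
    # Simulates the security service being unreachable.
    raise ConnectionError("security engine unreachable")
```

Note that only transport failures are swallowed; a detected security violation would still raise and block the request.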
Next steps
Integration Guides
Detailed setup for Node.js, Python, React, and more
Security Rules
Configure advanced protection for your use case
Migration Guide
Migrate existing OpenAI integrations step-by-step
Monitoring
Set up alerts and track security metrics
Need help?
Email Support
Reach out to our support team
Documentation
Explore our complete API documentation
Troubleshooting
Common issues and solutions
Examples
See real-world integration examples
Community Discord and personalized demos are coming soon! For now, reach out via email with any questions.