Skip to main content
The PromptGuard API is fully compatible with OpenAI’s API structure, making it a seamless drop-in replacement for your existing integrations.

Overview

PromptGuard provides two types of APIs:
API TypeBase URLAuthenticationPurpose
Developer APIhttps://api.promptguard.co/api/v1API Key (X-API-Key)AI requests, usage stats
Dashboard APIhttps://api.promptguard.co/dashboardSession CookieProject management, analytics

Authentication

All PromptGuard API endpoints require authentication. For the Developer API, you’ll use two keys:
  1. PromptGuard API key (in X-API-Key header) - Authenticates your PromptGuard account
  2. LLM provider key (in Authorization header) - Your OpenAI/Anthropic key that gets forwarded to the provider

Developer API Authentication

curl https://api.promptguard.co/api/v1/chat/completions \
  -H "X-API-Key: your_api_key" \
  -H "Authorization: Bearer YOUR_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-nano",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
For detailed authentication setup and code examples, see Authentication.

Dashboard API Authentication

For dashboard applications, use session-based authentication:
curl https://api.promptguard.co/dashboard/projects \
  -H "Cookie: session=YOUR_SESSION_COOKIE"

Base URLs

EnvironmentURL
Productionhttps://api.promptguard.co/api/v1
Staginghttps://staging-api.promptguard.co/api/v1

Available Endpoints

Chat Completions (OpenAI Compatible)

The primary endpoint for AI requests. Fully compatible with OpenAI’s API:
POST /api/v1/chat/completions
Supported parameters:
  • model - Any supported LLM model (OpenAI, Anthropic, Google, Mistral, DeepSeek, Cohere, Groq, Azure OpenAI). See Supported LLM Providers for complete model list
  • messages - Array of message objects
  • temperature, max_tokens, top_p, etc.
  • stream - Enable streaming responses
  • user - Unique user identifier for tracking

Guard API

Scan content for threats without proxying to an LLM provider. Accepts structured messages with direction and context:
POST /api/v1/guard
See Guard API reference for full documentation.

Security Scan

Analyze raw text for prompt injection, jailbreaks, and other threats:
POST /api/v1/security/scan

Security Redact

Strip PII from text and return both original and redacted versions:
POST /api/v1/security/redact
See Security Scan & Redact reference for full documentation.

Agent Security

Validate tool calls and monitor agent sessions:
POST /api/v1/agent/validate-tool
See Agent Security reference for full documentation.

Models

List available models:
GET /api/v1/models

Usage Statistics

Get your current usage:
GET /api/v1/usage/stats

Rate Limits

Rate limits vary by plan:
PlanMonthly LimitType
Free10,000 requestsHard limit (blocks when exceeded)
Pro100,000 requestsHard limit (blocks when exceeded)
Scale1,000,000 requestsSoft limit (alerts only, never blocks)
Infrastructure Rate Limiting: Cloud Armor enforces 100 requests per minute per IP address at the infrastructure level. This is separate from your monthly subscription limits and applies to all plans.
Rate limits are per API key. Distribute load across multiple keys if needed. Contact sales@promptguard.co for higher limits.

Response Headers

PromptGuard adds helpful headers to every response:
HeaderDescription
X-PromptGuard-Event-IDUnique identifier for tracking this request
X-PromptGuard-DecisionSecurity decision: allow, block, or redact
X-PromptGuard-ConfidenceConfidence score of the security decision (0.0 - 1.0)
X-PromptGuard-Threat-TypeType of threat detected (e.g., prompt_injection, pii_leak, none)

Error Handling

PromptGuard uses conventional HTTP response codes:
CodeDescriptionAction
200SuccessRequest processed normally
400Bad RequestCheck request format or security policy violation
401UnauthorizedVerify API key is valid
403ForbiddenRequest blocked by security policy, or check subscription status / API key validity
429Too Many RequestsImplement exponential backoff
500Server ErrorRetry with backoff

Error Response Format

{
  "error": {
    "message": "Request blocked by security policy",
    "type": "policy_violation",
    "code": "request_blocked",
    "event_id": "evt_abc123xyz",
    "dashboard_url": "https://app.promptguard.co/dashboard/projects/{project_id}/interactions?event_id=evt_abc123xyz"
  }
}
Blocked requests return 403. The optional dashboard_url links directly to the event in the dashboard for audit and debugging.

Security Policy Violations

When a request is blocked for security reasons:
{
  "error": {
    "message": "Prompt injection detected",
    "type": "policy_violation",
    "code": "prompt_injection_detected",
    "event_id": "evt_abc123xyz",
    "details": {
      "threat_type": "instruction_override",
      "confidence": 0.95
    }
  }
}

SDKs & Libraries

PromptGuard works with existing OpenAI/Anthropic SDKs by simply changing the base URL:

Node.js / TypeScript

Use the official OpenAI SDK with PromptGuard

Python

Use the official OpenAI Python library

Guard API

Standalone content scanning without proxying

Auto-Instrumentation

One line secures all LLM calls

OpenAPI Specification

The complete OpenAPI specification is available for:
  • Auto-generating client libraries
  • API testing and validation
  • Documentation generation

Download OpenAPI Spec

Get the full OpenAPI specification for the Developer API

Next Steps

Quick Start

Get started with PromptGuard in 5 minutes

First Request

Make your first secure AI request

Authentication

Learn more about API key management

Security Rules

Configure protection for your use case