Use this file to discover all available pages before exploring further.
PromptGuard provides multiple layers of security protection for your AI applications. Configure policies, detection rules, and custom filters to match your security requirements.
Tool Injection Detection: Indirect prompt injection analysis in agentic tool calls and outputs
Content Moderation: Filters inappropriate or harmful content
LLM Guard: Custom natural-language rules and off-topic/topical alignment detection
Custom Rules: Define your own security patterns and policies
MCP Server Security: Validate Model Context Protocol tool calls with server allow/block-listing, argument schema validation, and tool injection detection
Multimodal Safety: Image content analysis via Google Cloud Vision or Azure Content Safety, with OCR-based PII detection on image content
Security Groundedness: Detect security-relevant fabrication including hallucinated CVEs, fake compliance claims, and invented security statistics
PromptGuard provides 14 specialized detectors backed by ~1,000+ detection patterns (built-in rules plus open-source community rules for agent-layer threats) that automatically detect and block threats:
PII Detection: 39+ entity types across 10+ countries - SSNs, credit cards, IBAN, NHS numbers, Aadhaar, and more - with checksum validation (Luhn, IBAN Mod 97, Verhoeff, NHS Mod 11), encoded PII detection (base64/hex/URL-encoded), ML-based NER, and configurable redact/mask/block modes
Secret Key Detection: 40+ credential patterns (OpenAI, Anthropic, AWS, Stripe, Twilio, SendGrid, Slack, connection strings, PEM/SSH keys) plus Shannon entropy analysis and character diversity scoring, with strict/moderate/permissive sensitivity tiers
Developer API Endpoints: The preset configuration endpoints below are part of the Developer API and are included in the OpenAPI spec. They use API key authentication and are suitable for SDK usage.
Configure security policies programmatically using the Developer API:
Python
Node.js
cURL
import requestsimport osapi_key = os.environ.get("PROMPTGUARD_API_KEY")base_url = "https://api.promptguard.co/api/v1/presets"headers = { "X-API-Key": api_key, "Content-Type": "application/json"}# Complete configuration workflowdef configure_security_preset(project_id, use_case, strictness_level="moderate"): """ Configure security preset for a project. Args: project_id: Project ID to configure use_case: Use case template (e.g., 'support_bot', 'code_assistant') strictness_level: Strictness level ('strict', 'moderate', 'permissive') """ try: # Step 1: List available use cases print("Fetching available use cases...") use_cases_response = requests.get( f"{base_url}/use-cases", headers=headers ) if use_cases_response.status_code != 200: print(f"Error fetching use cases: HTTP {use_cases_response.status_code}") return False use_cases = use_cases_response.json() available_keys = [uc.get('key') for uc in use_cases.get('use_cases', [])] # Step 2: Validate use case if use_case not in available_keys: print(f"Error: Use case '{use_case}' not found") print(f"Available use cases: {', '.join(available_keys)}") return False # Step 3: Validate strictness level valid_strictness = ['strict', 'moderate', 'permissive'] if strictness_level not in valid_strictness: print(f"Error: Invalid strictness level '{strictness_level}'") print(f"Valid levels: {', '.join(valid_strictness)}") return False # Step 4: Get current configuration print(f"\nGetting current configuration for project {project_id}...") current_response = requests.get( f"{base_url}/projects/{project_id}/preset", headers=headers ) if current_response.status_code == 200: current = current_response.json() print(f"Current: {current.get('use_case')}:{current.get('strictness_level')}") elif current_response.status_code == 404: print(f"Error: Project {project_id} not found") return False # Step 5: Update preset preset_name = f"{use_case}:{strictness_level}" print(f"\nUpdating preset to: {preset_name}") update_response = requests.put( f"{base_url}/projects/{project_id}/preset", headers=headers, json={"preset_name": preset_name} ) if update_response.status_code == 200: result = update_response.json() print(f"OK: Successfully updated to {preset_name}") # Step 6: Verify configuration print("\nVerifying configuration...") verify_response = requests.get( f"{base_url}/projects/{project_id}/preset", headers=headers ) if verify_response.status_code == 200: verified = verify_response.json() if (verified.get('use_case') == use_case and verified.get('strictness_level') == strictness_level): print("OK: Configuration verified successfully!") return True else: print("Warning: Configuration may not have updated correctly") return False return True elif update_response.status_code == 400: error = update_response.json() print(f"Error: {error.get('detail', 'Invalid request')}") elif update_response.status_code == 404: print(f"Error: Project {project_id} not found") elif update_response.status_code == 401: print("Error: Invalid API key") else: print(f"Error: HTTP {update_response.status_code}") return False except requests.exceptions.RequestException as e: print(f"Network error: {e}") return False except Exception as e: print(f"Unexpected error: {e}") return False# Quick preset updatedef quick_update_preset(project_id, preset_name): """ Quick update using preset name string. Format: 'use_case:strictness' or 'use_case' (defaults to moderate) """ response = requests.put( f"{base_url}/projects/{project_id}/preset", headers=headers, json={"preset_name": preset_name} ) if response.status_code == 200: print(f"OK: Preset updated to: {preset_name}") return response.json() else: error = response.json() if response.content else {} print(f"Error: {error.get('detail', f'HTTP {response.status_code}')}") return None# Example usageif __name__ == "__main__": project_id = "proj_abc123" # Complete workflow configure_security_preset(project_id, "support_bot", "strict") # Quick update quick_update_preset(project_id, "code_assistant:moderate") # Update with default strictness (moderate) quick_update_preset(project_id, "rag_system")
import fetch from 'node-fetch';const apiKey = process.env.PROMPTGUARD_API_KEY;const baseUrl = 'https://api.promptguard.co/api/v1/presets';const headers = { 'X-API-Key': apiKey, 'Content-Type': 'application/json'};// Complete configuration workflowasync function configureSecurityPreset( projectId: string, useCase: string, strictnessLevel: string = 'moderate') { /** * Configure security preset for a project. * @param projectId - Project ID to configure * @param useCase - Use case template (e.g., 'support_bot', 'code_assistant') * @param strictnessLevel - Strictness level ('strict', 'moderate', 'permissive') */ try { // Step 1: List available use cases console.log('Fetching available use cases...'); const useCasesResponse = await fetch(`${baseUrl}/use-cases`, { headers }); if (useCasesResponse.status !== 200) { console.error(`Error fetching use cases: HTTP ${useCasesResponse.status}`); return false; } const useCases = await useCasesResponse.json(); const availableKeys = useCases.use_cases?.map((uc: any) => uc.key) || []; // Step 2: Validate use case if (!availableKeys.includes(useCase)) { console.error(`Error: Use case '${useCase}' not found`); console.error(`Available use cases: ${availableKeys.join(', ')}`); return false; } // Step 3: Validate strictness level const validStrictness = ['strict', 'moderate', 'permissive']; if (!validStrictness.includes(strictnessLevel)) { console.error(`Error: Invalid strictness level '${strictnessLevel}'`); console.error(`Valid levels: ${validStrictness.join(', ')}`); return false; } // Step 4: Get current configuration console.log(`\nGetting current configuration for project ${projectId}...`); const currentResponse = await fetch( `${baseUrl}/projects/${projectId}/preset`, { headers } ); if (currentResponse.status === 200) { const current = await currentResponse.json(); console.log(`Current: ${current.use_case}:${current.strictness_level}`); } else if (currentResponse.status === 404) { console.error(`Error: Project ${projectId} not found`); return false; } // Step 5: Update preset const presetName = `${useCase}:${strictnessLevel}`; console.log(`\nUpdating preset to: ${presetName}`); const updateResponse = await fetch( `${baseUrl}/projects/${projectId}/preset`, { method: 'PUT', headers, body: JSON.stringify({ preset_name: presetName }) } ); if (updateResponse.status === 200) { const result = await updateResponse.json(); console.log(`OK: Successfully updated to ${presetName}`); // Step 6: Verify configuration console.log('\nVerifying configuration...'); const verifyResponse = await fetch( `${baseUrl}/projects/${projectId}/preset`, { headers } ); if (verifyResponse.status === 200) { const verified = await verifyResponse.json(); if (verified.use_case === useCase && verified.strictness_level === strictnessLevel) { console.log('OK: Configuration verified successfully!'); return true; } else { console.log('Warning: Configuration may not have updated correctly'); return false; } } return true; } else if (updateResponse.status === 400) { const error = await updateResponse.json(); console.error(`Error: ${error.detail || 'Invalid request'}`); } else if (updateResponse.status === 404) { console.error(`Error: Project ${projectId} not found`); } else if (updateResponse.status === 401) { console.error('Error: Invalid API key'); } else { console.error(`Error: HTTP ${updateResponse.status}`); } return false; } catch (error) { console.error(`Unexpected error: ${error}`); return false; }}// Quick preset updateasync function quickUpdatePreset(projectId: string, presetName: string) { /** * Quick update using preset name string. * Format: 'use_case:strictness' or 'use_case' (defaults to moderate) */ const response = await fetch( `${baseUrl}/projects/${projectId}/preset`, { method: 'PUT', headers, body: JSON.stringify({ preset_name: presetName }) } ); if (response.status === 200) { console.log(`OK: Preset updated to: ${presetName}`); return await response.json(); } else { const error = await response.json().catch(() => ({})); console.error(`Error: ${error.detail || `HTTP ${response.status}`}`); return null; }}// Example usageasync function main() { const projectId = 'proj_abc123'; // Complete workflow await configureSecurityPreset(projectId, 'support_bot', 'strict'); // Quick update await quickUpdatePreset(projectId, 'code_assistant:moderate'); // Update with default strictness (moderate) await quickUpdatePreset(projectId, 'rag_system');}main();
# List available use cases (developer API - requires API key)curl https://api.promptguard.co/api/v1/presets/use-cases \ -H "X-API-Key: $PROMPTGUARD_API_KEY"# Get current project presetcurl https://api.promptguard.co/api/v1/presets/projects/{project_id}/preset \ -H "X-API-Key: $PROMPTGUARD_API_KEY"# Update project preset using composed format (use_case:strictness)curl -X PUT https://api.promptguard.co/api/v1/presets/projects/{project_id}/preset \ -H "X-API-Key: $PROMPTGUARD_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "preset_name": "support_bot:strict" }'# Or use just use case name (defaults to moderate strictness)curl -X PUT https://api.promptguard.co/api/v1/presets/projects/{project_id}/preset \ -H "X-API-Key: $PROMPTGUARD_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "preset_name": "support_bot" }'
PromptGuard uses ~1,000+ detection patterns, machine learning models, and LLM-based analysis to identify injection techniques including instruction overrides, role confusion, context breaking, jailbreak attempts across 7 categories, indirect prompt injection in agentic tool calls, and agent-layer threats like tool poisoning and cross-agent manipulation.
What happens when a request is blocked?
Blocked requests return an HTTP 400 error with details about the security violation. You can configure whether to fail open (allow) or closed (block) when the security engine is unavailable.
Can I whitelist certain patterns?
Yes, you can create custom rules to allow specific patterns that might otherwise be blocked. This is useful for legitimate use cases that trigger false positives.
How do I reduce false positives?
Start with the Default preset and adjust based on your use case. Monitor your security dashboard for false positives and add custom policies if needed.