PromptGuard uses a composable preset system that combines use-case templates with strictness levels. This gives you fine-grained control over security policies while providing sensible defaults for common scenarios.

Composable Preset System

PromptGuard presets are composed of two parts:
  1. Use Case Template - Defines patterns, domains, and toxicity settings for your specific use case
  2. Strictness Level - Controls detection thresholds (strict, moderate, permissive)
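
For example, composing and parsing preset names (the "use_case:strictness" string format matches the API examples later on this page; the helper functions themselves are only a sketch, not part of any SDK):

def compose_preset(use_case: str, strictness: str = "moderate") -> str:
    # "use_case:strictness", e.g. "support_bot:strict"
    return f"{use_case}:{strictness}"

def parse_preset(preset_name: str) -> tuple[str, str]:
    # A bare use case (no colon) defaults to moderate strictness
    use_case, _, strictness = preset_name.partition(":")
    return use_case, strictness or "moderate"

print(compose_preset("support_bot", "strict"))  # support_bot:strict
print(parse_preset("code_assistant"))           # ('code_assistant', 'moderate')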

Use Case Templates

Default

Best for: General AI applications and most production use cases
  • Custom Patterns: None
  • Allowed Domains: All
  • Blocked Domains: None
  • Toxicity Config: Disabled
  • Use Cases: Most production applications, general business use

Support Bot

Best for: Customer support chatbots and help desk applications
  • Custom Patterns: Password/account queries, admin access attempts
  • Allowed Domains: All
  • Blocked Domains: Internal/admin systems
  • Toxicity Config: Disabled
  • Use Cases: Customer service, help desks, support systems
What’s Configured:
  • Custom patterns for password/account queries
  • Blocked domains for admin/internal access
  • Optimized for customer interaction scenarios

Code Assistant

Best for: AI coding assistants and code generation tools
  • Custom Patterns: API keys, secrets, credentials
  • Allowed Domains: GitHub, Stack Overflow, documentation sites
  • Blocked Domains: None
  • Toxicity Config: Disabled
  • Use Cases: IDEs, code generation, development tools
What’s Configured:
  • API key and secret detection patterns
  • Allowed domains for GitHub, Stack Overflow, docs
  • Optimized for code generation scenarios

RAG System

Best for: Retrieval-augmented generation with document knowledge
  • Custom Patterns: Confidential, proprietary, internal content
  • Allowed Domains: All
  • Blocked Domains: Internal/staging systems
  • Toxicity Config: Disabled
  • Use Cases: Knowledge bases, document Q&A, enterprise RAG
What’s Configured:
  • Custom patterns for confidential/proprietary content
  • Blocked domains for internal/staging systems
  • Enhanced data leak prevention

Data Analysis

Best for: Data processing and analysis with sensitive information
  • Custom Patterns: SSN, DOB, sensitive data patterns
  • Allowed Domains: All
  • Blocked Domains: External/public domains
  • Toxicity Config: Disabled
  • Use Cases: Analytics, data pipelines, business intelligence
What’s Configured:
  • Enhanced data protection patterns
  • Blocked external/public domains
  • Comprehensive exfiltration prevention

Creative Writing

Best for: Creative content generation and writing assistance
  • Custom Patterns: None
  • Allowed Domains: All
  • Blocked Domains: None
  • Toxicity Config: Enabled with ML, threshold 0.8, categories (hate, sexual, violence)
  • Use Cases: Content generation, writing tools, creative applications
What’s Configured:
  • ML-based toxicity detection enabled
  • Higher toxicity threshold (0.8) for creative content
  • Category filtering (hate, sexual, violence)
  • Optimized for content generation scenarios
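
As a rough picture, the Creative Writing toxicity settings above can be expressed as a configuration object like this (the field names are illustrative, not a documented schema; the values come from the template description):

# Illustrative only: keys are hypothetical, values match the template above
creative_writing_toxicity = {
    "enabled": True,
    "use_ml": True,
    "threshold": 0.8,  # higher threshold to leave room for creative content
    "categories": ["hate", "sexual", "violence"],
}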

Strictness Levels

Each use case template can be combined with one of three strictness levels:

Strict

  • PII Detection: Strict (detects all PII types)
  • Injection Detection: Strict (lower ML threshold: 0.6)
  • Exfiltration Detection: Strict (lower ML threshold: 0.7)
  • Output Safety: Strict (lower toxicity threshold: 0.6)
  • Best for: High-security applications, sensitive data handling

Moderate (Default)

  • PII Detection: Moderate (detects common PII types)
  • Injection Detection: Moderate (ML threshold: 0.8)
  • Exfiltration Detection: Moderate (ML threshold: 0.8)
  • Output Safety: Moderate (toxicity threshold: 0.7)
  • Best for: Most production applications, balanced security

Permissive

  • PII Detection: Permissive (only SSN/credit cards)
  • Injection Detection: Permissive (higher ML threshold: 0.9)
  • Exfiltration Detection: Permissive (higher ML threshold: 0.9)
  • Output Safety: Permissive (higher toxicity threshold: 0.8)
  • Best for: Low-risk applications, development/testing
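
The thresholds above can be summarized as data (the values come from the three levels listed; the dictionary itself is only an illustration, not an SDK object):

# Detection thresholds per strictness level, as listed above.
# Lower thresholds mean more aggressive detection.
STRICTNESS_THRESHOLDS = {
    "strict":     {"injection_ml": 0.6, "exfiltration_ml": 0.7, "toxicity": 0.6},
    "moderate":   {"injection_ml": 0.8, "exfiltration_ml": 0.8, "toxicity": 0.7},
    "permissive": {"injection_ml": 0.9, "exfiltration_ml": 0.9, "toxicity": 0.8},
}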

Choosing the Right Preset

Decision Matrix

| Scenario | Recommended Use Case | Recommended Strictness | Alternative |
|---|---|---|---|
| General AI application | Default | Moderate | - |
| Customer support | Support Bot | Strict | Support Bot + Moderate |
| Code generation | Code Assistant | Moderate | Code Assistant + Strict |
| Document Q&A | RAG System | Strict | RAG System + Moderate |
| Data processing | Data Analysis | Strict | Data Analysis + Moderate |
| Content creation | Creative Writing | Moderate | Creative Writing + Permissive |

Recommendation Flow
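
A minimal sketch of the flow as a lookup helper, built from the decision matrix above (the scenario keys and the function are illustrative, not part of the PromptGuard SDK; the data_analysis and creative_writing use-case keys are assumed to follow the same naming pattern as the documented ones):

# Recommended (use_case, strictness) per scenario, per the decision matrix
RECOMMENDATIONS = {
    "general":          ("default", "moderate"),
    "customer_support": ("support_bot", "strict"),
    "code_generation":  ("code_assistant", "moderate"),
    "document_qa":      ("rag_system", "strict"),
    "data_processing":  ("data_analysis", "strict"),
    "content_creation": ("creative_writing", "moderate"),
}

def recommend_preset(scenario: str) -> str:
    use_case, strictness = RECOMMENDATIONS.get(scenario, ("default", "moderate"))
    return f"{use_case}:{strictness}"

print(recommend_preset("customer_support"))  # support_bot:strict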

Configuring Presets

Via Dashboard

  1. Access Project Settings
    • Login to app.promptguard.co
    • Navigate to Projects > [Your Project] > Overview
    • Find the “Policy Preset” section
  2. Choose Use Case and Strictness
    • Select your Use Case from the first dropdown (e.g., “Support Bot”, “Code Assistant”)
    • Select your Strictness Level from the second dropdown (Strict, Moderate, Permissive)
    • The preset is automatically composed (e.g., “Support Bot / Strict”)
  3. Test Configuration
    • Make test requests to validate the preset
    • Monitor security events in the dashboard
    • Adjust with custom policies if needed

Via API

Developer API Endpoints: The preset management endpoints below are part of the Developer API and are included in the OpenAPI spec. They use API key authentication and are suitable for SDK usage.
import requests
import os

api_key = os.environ.get("PROMPTGUARD_API_KEY")
base_url = "https://api.promptguard.co/api/v1/presets"

headers = {
    "X-API-Key": api_key,
    "Content-Type": "application/json"
}

# List available use cases
def list_use_cases():
    response = requests.get(
        f"{base_url}/use-cases",
        headers=headers
    )

    if response.status_code == 200:
        use_cases = response.json()
        print("Available Use Cases:")
        for uc in use_cases.get('use_cases', []):
            print(f"  - {uc.get('key')}: {uc.get('name')}")
        return use_cases
    elif response.status_code == 401:
        print("Error: Invalid API key")
    else:
        print(f"Error: HTTP {response.status_code}")
    return None

# List available strictness levels
def list_strictness_levels():
    response = requests.get(
        f"{base_url}/strictness-levels",
        headers=headers
    )

    if response.status_code == 200:
        levels = response.json()
        print("Available Strictness Levels:")
        for level in levels.get('strictness_levels', []):
            print(f"  - {level.get('key')}: {level.get('name')}")
        return levels
    else:
        print(f"Error: HTTP {response.status_code}")
    return None

# Get current project preset
def get_project_preset(project_id):
    response = requests.get(
        f"{base_url}/projects/{project_id}/preset",
        headers=headers
    )

    if response.status_code == 200:
        preset = response.json()
        print(f"Current Preset: {preset.get('use_case')}:{preset.get('strictness_level')}")
        return preset
    elif response.status_code == 404:
        print(f"Error: Project {project_id} not found")
    else:
        print(f"Error: HTTP {response.status_code}")
    return None

# Set project preset
def set_project_preset(project_id, use_case, strictness_level=None):
    """
    Set project preset.

    Args:
        project_id: Project ID
        use_case: Use case key (e.g., 'support_bot', 'code_assistant')
        strictness_level: Optional strictness level ('strict', 'moderate', 'permissive')
                          If not provided, defaults to 'moderate'
    """
    if strictness_level:
        preset_name = f"{use_case}:{strictness_level}"
    else:
        preset_name = use_case  # Defaults to moderate

    payload = {"preset_name": preset_name}

    response = requests.put(
        f"{base_url}/projects/{project_id}/preset",
        headers=headers,
        json=payload
    )

    if response.status_code == 200:
        result = response.json()
        print(f"Preset updated to: {preset_name}")
        return result
    elif response.status_code == 400:
        error = response.json()
        print(f"Error: {error.get('detail', 'Invalid preset')}")
        print("Use list_use_cases() to see available use cases.")
    elif response.status_code == 404:
        print(f"Error: Project {project_id} not found")
    elif response.status_code == 401:
        print("Error: Invalid API key")
    else:
        print(f"Error: HTTP {response.status_code}")
    return None

# Complete workflow example
def configure_project_preset(project_id, use_case, strictness_level="moderate"):
    """Complete workflow: list → verify → set → confirm"""
    try:
        # Step 1: List available options
        print("Fetching available use cases...")
        use_cases = list_use_cases()
        if not use_cases:
            return False

        # Step 2: Verify use case exists
        available_keys = [uc.get('key') for uc in use_cases.get('use_cases', [])]
        if use_case not in available_keys:
            print(f"Error: Use case '{use_case}' not found")
            print(f"Available: {', '.join(available_keys)}")
            return False

        # Step 3: Get current preset
        print(f"\nGetting current preset for project {project_id}...")
        current = get_project_preset(project_id)

        # Step 4: Set new preset
        print(f"\nSetting preset to {use_case}:{strictness_level}...")
        result = set_project_preset(project_id, use_case, strictness_level)

        if result:
            # Step 5: Verify change
            print("\nVerifying preset change...")
            updated = get_project_preset(project_id)
            if updated:
                print("✅ Preset successfully updated!")
                return True

        return False

    except Exception as error:
        print(f"Error: {error}")
        return False

# Example usage
if __name__ == "__main__":
    project_id = "proj_abc123"

    # List available options
    list_use_cases()
    list_strictness_levels()

    # Set preset using composed format
    set_project_preset(project_id, "support_bot", "strict")

    # Set preset using just use case (defaults to moderate)
    set_project_preset(project_id, "code_assistant")

    # Complete workflow
    configure_project_preset(project_id, "rag_system", "strict")

Response Formats

List Use Cases Response (200 OK)
{
  "use_cases": [
    {
      "key": "default",
      "name": "General Purpose",
      "description": "Balanced security for general AI applications"
    },
    {
      "key": "support_bot",
      "name": "Support Bot",
      "description": "Optimized for customer support chatbots"
    }
  ]
}
Get Project Preset Response (200 OK)
{
  "project_id": "proj_abc123",
  "use_case": "support_bot",
  "strictness_level": "strict",
  "preset_name": "support_bot:strict"
}
Set Preset Response (200 OK)
{
  "project_id": "proj_abc123",
  "use_case": "support_bot",
  "strictness_level": "strict",
  "preset_name": "support_bot:strict",
  "message": "Preset updated successfully"
}
Error Responses

400 Bad Request - Invalid preset format
{
  "detail": "Invalid preset format. Use 'use_case:strictness' or 'use_case'"
}
400 Bad Request - Invalid use case
{
  "detail": "Invalid use case 'invalid_case'. Use list_use_cases() to see available options."
}
400 Bad Request - Invalid strictness level
{
  "detail": "Invalid strictness level 'invalid'. Use 'strict', 'moderate', or 'permissive'."
}
404 Not Found - Project doesn’t exist
{
  "detail": "Project not found"
}
401 Unauthorized - Invalid API key
{
  "detail": "Invalid API key"
}
Preset Format: Use "use_case:strictness" format (e.g., "support_bot:strict") or just the use case name (defaults to moderate strictness).

Preset Comparison

Use Case Templates Comparison

| Feature | Default | Support Bot | Code Assistant | RAG System | Data Analysis | Creative Writing |
|---|---|---|---|---|---|---|
| Custom Patterns | None | Password/Account | API Keys/Secrets | Confidential | SSN/DOB | None |
| Allowed Domains | All | All | GitHub, Stack Overflow, Docs | All | All | All |
| Blocked Domains | None | Internal/Admin | None | Internal/Staging | External/Public | None |
| ML Toxicity | Disabled | Disabled | Disabled | Disabled | Disabled | Enabled (0.8 threshold) |

Strictness Level Comparison

| Detection Type | Strict | Moderate | Permissive |
|---|---|---|---|
| PII Detection | All types | Common types | SSN/credit cards only |
| Injection ML Threshold | 0.6 | 0.8 | 0.9 |
| Exfiltration ML Threshold | 0.7 | 0.8 | 0.9 |
| Toxicity Threshold | 0.6 | 0.7 | 0.8 |

Performance Impact

All presets have similar performance characteristics:
| Metric | Impact |
|---|---|
| Latency | +30-50ms overhead |
| Throughput | Minimal impact |
| Resource Usage | Low to moderate |
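
To check these figures against your own traffic, a simple timing harness can compare request latency with and without PromptGuard in the path (a generic sketch; nothing below is PromptGuard-specific):

import statistics
import time

def measure_latency(request_fn, runs: int = 50) -> dict:
    """Time a zero-argument callable; report p50/p95 latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        request_fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

# Compare measure_latency(call_via_promptguard) against
# measure_latency(call_model_directly) to estimate the added overhead.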

Customizing Presets

Adding Custom Policies

You can enhance any preset with custom policies:
  1. Navigate to Projects > [Your Project] > Policies
  2. Click “Create Policy”
  3. Define custom rules that complement your preset
  4. Custom policies apply in addition to preset rules

Preset + Custom Policies

Presets provide the foundation, and custom policies add specific rules:
# Example: Using Default preset + custom policy for specific patterns
# 1. Set preset to "default" via dashboard
# 2. Create custom policy via dashboard or API
curl -X POST https://api.promptguard.co/dashboard/policies \
  -H "Cookie: session=YOUR_SESSION_COOKIE" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "your-project-id",
    "name": "Block Specific Terms",
    "rules": [
      {
        "pattern": "confidential",
        "action": "block"
      }
    ]
  }'

Monitoring Preset Performance

Key Metrics to Track

  1. Security Events
    • Track blocked requests by type
    • Monitor threat patterns
    • Validate detection accuracy
  2. False Positive Rate
    • Monitor legitimate requests being blocked
    • Adjust with custom policies if needed
    • Target: below 1% for most presets (see the sketch after this list)
  3. Performance Impact
    • Measure latency overhead
    • Track error rates
    • Monitor user experience
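
For the false positive rate in particular, a small helper can turn counts from a manual review of blocked requests into an alertable metric (a sketch; it takes the counts as inputs rather than calling any PromptGuard analytics API):

def false_positive_rate(blocked_total: int, blocked_legitimate: int) -> float:
    """Share of blocked requests that were actually legitimate."""
    if blocked_total == 0:
        return 0.0
    return blocked_legitimate / blocked_total

# Example: manual review found 12 of 900 blocked requests were legitimate
rate = false_positive_rate(blocked_total=900, blocked_legitimate=12)
if rate > 0.01:  # target: below 1% for most presets
    print(f"False positive rate {rate:.1%} exceeds the 1% target; "
          "consider custom allow policies or a more permissive strictness level.")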

Dashboard Views

Access preset-specific analytics:
  • Projects > [Your Project] > Analytics
  • Filter by time range and security events
  • Compare metrics across different configurations
  • Export data for detailed analysis

Best Practices

Development Workflow

  1. Start with Default + Moderate: Begin with default:moderate for most applications
  2. Choose Use-Case Template: If you have a specific use case, select the matching template
  3. Adjust Strictness: Start with moderate, then adjust to strict or permissive based on needs
  4. Add Custom Policies: Enhance with custom rules for specific needs
  5. Monitor Continuously: Track performance and adjust as needed

Preset Transitions

When changing presets:
  1. Test in Staging: Apply the new preset to a staging environment first (see the sketch after this list)
  2. Monitor Metrics: Check security events and false positives for 24-48 hours
  3. Gradual Rollout: Use feature flags for gradual production rollout if needed
  4. Monitor and Adjust: Watch for issues and fine-tune strictness level or add custom policies
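
The staging-first part of this flow can reuse the set_project_preset helper from the "Via API" example above (the project IDs below are placeholders):

# Hypothetical project IDs; set_project_preset() is defined in the
# "Via API" example earlier on this page.
STAGING_PROJECT_ID = "proj_staging123"
PRODUCTION_PROJECT_ID = "proj_abc123"

# 1. Apply the new preset to staging first
set_project_preset(STAGING_PROJECT_ID, "rag_system", "strict")

# 2. After 24-48 hours of monitoring security events and false positives
#    in staging, promote the same preset to production
set_project_preset(PRODUCTION_PROJECT_ID, "rag_system", "strict")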

Strictness Level Guidelines

  • Start Moderate: Most applications work well with moderate strictness
  • Go Strict If: Handling sensitive data, high-security requirements, compliance needs
  • Go Permissive If: Low-risk scenarios, development/testing, high false positive rates

Troubleshooting

High False Positive Rate

Solutions:
  • Review security events to identify patterns
  • Add custom whitelist policies for legitimate use cases
  • Consider switching to a more permissive preset (if appropriate)
  • Contact support for preset tuning assistance

Threats Not Being Detected

Solutions:
  • Verify you’re using appropriate preset for your security needs
  • Check if custom policies are overriding preset behavior
  • Test with known malicious prompts
  • Ensure preset is correctly applied to your project

Preset Doesn’t Fit Your Needs

Solutions:
  • Use custom policies to add specific rules
  • Combine preset with custom policies for fine-tuned control
  • Review preset details to understand what’s enabled
  • Contact support for custom preset recommendations

Next Steps

Need help choosing the right preset? Contact our security team for personalized recommendations.