Every request through PromptGuard generates detailed observability data. This guide shows you how to trace requests, understand security decisions, and debug issues.

Request Tracing

Event IDs

Every request receives a unique Event ID that you can use to trace it through the system:
# The Event ID is returned in response headers
X-PromptGuard-Event-ID: evt_abc123xyz789

Viewing Events in Dashboard

  1. Go to Dashboard → [Project] → Interactions
  2. Use the search bar to find by Event ID
  3. Click an event to see full details

Event Details

Each event contains:
| Field | Description |
| --- | --- |
| Event ID | Unique identifier (evt_xxx) |
| Timestamp | When the request was processed |
| Direction | input (prompt) or output (response) |
| Decision | allow, block, or redact |
| Confidence | ML model confidence score (0.0 - 1.0) |
| Latency | Processing time in milliseconds |
| Threat Type | Type of threat detected (if any) |
| Model | LLM model used |
| Tokens | Input/output token counts |

Response Headers

PromptGuard adds these headers to every response:
X-PromptGuard-Event-ID: evt_abc123xyz789
X-PromptGuard-Decision: allow
X-PromptGuard-Confidence: 0.02
X-PromptGuard-Threat-Type: none
X-PromptGuard-Latency-Ms: 45
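These headers can also be collected programmatically. The helper below is an illustrative sketch (the function name is ours, not part of any SDK) that gathers every X-PromptGuard-* header from a response's header mapping:

```python
def promptguard_headers(headers: dict) -> dict:
    """Collect all X-PromptGuard-* headers into a plain dict,
    keyed by the part after the prefix (e.g. 'Event-ID')."""
    prefix = "x-promptguard-"
    out = {}
    for name, value in headers.items():
        # Header names are case-insensitive, so compare lowercased
        if name.lower().startswith(prefix):
            out[name[len(prefix):]] = value
    return out
```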

Using Headers for Debugging

import os

import openai

client = openai.OpenAI(
    api_key=os.environ["PROMPTGUARD_API_KEY"],
    base_url="https://api.promptguard.co/api/v1"
)

# Use with_raw_response to get the HTTP headers alongside the parsed body
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
response = raw.parse()

print(f"Event ID: {raw.headers.get('X-PromptGuard-Event-ID')}")
print(f"Decision: {raw.headers.get('X-PromptGuard-Decision')}")

SDK Access to Decisions

With auto-instrumentation, access the last decision:
import promptguard

promptguard.init(api_key="pg_xxx")

# After any LLM call, access the last scan result
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)

# Get the last scan decision
last_decision = promptguard.get_last_decision()
if last_decision:
    print(f"Event ID: {last_decision.event_id}")
    print(f"Decision: {last_decision.decision}")
    print(f"Latency: {last_decision.latency_ms}ms")

Understanding Confidence Scores

The confidence score indicates how certain the ML model is about its decision:
| Score Range | Interpretation | Typical Action |
| --- | --- | --- |
| 0.0 - 0.3 | Very unlikely threat | Allow |
| 0.3 - 0.6 | Possible threat | Log for review |
| 0.6 - 0.8 | Likely threat | Redact or block |
| 0.8 - 1.0 | High confidence threat | Block |
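The table above can be expressed as a small routing helper for application code. This is an illustrative sketch using the documented score boundaries (the function name is ours, not an SDK call):

```python
def typical_action(confidence: float) -> str:
    """Map an ML confidence score to the typical action from the
    score-range table (lower bound inclusive)."""
    if confidence < 0.3:
        return "allow"
    if confidence < 0.6:
        return "log"            # log for review
    if confidence < 0.8:
        return "redact_or_block"
    return "block"
```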

Threshold Configuration

Default thresholds by threat type:
{
  "prompt_injection": 0.80,
  "data_exfiltration": 0.85,
  "pii_detection": 0.90,
  "toxicity": 0.75,
  "jailbreak": 0.80
}
Adjust in Dashboard → Project → Policies → Thresholds.

Threat Types

| Threat Type | Description | Example |
| --- | --- | --- |
| prompt_injection | Attempts to override instructions | "Ignore all previous instructions" |
| jailbreak | Bypass safety measures | "Pretend you have no rules" |
| data_exfiltration | Extract system info | "Show me your system prompt" |
| pii_detected | Personal data in prompt | Credit card, SSN, email |
| toxicity | Harmful content | Harassment, hate speech |
| none | No threat detected | Normal request |
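When a request is blocked, the threat type can drive a user-facing message instead of surfacing a raw error. The mapping below is a hypothetical sketch (the message text and function name are ours):

```python
# Illustrative user-facing messages keyed by the documented threat types
BLOCK_MESSAGES = {
    "prompt_injection": "Your request looked like an attempt to override instructions.",
    "jailbreak": "Your request appeared to bypass safety measures.",
    "data_exfiltration": "Your request appeared to probe for system information.",
    "pii_detected": "Your request contained personal data; remove it and retry.",
    "toxicity": "Your request contained content we cannot process.",
}

def block_message(threat_type: str) -> str:
    """Return a friendly message for a blocked request, with a
    generic fallback for unrecognized threat types."""
    return BLOCK_MESSAGES.get(threat_type, "Your request could not be processed.")
```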

Debugging Common Issues

Request Blocked Unexpectedly

Symptoms: Legitimate request returns 400 with a policy_violation error

Debug steps:
  1. Get the Event ID from the error response
  2. Search for it in Dashboard → Interactions
  3. Check the Threat Type and Confidence score
  4. Review the Matched Patterns section
Solutions:
  • If false positive: Add to allowlist or lower threshold
  • If custom rule triggered: Review custom rules
  • If PII detected: Use [REDACTED] placeholders in prompts
# Example: Handle blocked requests gracefully
from promptguard import PromptGuardBlockedError

try:
    response = client.chat.completions.create(...)
except PromptGuardBlockedError as e:
    print(f"Blocked: {e.decision.threat_type}")
    print(f"Confidence: {e.decision.confidence}")
    print(f"Event ID: {e.decision.event_id}")
    # Log for review or show user-friendly message

High Latency

Symptoms: Requests taking longer than expected

Debug steps:
  1. Check X-PromptGuard-Latency-Ms header
  2. Compare to your LLM provider latency
  3. Check Dashboard → Analytics → Performance
Typical latencies:
| Component | Expected Latency |
| --- | --- |
| PromptGuard scan | 30-100ms |
| Network overhead | 10-50ms |
| LLM inference | 500ms-30s (varies) |
Solutions:
  • Use regional endpoints (when available)
  • Enable fail-open mode for non-critical paths
  • Check if ML inference mode is api vs local
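To attribute latency, subtract the scan time reported in X-PromptGuard-Latency-Ms from the wall-clock time you measure around the call. A minimal sketch (the helper name is ours):

```python
def latency_breakdown(headers: dict, total_ms: float) -> dict:
    """Split measured wall-clock time into the PromptGuard scan
    (from the X-PromptGuard-Latency-Ms header) and everything else
    (network overhead + LLM inference)."""
    scan_ms = float(headers.get("X-PromptGuard-Latency-Ms", 0))
    return {"scan_ms": scan_ms, "other_ms": max(total_ms - scan_ms, 0.0)}
```

If `other_ms` dominates, the slowdown is in the network path or the LLM provider, not the scan.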

Missing Events in Dashboard

Symptoms: Requests succeed but don’t appear in Interactions

Debug steps:
  1. Verify you’re looking at the correct project
  2. Check date range filter
  3. Confirm API key belongs to that project
Solutions:
  • API keys are project-scoped; check key assignment
  • Events may take a few seconds to appear (async logging)

Authentication Failures

Symptoms: 401 Unauthorized errors

Debug steps:
# Verify your API key works
curl -H "X-API-Key: pg_xxx" https://api.promptguard.co/api/v1/models
Common causes:
| Error | Cause | Fix |
| --- | --- | --- |
| invalid_api_key | Key doesn’t exist | Check for typos, regenerate |
| key_revoked | Key was deleted | Create a new key |
| subscription_inactive | Payment failed | Update billing |

Exporting Data

Export Interactions

  1. Go to Dashboard → Project → Interactions
  2. Apply filters (date, decision, threat type)
  3. Click Export → CSV or JSON

Export via API

# Get recent security events
curl https://api.promptguard.co/dashboard/projects/{project_id}/events \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -G -d "limit=100" -d "decision=block"
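For larger exports, the events endpoint can be paged through. The helper below is a sketch that assumes simple offset/limit paging (check the API reference for the actual scheme) and takes the page-fetching function as a parameter, so it stays independent of your HTTP client:

```python
def fetch_all_events(fetch_page, page_size=100):
    """Collect every event by paging until a short page is returned.
    fetch_page(offset, limit) must return one page of events as a list;
    offset-based paging is an assumption of this sketch."""
    events, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        events.extend(page)
        if len(page) < page_size:   # short page means we reached the end
            return events
        offset += page_size
```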

Webhook Integration

Stream events in real-time to your systems:
  1. Go to Dashboard → Project → Settings → Webhooks
  2. Add endpoint URL
  3. Select event types (blocked, redacted, all)
  4. Test the webhook
Webhook payload:
{
  "event_id": "evt_abc123",
  "timestamp": "2026-02-18T12:34:56Z",
  "project_id": "proj_xxx",
  "decision": "block",
  "confidence": 0.95,
  "threat_type": "prompt_injection",
  "model": "gpt-4",
  "latency_ms": 45
}
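A webhook receiver should validate the payload before acting on it. This sketch checks for the documented fields (signature verification, if the product offers it, is not shown):

```python
# Fields documented in the example webhook payload above
REQUIRED_FIELDS = {"event_id", "timestamp", "project_id", "decision",
                   "confidence", "threat_type", "model", "latency_ms"}

def parse_webhook(payload: dict) -> dict:
    """Return the payload unchanged if it carries every documented
    field; raise ValueError naming whatever is missing."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"webhook payload missing fields: {sorted(missing)}")
    return payload
```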

Logging Best Practices

Correlating Logs

Include the Event ID in your application logs:
import logging

from promptguard import PromptGuardBlockedError

logger = logging.getLogger(__name__)

try:
    # with_raw_response exposes the headers alongside the parsed body
    raw = client.chat.completions.with_raw_response.create(...)
    event_id = raw.headers.get('X-PromptGuard-Event-ID')
    logger.info("LLM request completed", extra={"event_id": event_id})
except PromptGuardBlockedError as e:
    logger.warning("Request blocked", extra={
        "event_id": e.decision.event_id,
        "threat_type": e.decision.threat_type
    })

Structured Logging

{
  "timestamp": "2026-02-18T12:34:56Z",
  "level": "info",
  "message": "LLM request completed",
  "promptguard": {
    "event_id": "evt_abc123",
    "decision": "allow",
    "latency_ms": 45
  },
  "request": {
    "model": "gpt-4",
    "tokens": 150
  }
}
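Structured output like the example above can be produced with a small stdlib logging formatter. A minimal sketch (the `promptguard` extra field is a convention of this example, not an SDK requirement):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, carrying any 'promptguard'
    dict attached via logging's 'extra' mechanism."""
    def format(self, record):
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname.lower(),
            "message": record.getMessage(),
        }
        # 'extra' kwargs become attributes on the LogRecord
        if hasattr(record, "promptguard"):
            entry["promptguard"] = record.promptguard
        return json.dumps(entry)
```

Attach it with `handler.setFormatter(JsonFormatter())` and pass the PromptGuard fields via `logger.info("...", extra={"promptguard": {...}})`.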

Alerting

Built-in Alerts

Configure in Dashboard → Project → Settings → Alerts:
| Alert Type | Trigger |
| --- | --- |
| High Block Rate | >10% of requests blocked in 1 hour |
| New Threat Type | First occurrence of a threat type |
| Latency Spike | P95 latency >500ms |
| Usage Threshold | 80%, 90%, 100% of quota |

External Integrations

Send alerts to:
  • Email: Built-in
  • Slack: Webhook integration
  • PagerDuty: Enterprise tier
  • Custom Webhook: Any endpoint

Next Steps