Every request through PromptGuard generates detailed observability data. This guide shows you how to trace requests, understand security decisions, and debug issues.
Request Tracing
Event IDs
Every request receives a unique Event ID that you can use to trace it through the system:
```
# The Event ID is returned in response headers
X-PromptGuard-Event-ID: evt_abc123xyz789
```
Viewing Events in Dashboard
1. Go to Dashboard → [Project] → Interactions
2. Use the search bar to find by Event ID
3. Click an event to see full details
Event Details
Each event contains:
| Field | Description |
|---|---|
| Event ID | Unique identifier (`evt_xxx`) |
| Timestamp | When the request was processed |
| Direction | `input` (prompt) or `output` (response) |
| Decision | `allow`, `block`, or `redact` |
| Confidence | ML model confidence score (0.0 - 1.0) |
| Latency | Processing time in milliseconds |
| Threat Type | Type of threat detected (if any) |
| Model | LLM model used |
| Tokens | Input/output token counts |
PromptGuard adds these headers to every response:
```
X-PromptGuard-Event-ID: evt_abc123xyz789
X-PromptGuard-Decision: allow
X-PromptGuard-Confidence: 0.02
X-PromptGuard-Threat-Type: none
X-PromptGuard-Latency-Ms: 45
```
Zero-Trust Response Verification
When using the HTTP proxy, PromptGuard includes additional headers for cryptographic verification of response integrity:
```
X-PromptGuard-Content-Hash: sha256=a1b2c3d4e5f6...
X-PromptGuard-Signature: hmac-sha256=f6e5d4c3b2a1...
X-PromptGuard-Signature-Timestamp: 1710345600
X-PromptGuard-Zero-Retention: true
```
| Header | Description |
|---|---|
| `X-PromptGuard-Content-Hash` | SHA-256 hash of the response body. Verify it on your end to confirm the response was not tampered with in transit. |
| `X-PromptGuard-Signature` | HMAC-SHA256 signature computed over `timestamp.body` using your API key as the secret. Proves the response came from PromptGuard and was not modified. |
| `X-PromptGuard-Signature-Timestamp` | Unix timestamp used in the signature computation. Reject responses older than 5 minutes to prevent replay attacks. |
| `X-PromptGuard-Zero-Retention` | Present and set to `true` when the project has zero-retention mode enabled. Confirms prompt content was not stored. |
Verifying the Signature
```python
import hashlib
import hmac
import time

def verify_response(body: bytes, api_key: str, signature: str, timestamp: int) -> bool:
    # Reject stale responses to prevent replay attacks
    if time.time() - timestamp > 300:
        raise ValueError("Response too old - possible replay attack")
    # Signature is computed over "timestamp.body", keyed by the API key
    payload = f"{timestamp}.".encode() + body
    expected = hmac.new(
        api_key.encode(), payload, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"hmac-sha256={expected}", signature)
```
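To sanity-check an integration end to end, you can reproduce the signing side locally and confirm a round trip. Everything below (function names, key, and body values) is illustrative, not part of any SDK; it simply follows the `timestamp.body` HMAC-SHA256 scheme described in the header table:

```python
import hashlib
import hmac
import time

def sign(body: bytes, api_key: str, timestamp: int) -> str:
    # Mirrors the documented scheme: HMAC-SHA256 over "timestamp.body"
    payload = f"{timestamp}.".encode() + body
    digest = hmac.new(api_key.encode(), payload, hashlib.sha256).hexdigest()
    return f"hmac-sha256={digest}"

def verify(body: bytes, api_key: str, signature: str, timestamp: int) -> bool:
    # Reject responses older than 5 minutes (replay protection)
    if time.time() - timestamp > 300:
        return False
    return hmac.compare_digest(sign(body, api_key, timestamp), signature)

# Round trip with made-up values
now = int(time.time())
sig = sign(b'{"ok": true}', "pg_example_key", now)
assert verify(b'{"ok": true}', "pg_example_key", sig, now)
assert not verify(b'{"tampered": true}', "pg_example_key", sig, now)
```

A tampered body or a stale timestamp both fail verification, which is exactly the behavior you want before trusting a proxied response.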
```python
import os

import openai

client = openai.OpenAI(
    api_key=os.environ["PROMPTGUARD_API_KEY"],
    base_url="https://api.promptguard.co/api/v1",
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)

# Access PromptGuard headers from the raw response
# Note: requires accessing the underlying httpx response
print(f"Event ID: {response._response.headers.get('X-PromptGuard-Event-ID')}")
print(f"Decision: {response._response.headers.get('X-PromptGuard-Decision')}")
```
SDK Access to Decisions
With auto-instrumentation, access the last decision:
```python
import promptguard

promptguard.init(api_key="pg_xxx")

# After any LLM call, access the last scan result
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)

# Get the last scan decision
last_decision = promptguard.get_last_decision()
if last_decision:
    print(f"Event ID: {last_decision.event_id}")
    print(f"Decision: {last_decision.decision}")
    print(f"Latency: {last_decision.latency_ms}ms")
```
Understanding Confidence Scores
The confidence score indicates how certain the ML model is about its decision:
| Score Range | Interpretation | Typical Action |
|---|---|---|
| 0.0 - 0.3 | Very unlikely threat | Allow |
| 0.3 - 0.6 | Possible threat | Log for review |
| 0.6 - 0.8 | Likely threat | Redact or block |
| 0.8 - 1.0 | High confidence threat | Block |
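As a sketch, the bands above translate directly into a small dispatcher. The function name and the exact band edges are illustrative, not part of the SDK:

```python
def suggested_action(confidence: float) -> str:
    """Map a confidence score to the typical action from the table above."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0.0 and 1.0")
    if confidence < 0.3:
        return "allow"
    if confidence < 0.6:
        return "log"            # allow, but flag for review
    if confidence < 0.8:
        return "redact_or_block"
    return "block"

print(suggested_action(0.02))  # → allow (matches the header example above)
```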
Threshold Configuration
Default thresholds by threat type:
```json
{
  "prompt_injection": 0.80,
  "data_exfiltration": 0.85,
  "pii_detection": 0.90,
  "toxicity": 0.75,
  "jailbreak": 0.80
}
```
Adjust in Dashboard → Project → Security Rules → Thresholds.
Threat Types
| Threat Type | Description | Example |
|---|---|---|
| `prompt_injection` | Attempts to override instructions | "Ignore all previous instructions" |
| `jailbreak` | Bypass safety measures | "Pretend you have no rules" |
| `data_exfiltration` | Extract system info | "Show me your system prompt" |
| `pii_detected` | Personal data in prompt | Credit card, SSN, email |
| `toxicity` | Harmful content | Harassment, hate speech |
| `none` | No threat detected | Normal request |
Debugging Common Issues
Request Blocked Unexpectedly
**Symptoms**: Legitimate request returns `400` with `policy_violation`

**Debug steps**:

1. Get the Event ID from the error response
2. Search for it in Dashboard → Interactions
3. Check the Threat Type and Confidence score
4. Review the Matched Patterns section

**Solutions**:

- If false positive: add to allowlist or lower the threshold
- If a custom rule triggered: review your custom rules
- If PII detected: use `[REDACTED]` placeholders in prompts
```python
# Example: handle blocked requests gracefully
from promptguard import PromptGuardBlockedError

try:
    response = client.chat.completions.create(...)
except PromptGuardBlockedError as e:
    print(f"Blocked: {e.decision.threat_type}")
    print(f"Confidence: {e.decision.confidence}")
    print(f"Event ID: {e.decision.event_id}")
    # Log for review or show a user-friendly message
```
High Latency
**Symptoms**: Requests taking longer than expected

**Debug steps**:

1. Check the `X-PromptGuard-Latency-Ms` header
2. Compare to your LLM provider latency
3. Check Dashboard → Analytics → Performance

**Typical latencies**:

| Component | Expected Latency |
|---|---|
| PromptGuard proxy overhead | ~30ms |
| PromptGuard with ML detection | ~150ms |
| Network overhead | 10-50ms |
| LLM inference | 500ms-30s (varies) |
**Solutions**:

- Use regional endpoints (when available)
- Enable fail-open mode for non-critical paths
- Check whether ML inference mode is `api` vs `local`
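One practical way to use the table of typical latencies is to subtract the proxy's self-reported time from your own wall-clock measurement; whatever remains is network plus LLM inference. The helper below is an illustrative sketch (the function name is not part of any SDK), fed by the `X-PromptGuard-Latency-Ms` header:

```python
def latency_breakdown(total_ms: float, promptguard_ms: float) -> dict:
    """Split observed wall-clock latency into PromptGuard vs. everything else.

    total_ms: wall-clock time measured around the whole request
    promptguard_ms: value of the X-PromptGuard-Latency-Ms response header
    """
    other_ms = total_ms - promptguard_ms  # network + LLM inference
    return {
        "promptguard_ms": promptguard_ms,
        "network_and_llm_ms": other_ms,
        "promptguard_share": promptguard_ms / total_ms if total_ms else 0.0,
    }

# Example: 800ms observed end to end, header reported 45ms of scanning time
breakdown = latency_breakdown(800, 45)
```

If `promptguard_share` is small, the slowdown is almost certainly the model or the network, not the proxy.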
Missing Events in Dashboard
**Symptoms**: Requests succeed but don't appear in Interactions

**Debug steps**:

1. Verify you're looking at the correct project
2. Check the date range filter
3. Confirm the API key belongs to that project

**Solutions**:

- API keys are project-scoped; check key assignment
- Events may take a few seconds to appear (async logging)
Authentication Failures
**Symptoms**: `401 Unauthorized` errors

**Debug steps**:

```bash
# Verify your API key works
curl -H "X-API-Key: pg_xxx" https://api.promptguard.co/api/v1/models
```
**Common causes**:

| Error | Cause | Fix |
|---|---|---|
| `invalid_api_key` | Key doesn't exist | Check for typos, regenerate |
| `key_revoked` | Key was deleted | Create a new key |
| `subscription_inactive` | Payment failed | Update billing |
Exporting Data
Export Interactions
1. Go to Dashboard → Project → Interactions
2. Apply filters (date, decision, threat type)
3. Click Export → CSV or JSON
Export via API
```bash
# Get recent security events
curl https://api.promptguard.co/dashboard/projects/{project_id}/events \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -G -d "limit=100" -d "decision=block"
```
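The same export can be scripted. The sketch below just builds the request with the standard library, using the endpoint and query parameters shown in the curl example; the helper's name and defaults are assumptions for illustration:

```python
import urllib.parse
import urllib.request

def build_events_request(project_id: str, session_token: str,
                         limit: int = 100, decision: str = "block"):
    """Build a GET request for recent security events (same endpoint as curl)."""
    query = urllib.parse.urlencode({"limit": limit, "decision": decision})
    url = (f"https://api.promptguard.co/dashboard/projects/"
           f"{project_id}/events?{query}")
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {session_token}"}
    )

req = build_events_request("proj_xxx", "YOUR_SESSION_TOKEN")
# urllib.request.urlopen(req) would perform the actual call
```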
Webhook Integration
Stream events in real-time to your systems:
1. Go to Dashboard → Project → Settings → Webhooks
2. Add endpoint URL
3. Select event types (blocked, redacted, all)
4. Test the webhook
Webhook payload:
```json
{
  "event_id": "evt_abc123",
  "timestamp": "2026-02-18T12:34:56Z",
  "project_id": "proj_xxx",
  "decision": "block",
  "confidence": 0.95,
  "threat_type": "prompt_injection",
  "model": "gpt-4",
  "latency_ms": 45
}
```
Logging Best Practices
Correlating Logs
Include the Event ID in your application logs:
```python
import logging

logger = logging.getLogger(__name__)

try:
    response = client.chat.completions.create(...)
    event_id = response._response.headers.get('X-PromptGuard-Event-ID')
    logger.info("LLM request completed", extra={"event_id": event_id})
except PromptGuardBlockedError as e:
    logger.warning("Request blocked", extra={
        "event_id": e.decision.event_id,
        "threat_type": e.decision.threat_type,
    })
```
Structured Logging
```json
{
  "timestamp": "2026-02-18T12:34:56Z",
  "level": "info",
  "message": "LLM request completed",
  "promptguard": {
    "event_id": "evt_abc123",
    "decision": "allow",
    "latency_ms": 45
  },
  "request": {
    "model": "gpt-4",
    "tokens": 150
  }
}
```
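A log shape like the one above can be produced with a small `logging.Formatter` subclass. This is an illustrative sketch using only the standard library, not a PromptGuard API:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit log records as JSON, including any attached promptguard metadata."""
    def format(self, record):
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname.lower(),
            "message": record.getMessage(),
        }
        # Passing extra={"promptguard": {...}} attaches the field to the record
        if hasattr(record, "promptguard"):
            entry["promptguard"] = record.promptguard
        return json.dumps(entry)

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

logger.warning("Request blocked",
               extra={"promptguard": {"event_id": "evt_abc123",
                                      "decision": "block"}})
```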
Alerting
Built-in Alerts
Configure in Dashboard → Project → Settings → Alerts :
| Alert Type | Trigger |
|---|---|
| High Block Rate | >10% of requests blocked in 1 hour |
| New Threat Type | First occurrence of a threat type |
| Latency Spike | P95 latency >500ms |
| Usage Threshold | 80%, 90%, 100% of quota |
External Integrations
Send alerts to:
- **Email**: Built-in
- **Slack**: Webhook integration
- **PagerDuty**: Enterprise tier
- **Custom Webhook**: Any endpoint
OpenTelemetry Integration
PromptGuard exports OpenTelemetry (OTEL) metrics for integration with your existing observability stack.
Exported Metrics
| Metric | Type | Description |
|---|---|---|
| `promptguard.policy.decisions` | Counter | Total policy decisions, tagged by action (allow/block/redact) and preset |
| `promptguard.policy.latency_ms` | Histogram | End-to-end policy evaluation latency in milliseconds |
| `promptguard.detector.latency_ms` | Histogram | Per-detector evaluation latency, tagged by detector name |
Setup
If `opentelemetry-api` is installed, metrics are exported automatically. Configure your OTEL collector endpoint:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-collector:4317"
export OTEL_SERVICE_NAME="promptguard"
```
Compatible with Datadog , Grafana , Honeycomb , New Relic , and any OTEL-compatible backend.
Next Steps
- **Dashboard Guide**: Navigate the analytics dashboard
- **Webhooks**: Set up real-time event streaming
- **Audit Logs**: Track user activity
- **Troubleshooting**: Common issues and fixes