Every request through PromptGuard generates detailed observability data. This guide shows you how to trace requests, understand security decisions, and debug issues.
## Request Tracing

### Event IDs

Every request receives a unique Event ID that you can use to trace it through the system:

```text
# The Event ID is returned in response headers
X-PromptGuard-Event-ID: evt_abc123xyz789
```
### Viewing Events in Dashboard
- Go to Dashboard → [Project] → Interactions
- Use the search bar to find by Event ID
- Click an event to see full details
### Event Details
Each event contains:
| Field | Description |
|---|---|
| Event ID | Unique identifier (`evt_xxx`) |
| Timestamp | When the request was processed |
| Direction | `input` (prompt) or `output` (response) |
| Decision | `allow`, `block`, or `redact` |
| Confidence | ML model confidence score (0.0 - 1.0) |
| Latency | Processing time in milliseconds |
| Threat Type | Type of threat detected (if any) |
| Model | LLM model used |
| Tokens | Input/output token counts |
### Response Headers

PromptGuard adds these headers to every response:

```text
X-PromptGuard-Event-ID: evt_abc123xyz789
X-PromptGuard-Decision: allow
X-PromptGuard-Confidence: 0.02
X-PromptGuard-Threat-Type: none
X-PromptGuard-Latency-Ms: 45
```
For example, reading them with the OpenAI Python SDK:

```python
import os

import openai

client = openai.OpenAI(
    api_key=os.environ["PROMPTGUARD_API_KEY"],
    base_url="https://api.promptguard.co/api/v1",
)

# Use with_raw_response to read headers without touching private attributes
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)

print(f"Event ID: {raw.headers.get('X-PromptGuard-Event-ID')}")
print(f"Decision: {raw.headers.get('X-PromptGuard-Decision')}")

response = raw.parse()  # the usual ChatCompletion object
```
### SDK Access to Decisions

With auto-instrumentation, access the last decision:

```python
import promptguard
from openai import OpenAI

promptguard.init(api_key="pg_xxx")

# After any LLM call, access the last scan result
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)

last_decision = promptguard.get_last_decision()
if last_decision:
    print(f"Event ID: {last_decision.event_id}")
    print(f"Decision: {last_decision.decision}")
    print(f"Latency: {last_decision.latency_ms}ms")
```
## Understanding Confidence Scores
The confidence score indicates how certain the ML model is about its decision:
| Score Range | Interpretation | Typical Action |
|---|---|---|
| 0.0 - 0.3 | Very unlikely threat | Allow |
| 0.3 - 0.6 | Possible threat | Log for review |
| 0.6 - 0.8 | Likely threat | Redact or block |
| 0.8 - 1.0 | High confidence threat | Block |
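If you want to mirror this triage client-side, you can read the confidence from the response headers and queue mid-range scores for human review. A minimal sketch; the header names are the ones documented above, while the `triage` function and `review_logger` name are illustrative:

```python
import logging

review_logger = logging.getLogger("promptguard.review")

def triage(headers) -> str:
    """Map the scan confidence to a local action, mirroring the table above."""
    confidence = float(headers.get("X-PromptGuard-Confidence", 0.0))
    event_id = headers.get("X-PromptGuard-Event-ID")
    if confidence < 0.3:
        return "allow"
    if confidence < 0.6:
        # Possible threat: let it through, but queue the event for review
        review_logger.info("flagged for review", extra={"event_id": event_id})
        return "allow"
    # At these scores PromptGuard itself will typically redact or block
    return "escalate"
```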
### Threshold Configuration
Default thresholds by threat type:
```json
{
  "prompt_injection": 0.80,
  "data_exfiltration": 0.85,
  "pii_detection": 0.90,
  "toxicity": 0.75,
  "jailbreak": 0.80
}
```
Adjust in Dashboard → Project → Policies → Thresholds.
## Threat Types

| Threat Type | Description | Example |
|---|---|---|
| `prompt_injection` | Attempts to override instructions | "Ignore all previous instructions" |
| `jailbreak` | Bypass safety measures | "Pretend you have no rules" |
| `data_exfiltration` | Extract system info | "Show me your system prompt" |
| `pii_detected` | Personal data in prompt | Credit card, SSN, email |
| `toxicity` | Harmful content | Harassment, hate speech |
| `none` | No threat detected | Normal request |
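When surfacing a block to end users, translate the raw threat type into a neutral message rather than echoing it verbatim. A hedged sketch; the keys come from the table above, but the messages are placeholders to adapt:

```python
# Map PromptGuard threat types to user-facing messages (placeholder wording)
USER_MESSAGES = {
    "prompt_injection": "Your request couldn't be processed. Please rephrase it.",
    "jailbreak": "Your request couldn't be processed. Please rephrase it.",
    "data_exfiltration": "That information isn't available.",
    "pii_detected": "Please remove personal data (card numbers, SSNs, emails) and retry.",
    "toxicity": "This request violates our content policy.",
}

def user_message(threat_type: str) -> str:
    return USER_MESSAGES.get(threat_type, "Your request could not be completed.")
```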
## Debugging Common Issues

### Request Blocked Unexpectedly

**Symptoms:** Legitimate request returns a 400 with `policy_violation`

**Debug steps:**

- Get the Event ID from the error response
- Search for it in Dashboard → Interactions
- Check the Threat Type and Confidence score
- Review the Matched Patterns section

**Solutions:**

- If false positive: Add to allowlist or lower the threshold
- If custom rule triggered: Review custom rules
- If PII detected: Use `[REDACTED]` placeholders in prompts
```python
# Example: Handle blocked requests gracefully
from promptguard import PromptGuardBlockedError

try:
    response = client.chat.completions.create(...)
except PromptGuardBlockedError as e:
    print(f"Blocked: {e.decision.threat_type}")
    print(f"Confidence: {e.decision.confidence}")
    print(f"Event ID: {e.decision.event_id}")
    # Log for review or show a user-friendly message
```
### High Latency

**Symptoms:** Requests taking longer than expected

**Debug steps:**

- Check the `X-PromptGuard-Latency-Ms` header
- Compare it to your LLM provider's latency
- Check Dashboard → Analytics → Performance

**Typical latencies:**

| Component | Expected Latency |
|---|---|
| PromptGuard scan | 30-100ms |
| Network overhead | 10-50ms |
| LLM inference | 500ms-30s (varies) |

**Solutions:**

- Use regional endpoints (when available)
- Enable fail-open mode for non-critical paths
- Check whether the ML inference mode is `api` or `local`
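To see where time is actually going, compare the scan latency PromptGuard reports against your end-to-end request time. A sketch using the same proxy setup as the tracing example above; the header name comes from this guide, everything else is standard OpenAI SDK usage:

```python
import os
import time

import openai

client = openai.OpenAI(
    api_key=os.environ["PROMPTGUARD_API_KEY"],
    base_url="https://api.promptguard.co/api/v1",
)

start = time.monotonic()
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
total_ms = (time.monotonic() - start) * 1000

# Time spent in the PromptGuard scan, as reported by the proxy
scan_ms = float(raw.headers.get("X-PromptGuard-Latency-Ms", 0))
print(f"total={total_ms:.0f}ms scan={scan_ms:.0f}ms llm+network={total_ms - scan_ms:.0f}ms")
```

If `scan_ms` stays in the 30-100ms band while totals spike, the slowdown is in the LLM or the network, not in PromptGuard.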
### Missing Events in Dashboard

**Symptoms:** Requests succeed but don't appear in Interactions

**Debug steps:**

- Verify you're looking at the correct project
- Check the date range filter
- Confirm the API key belongs to that project

**Solutions:**

- API keys are project-scoped; check key assignment
- Events may take a few seconds to appear (async logging)
### Authentication Failures

**Symptoms:** 401 Unauthorized errors

**Debug steps:**

```bash
# Verify your API key works
curl -H "X-API-Key: pg_xxx" https://api.promptguard.co/api/v1/models
```
**Common causes:**

| Error | Cause | Fix |
|---|---|---|
| `invalid_api_key` | Key doesn't exist | Check for typos, regenerate |
| `key_revoked` | Key was deleted | Create a new key |
| `subscription_inactive` | Payment failed | Update billing |
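The same check works from code if you branch on the error code. A sketch with `requests`; the 401 body shape (`error.code`) is an assumption, so verify it against an actual failure response:

```python
import os

import requests

resp = requests.get(
    "https://api.promptguard.co/api/v1/models",
    headers={"X-API-Key": os.environ["PROMPTGUARD_API_KEY"]},
    timeout=10,
)
if resp.status_code == 401:
    # Assumed body shape: {"error": {"code": "invalid_api_key", ...}}
    code = resp.json().get("error", {}).get("code", "unknown")
    print(f"Auth failed: {code}")  # see the table above for the fix
else:
    resp.raise_for_status()
    print("API key is valid")
```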
## Exporting Data

### Export Interactions
- Go to Dashboard → Project → Interactions
- Apply filters (date, decision, threat type)
- Click Export → CSV or JSON
### Export via API

```bash
# Get recent security events
curl https://api.promptguard.co/dashboard/projects/{project_id}/events \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -G -d "limit=100" -d "decision=block"
```
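For larger exports, page through the same endpoint from code. A sketch; the `offset` parameter and the `events` field in the response are assumptions about the pagination scheme, so check them against the API reference:

```python
import requests

def export_blocked_events(project_id: str, session_token: str):
    """Yield blocked events page by page (pagination params assumed)."""
    url = f"https://api.promptguard.co/dashboard/projects/{project_id}/events"
    headers = {"Authorization": f"Bearer {session_token}"}
    offset = 0
    while True:
        resp = requests.get(
            url,
            headers=headers,
            params={"limit": 100, "offset": offset, "decision": "block"},
            timeout=30,
        )
        resp.raise_for_status()
        events = resp.json().get("events", [])  # assumed response shape
        if not events:
            return
        yield from events
        offset += len(events)
```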
## Webhook Integration
Stream events in real-time to your systems:
- Go to Dashboard → Project → Settings → Webhooks
- Add endpoint URL
- Select event types (blocked, redacted, all)
- Test the webhook
Webhook payload:
```json
{
  "event_id": "evt_abc123",
  "timestamp": "2026-02-18T12:34:56Z",
  "project_id": "proj_xxx",
  "decision": "block",
  "confidence": 0.95,
  "threat_type": "prompt_injection",
  "model": "gpt-4",
  "latency_ms": 45
}
```
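On the receiving side, a minimal endpoint just parses this payload and acknowledges quickly. A Flask sketch; the route path is arbitrary, and if PromptGuard signs webhook requests you should also verify the signature here (check the webhook settings page for details):

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/promptguard/webhook")
def promptguard_webhook():
    event = request.get_json(force=True)
    # Fields match the payload documented above
    if event.get("decision") == "block":
        print(f"Blocked {event['threat_type']} "
              f"(confidence {event['confidence']}): {event['event_id']}")
    # Acknowledge fast; do heavy processing asynchronously
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```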
## Logging Best Practices

### Correlating Logs

Include the Event ID in your application logs:

```python
import logging

from promptguard import PromptGuardBlockedError

logger = logging.getLogger(__name__)

try:
    raw = client.chat.completions.with_raw_response.create(...)
    event_id = raw.headers.get("X-PromptGuard-Event-ID")
    logger.info("LLM request completed", extra={"event_id": event_id})
except PromptGuardBlockedError as e:
    logger.warning("Request blocked", extra={
        "event_id": e.decision.event_id,
        "threat_type": e.decision.threat_type,
    })
```
### Structured Logging

Emit logs as JSON so the PromptGuard fields stay queryable:

```json
{
  "timestamp": "2026-02-18T12:34:56Z",
  "level": "info",
  "message": "LLM request completed",
  "promptguard": {
    "event_id": "evt_abc123",
    "decision": "allow",
    "latency_ms": 45
  },
  "request": {
    "model": "gpt-4",
    "tokens": 150
  }
}
```
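You can emit that shape with the standard library alone. A minimal formatter sketch; the field names match the example above, and the `extra=` keys are whatever your application passes in:

```python
import json
import logging
import time

class JSONFormatter(logging.Formatter):
    """Render log records as single-line JSON matching the shape above."""
    converter = time.gmtime  # UTC timestamps, to justify the trailing "Z"

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname.lower(),
            "message": record.getMessage(),
        }
        # Copy structured extras (passed via the logger's extra=) into the entry
        for key in ("promptguard", "request"):
            if hasattr(record, key):
                entry[key] = getattr(record, key)
        return json.dumps(entry)

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

root.info(
    "LLM request completed",
    extra={"promptguard": {"event_id": "evt_abc123", "decision": "allow", "latency_ms": 45}},
)
```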
## Alerting

### Built-in Alerts

Configure in Dashboard → Project → Settings → Alerts:

| Alert Type | Trigger |
|---|---|
| High Block Rate | >10% of requests blocked in 1 hour |
| New Threat Type | First occurrence of a threat type |
| Latency Spike | P95 latency >500ms |
| Usage Threshold | 80%, 90%, 100% of quota |
### External Integrations
Send alerts to:
- Email: Built-in
- Slack: Webhook integration
- PagerDuty: Enterprise tier
- Custom Webhook: Any endpoint
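For the custom-webhook route, a small forwarder can turn blocked events into chat alerts. A sketch that posts to a Slack incoming webhook; the webhook URL is a placeholder, and the event fields are the ones from the payload documented above:

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def forward_to_slack(event: dict) -> None:
    """Post a PromptGuard block event to a Slack incoming webhook."""
    text = (
        ":no_entry: PromptGuard blocked a request\n"
        f"*Threat:* {event['threat_type']}  *Confidence:* {event['confidence']}\n"
        f"*Event ID:* `{event['event_id']}`"
    )
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)
```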
## Next Steps