Webhooks let your application receive real-time notifications when PromptGuard detects security events — threats blocked, PII redacted, usage thresholds crossed, and more.
Overview
When you configure a webhook for a project, PromptGuard sends HTTP POST requests to your endpoint whenever specific security events occur. This lets you:
Log security events to your own systems
Trigger alerts in Slack, PagerDuty, or other tools
Build custom dashboards and analytics
Audit AI interactions in real-time
Setup
Via Dashboard
Go to app.promptguard.co
Select your project
Navigate to Settings or Project Overview
Enter your Webhook URL
Save
Delivery Monitoring
Track webhook delivery status in the dashboard:
Navigate to your project → Webhooks
View delivery history with status, attempts, and errors
Manually retry failed deliveries
The delivery status page shows:
Status: Pending, delivered, or failed
Attempts: Number of delivery attempts (auto-retries with exponential backoff)
Response Status: HTTP status code from your endpoint
Error Details: Last error message for failed deliveries
Via API
curl -X PATCH https://api.promptguard.co/dashboard/projects/{project_id}/webhook \
  -H "Cookie: session=YOUR_SESSION" \
  -H "Content-Type: application/json" \
  -d '{
    "webhook_url": "https://your-app.com/webhooks/promptguard",
    "webhook_enabled": true
  }'
Event Types
| Event | Triggered When |
|---|---|
| `threat.blocked` | A request is blocked by security policy |
| `threat.detected` | A threat is detected (even if allowed) |
| `pii.redacted` | PII is detected and redacted from content |
| `usage.threshold` | Usage crosses 80% or 100% of monthly quota |
| `usage.overage` | Usage exceeds monthly quota (Scale plan) |
All webhook events follow this structure:
{
  "event": "threat.blocked",
  "timestamp": "2025-02-08T14:30:00Z",
  "project_id": "proj_abc123",
  "data": {
    "event_id": "evt_xyz789",
    "decision": "block",
    "threat_type": "prompt_injection",
    "confidence": 0.95,
    "reason": "Instruction override pattern detected",
    "request_metadata": {
      "model": "gpt-5-nano",
      "ip_address": "203.0.113.42"
    }
  }
}
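When consuming these payloads, it helps to validate the envelope before dispatching on the event type. Here is a minimal Python sketch; the `WebhookEvent` type and `parse_event` helper are our own illustrations, not part of any PromptGuard SDK:

```python
import json
from typing import Any, TypedDict

class WebhookEvent(TypedDict):
    event: str
    timestamp: str
    project_id: str
    data: dict[str, Any]

def parse_event(raw: str) -> WebhookEvent:
    """Parse a webhook body and check the common envelope fields."""
    payload = json.loads(raw)
    for key in ("event", "timestamp", "project_id", "data"):
        if key not in payload:
            raise ValueError(f"missing field: {key}")
    return payload

sample = (
    '{"event": "threat.blocked", "timestamp": "2025-02-08T14:30:00Z", '
    '"project_id": "proj_abc123", "data": {"event_id": "evt_xyz789", '
    '"decision": "block"}}'
)
evt = parse_event(sample)
```

Validating the envelope up front means your per-event handlers can assume the four top-level fields are present.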
Threat Blocked
{
  "event": "threat.blocked",
  "timestamp": "2025-02-08T14:30:00Z",
  "project_id": "proj_abc123",
  "data": {
    "event_id": "evt_xyz789",
    "decision": "block",
    "threat_type": "prompt_injection",
    "confidence": 0.95,
    "reason": "Instruction override pattern detected"
  }
}
PII Redacted
{
  "event": "pii.redacted",
  "timestamp": "2025-02-08T14:32:00Z",
  "project_id": "proj_abc123",
  "data": {
    "event_id": "evt_abc456",
    "pii_types": ["email", "phone"],
    "redaction_count": 2,
    "direction": "input"
  }
}
Usage Threshold
{
  "event": "usage.threshold",
  "timestamp": "2025-02-08T14:35:00Z",
  "project_id": "proj_abc123",
  "data": {
    "current_usage": 80500,
    "monthly_limit": 100000,
    "percentage": 80.5,
    "plan": "pro"
  }
}
Handling Webhooks
Example Server (Node.js)
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhooks/promptguard', (req, res) => {
  const { event, data, timestamp } = req.body;

  switch (event) {
    case 'threat.blocked':
      console.log(`[BLOCKED] ${data.threat_type} (confidence: ${data.confidence})`);
      // Send to Slack, PagerDuty, etc.
      break;
    case 'pii.redacted':
      console.log(`[PII] Redacted ${data.redaction_count} items: ${data.pii_types.join(', ')}`);
      break;
    case 'usage.threshold':
      console.log(`[USAGE] ${data.percentage}% of monthly quota used`);
      if (data.percentage >= 90) {
        // Alert team about approaching limit
      }
      break;
  }

  res.status(200).json({ received: true });
});

app.listen(3000);
Example Server (Python)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhooks/promptguard', methods=['POST'])
def handle_webhook():
    payload = request.json
    event = payload['event']
    data = payload['data']

    if event == 'threat.blocked':
        print(f"[BLOCKED] {data['threat_type']} (confidence: {data['confidence']})")
        # Send to logging/alerting system
    elif event == 'pii.redacted':
        print(f"[PII] Redacted {data['redaction_count']} items")
    elif event == 'usage.threshold':
        print(f"[USAGE] {data['percentage']}% of quota used")
        if data['percentage'] >= 90:
            send_team_alert("Approaching monthly quota limit")

    return jsonify({"received": True}), 200
Webhook Signing (Enterprise)
PromptGuard signs webhook payloads with HMAC-SHA256 using your project’s webhook secret. Verify the signature to ensure payloads are authentic:
import hmac
import hashlib

def verify_webhook(payload_bytes, signature, secret):
    expected = hmac.new(
        secret.encode(),
        payload_bytes,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

# In your webhook handler:
signature = request.headers.get("X-PromptGuard-Signature")
if not verify_webhook(request.data, signature, YOUR_WEBHOOK_SECRET):
    return "Invalid signature", 401
The signature is sent in the X-PromptGuard-Signature header with format sha256=<hex_digest>.
Best Practices
Respond quickly — Return a 200 status within 5 seconds. Process events asynchronously if needed.
Handle duplicates — Use event_id to deduplicate events in case of retries.
Verify signatures — Always verify the X-PromptGuard-Signature header to ensure webhook authenticity.
Use HTTPS — Always use HTTPS endpoints for webhook delivery.
Log everything — Store raw webhook payloads for debugging and audit trails.
Monitor failures — Track webhook delivery failures in your monitoring system.
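The first two practices combine into one pattern: acknowledge immediately, deduplicate on `event_id`, and do the slow work on a background worker. A minimal in-process Python sketch; the `accept` helper, the in-memory `seen_event_ids` set, and the `queue.Queue` worker are illustrative, and a production system would typically use a durable queue and store instead:

```python
import queue
import threading

events: "queue.Queue[dict]" = queue.Queue()
seen_event_ids: set[str] = set()  # in-memory dedupe; use a real store in production

def accept(payload: dict) -> int:
    """Called from the webhook handler: dedupe, enqueue, return fast."""
    event_id = payload.get("data", {}).get("event_id")
    if event_id in seen_event_ids:
        return 200  # duplicate delivery from a retry: acknowledge, skip
    if event_id:
        seen_event_ids.add(event_id)
    events.put(payload)
    return 200  # respond 200 before any slow processing happens

def worker() -> None:
    while True:
        payload = events.get()
        # ... slow processing: logging, alerting, analytics ...
        events.task_done()

threading.Thread(target=worker, daemon=True).start()
```

Returning 200 even for duplicates is deliberate: a non-2xx response would trigger another retry of an event you have already handled.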
Retry Policy
If your endpoint returns a non-2xx status code or times out, PromptGuard will retry delivery:
| Attempt | Delay |
|---|---|
| 1st retry | 30 seconds |
| 2nd retry | 2 minutes |
| 3rd retry | 10 minutes |
After 3 failed retries (4 total attempts), the delivery is marked as failed. Check your dashboard for delivery failures. Failed deliveries are tracked in the webhook_deliveries table with error details.
Custom Policy Webhooks
In addition to receiving event notifications, you can use webhooks as custom policy hooks in the PromptGuard guard pipeline. This lets you run your own verdict logic on every scan without modifying the detection engine.
When a custom policy webhook is configured, PromptGuard calls your endpoint during the scan pipeline and uses your response to decide whether to allow, block, or redact the content.
How It Works
PromptGuard runs its built-in threat detectors on the content.
Before returning a final decision, it sends a POST request to your custom policy webhook with the scan context and any threats already detected.
Your endpoint evaluates the content and returns a verdict.
PromptGuard incorporates your verdict into the final decision.
Your endpoint receives a POST request with this JSON body:
{
  "content": "the scanned text",
  "direction": "input",
  "model": "gpt-5-nano",
  "event_id": "evt_abc123",
  "threats_detected": [
    {
      "type": "prompt_injection",
      "confidence": 0.95
    }
  ]
}
| Field | Type | Description |
|---|---|---|
| `content` | string | The text being scanned |
| `direction` | string | `"input"` (user → model) or `"output"` (model → user) |
| `model` | string | The AI model being used |
| `event_id` | string | Unique identifier for this scan event |
| `threats_detected` | array | Threats already found by built-in detectors |
Your endpoint must return a JSON response:
{
  "verdict": "allow",
  "reason": "Content passes custom compliance check",
  "redacted_content": null
}
| Field | Type | Required | Description |
|---|---|---|---|
| `verdict` | string | Yes | `"allow"`, `"block"`, or `"redact"` |
| `reason` | string | No | Human-readable explanation for the verdict |
| `redacted_content` | string | No | Required when verdict is `"redact"` — the sanitized content |
Example: Custom Compliance Server
from flask import Flask, request, jsonify

app = Flask(__name__)

BLOCKED_TOPICS = ["internal-codename-project-x", "unreleased-feature"]

@app.route('/policy-webhook', methods=['POST'])
def policy_hook():
    payload = request.json
    content = payload["content"].lower()

    for topic in BLOCKED_TOPICS:
        if topic in content:
            return jsonify({
                "verdict": "block",
                "reason": f"Content references restricted topic: {topic}"
            }), 200

    return jsonify({
        "verdict": "allow",
        "reason": "Content passes custom policy"
    }), 200
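A redact verdict works the same way but must include `redacted_content`. A sketch of the verdict-building logic, with an illustrative masking rule (the `SEC-####` ticket pattern is an example of ours, not a PromptGuard default):

```python
import re

# Illustrative rule: mask anything that looks like an internal ticket ID
# such as "SEC-1234". The pattern is an example, not a built-in policy.
TICKET_RE = re.compile(r"\bSEC-\d+\b")

def redact_verdict(content: str) -> dict:
    """Build the JSON body for a redact (or allow) verdict."""
    redacted, count = TICKET_RE.subn("[REDACTED-TICKET]", content)
    if count == 0:
        return {"verdict": "allow", "reason": "No restricted identifiers found"}
    return {
        "verdict": "redact",
        "reason": f"Masked {count} internal ticket reference(s)",
        "redacted_content": redacted,  # required when verdict is "redact"
    }
```

In a Flask endpoint like the one above, you would return `jsonify(redact_verdict(payload["content"])), 200`.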
Failure Behavior
Custom policy webhooks have a 3-second timeout by default. If your endpoint is unreachable or returns an error:
Fail open (default) : The request is allowed through. The webhook error is logged but does not block the user.
Fail closed : The request is blocked. Enable this for high-security environments where you require your custom policy to run on every request.
Configure the failure mode in your project settings or via the API.
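For intuition, the two failure modes can be pictured from the calling side roughly like this. This is a sketch of the documented behavior, not PromptGuard's actual implementation, and `call_policy_webhook` is a hypothetical name:

```python
import json
import urllib.error
import urllib.request

def call_policy_webhook(url: str, payload: dict, fail_closed: bool = False,
                        timeout: float = 3.0) -> dict:
    """Return the endpoint's verdict, falling back per the failure mode."""
    body = json.dumps(payload).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            verdict = json.load(resp)
    except (urllib.error.URLError, TimeoutError, ValueError):
        # Endpoint unreachable, timed out, non-2xx, or invalid JSON.
        fallback = "block" if fail_closed else "allow"
        return {"verdict": fallback, "reason": "policy webhook unavailable"}
    if verdict.get("verdict") not in ("allow", "block", "redact"):
        verdict["verdict"] = "allow"  # invalid values default to allow
    return verdict
```

The default 3-second timeout bounds how long a slow policy endpoint can hold up a scan; fail-open trades policy coverage for availability, fail-closed does the opposite.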
Best Practices for Policy Webhooks
Keep it fast — Your endpoint is in the hot path of every scan. Aim for sub-100ms response times.
Return valid verdicts — Only "allow", "block", and "redact" are accepted. Invalid values default to "allow".
Use fail-closed sparingly — Only enable fail-closed mode when your policy check is mandatory for compliance.
Log decisions — Record your webhook’s verdicts for auditing and debugging.
Handle all fields — Your endpoint should gracefully handle any combination of threat types in threats_detected.
Next Steps
Monitoring Dashboard View security events and analytics
Usage Tracking Monitor your API usage