Scan messages for security threats without proxying to an LLM.
This is the primary endpoint for auto-instrumentation and framework callback integrations. It runs the same policy engine, ML ensemble, preset configuration, custom rules, and entitlements checks as the proxy pipeline.
Use direction="input" before sending messages to the LLM and
direction="output" after receiving a response.
Returns a decision of allow, block, or redact along with
detailed threat information and optional redacted messages.
PromptGuard API key for developer endpoints. Keys start with pg_live_ and are created in the dashboard.
Request body for the guard endpoint.
Messages to scan (OpenAI-style message array)
1Scan direction: 'input' (pre-LLM) or 'output' (post-LLM)
^(input|output)$Model being used (for logging)
Optional framework context
Successful Response
Response from the guard endpoint.
Policy decision: 'allow', 'block', or 'redact'
Unique event identifier for tracking
Confidence score of the decision
Processing time in milliseconds
Primary threat type detected
Redacted messages (only present when decision='redact')
Detailed threat breakdown