# Agent Security API
The Agent Security API protects AI agents by validating tool calls before execution and detecting anomalous behavior patterns.

## Why Agent Security?
AI agents with tool access can be exploited to:

- Execute dangerous commands: Shell injection, file system manipulation
- Escalate privileges: Accessing restricted resources
- Exfiltrate data: Sending data to external endpoints
- Behave erratically: Unusual patterns indicating compromise
## Endpoints
### Validate Tool Call
Validate a tool call before allowing execution.
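A minimal sketch of how a caller might gate execution on the validate endpoint's response. The request and response field names (`agent_id`, `tool_name`, `allowed`, `risk_score`, and so on) are illustrative assumptions, not a documented schema:

```python
import json

# Hypothetical request body for the validate endpoint (field names assumed).
request_body = {
    "agent_id": "agent-123",
    "session_id": "sess-456",
    "tool_name": "execute_shell",
    "arguments": {"command": "rm -rf /tmp/cache"},
}

# A sample response shape the endpoint might return (assumed).
sample_response = json.loads(
    '{"allowed": false, "risk_level": "critical", "risk_score": 0.92}'
)

def should_execute(response: dict) -> bool:
    """Execute the tool call only when the API explicitly allows it.

    Failing closed (missing or malformed 'allowed' means block) keeps a
    broken response from letting a dangerous call through.
    """
    return response.get("allowed", False) is True
```

For the sample response above, `should_execute` returns `False` and the shell command is never run.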
### Analyze Behavior

Analyze agent behavior for anomalies.
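A sketch of acting on an analyze-behavior result. The response fields (`anomaly_score`, `anomalies`) and the review threshold are assumptions for illustration:

```python
# Sample response shape the analyze endpoint might return (assumed).
sample_report = {
    "agent_id": "agent-123",
    "anomaly_score": 0.73,
    "anomalies": ["burst_of_tool_calls", "new_external_endpoint"],
}

# Illustrative threshold, not an API-defined constant.
ANOMALY_REVIEW_THRESHOLD = 0.6

def needs_review(report: dict, threshold: float = ANOMALY_REVIEW_THRESHOLD) -> bool:
    """Flag the agent for human review when its anomaly score is high."""
    return report.get("anomaly_score", 0.0) >= threshold
```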
### Get Agent Stats

Get statistics for a specific agent.
## SDK Usage

- Python
- Node.js
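The SDK snippets are not reproduced here; as a stand-in, this is a hypothetical shape a thin Python client could take. The class name, method name, endpoint path, and auth header are all assumptions, not the actual SDK interface:

```python
import json
import urllib.request

class AgentSecurityClient:
    """Hypothetical thin client; names and paths are illustrative only."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _post(self, path: str, payload: dict) -> dict:
        req = urllib.request.Request(
            f"{self.base_url}{path}",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def validate_tool_call(self, agent_id: str, tool_name: str, arguments: dict) -> dict:
        # Assumed path; the analyze and stats calls would follow the same pattern.
        return self._post("/v1/validate", {
            "agent_id": agent_id,
            "tool_name": tool_name,
            "arguments": arguments,
        })
```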
## Risk Levels
| Level | Score Range | Action |
|---|---|---|
| safe | 0.0 - 0.2 | Allow |
| low | 0.2 - 0.4 | Allow with logging |
| medium | 0.4 - 0.6 | May require review |
| high | 0.6 - 0.8 | Block or require approval |
| critical | 0.8 - 1.0 | Always block |
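The bands above can be expressed as a simple mapping. The table's band edges are shared (e.g. 0.2 ends `safe` and starts `low`); treating each boundary as belonging to the higher band is an assumption here:

```python
def risk_level(score: float) -> str:
    """Map a risk score to the level bands in the table above.

    Boundary scores are assigned to the higher (more cautious) band,
    which is an assumption, not documented behavior.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("risk score must be in [0.0, 1.0]")
    if score < 0.2:
        return "safe"
    if score < 0.4:
        return "low"
    if score < 0.6:
        return "medium"
    if score < 0.8:
        return "high"
    return "critical"
```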
## Blocked Tools (Default)
These tools are blocked by default:

- `execute_shell`, `run_command`, `bash`, `system`
- `delete_file`, `rm`, `rmdir`
- `kill_process`, `terminate`
- `send_email`, `http_post` (without approval)
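A local sketch of how this default blocklist could be enforced before a tool call ever reaches the API. The set contents come from the list above; the function name and the `approved` flag are assumptions:

```python
# Tool names taken from the default blocklist above.
DEFAULT_BLOCKED_TOOLS = {
    "execute_shell", "run_command", "bash", "system",
    "delete_file", "rm", "rmdir",
    "kill_process", "terminate",
}

# Blocked unless the call has been explicitly approved.
APPROVAL_REQUIRED_TOOLS = {"send_email", "http_post"}

def check_tool(name: str, approved: bool = False) -> str:
    """Return 'block' or 'allow' for a tool name per the default policy."""
    if name in DEFAULT_BLOCKED_TOOLS:
        return "block"
    if name in APPROVAL_REQUIRED_TOOLS and not approved:
        return "block"
    return "allow"
```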
## Best Practices
- Validate every tool call: Don’t skip validation for “safe” tools
- Use sessions: Group related calls for better behavior analysis
- Review anomalies: Investigate when `anomaly_score` is high
- Set up alerts: Monitor for patterns indicating compromise