The PromptGuard Cursor plugin gives the Cursor agent real-time LLM security capabilities: scanning for threats, detecting unprotected SDK usage, redacting PII, and enforcing security best practices as you code.
Installation
One-click install
Use the deep link to install the plugin and MCP server in one step:

Prerequisites
The plugin requires the PromptGuard CLI for MCP server functionality:
- macOS (Homebrew)
- Linux / macOS (Binary)
- Cargo
What’s included
The plugin bundles five components that work together:

MCP Server (6 tools)
The CLI’s built-in MCP server (`promptguard mcp`) exposes tools that the Cursor agent can call directly:
| Tool | What it does |
|---|---|
| `promptguard_auth` | Authenticate with PromptGuard. Opens the dashboard in the browser so you can copy your API key, then saves it locally. The agent calls this automatically when other tools report you’re not authenticated. |
| `promptguard_logout` | Log out by clearing the locally stored API key and configuration. |
| `promptguard_scan_text` | Scan any text for prompt injection, jailbreaks, PII leakage, and toxic content. Returns a decision (allow/block), a confidence score, and threat details. |
| `promptguard_scan_project` | Scan a directory for unprotected LLM SDK usage across OpenAI, Anthropic, Cohere, Gemini, Bedrock, and more. |
| `promptguard_redact` | Redact PII (emails, phone numbers, SSNs, credit card numbers, API keys) from text before it is sent to an LLM. |
| `promptguard_status` | Check whether PromptGuard is configured, which providers are active, and which key type is in use. |
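To make the `promptguard_redact` behavior concrete, here is a minimal standalone sketch of typed-placeholder PII redaction. The patterns and placeholder format are illustrative assumptions, not PromptGuard's actual implementation, which handles many more PII types far more robustly:

```python
import re

# Simplified patterns for illustration; real PII detection is more robust.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a typed placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com or 555-867-5309."))
# → Contact [EMAIL] or [PHONE].
```

The typed placeholders keep the prompt readable for the LLM while ensuring the raw values never leave your environment.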
Always-on Rule: Secure LLM Usage
When the agent writes code that imports any supported LLM SDK, this rule automatically guides it to include `promptguard.init()` with proper configuration. No manual invocation is needed.
Skill: Secure LLM Integration
A step-by-step playbook the agent follows when you ask it to add PromptGuard or build AI features:
- Detect the project language and LLM providers
- Choose the right integration method (auto-instrumentation, Guard API, or HTTP proxy)
- Install the SDK and add initialization
- Configure security policies
- Verify the setup
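The first step above, detecting which LLM providers a project uses, can be sketched as a scan of source files for known SDK imports. The marker list and the Python-only file traversal are simplifying assumptions, not the plugin's actual detection logic:

```python
from pathlib import Path

# Import substrings that signal an LLM SDK is in use (illustrative subset).
PROVIDER_MARKERS = {
    "openai": "OpenAI",
    "anthropic": "Anthropic",
    "cohere": "Cohere",
    "google.generativeai": "Gemini",
}

def detect_providers(root: str) -> set[str]:
    """Return the set of LLM providers imported anywhere under root."""
    found = set()
    for path in Path(root).rglob("*.py"):
        source = path.read_text(errors="ignore")
        for marker, provider in PROVIDER_MARKERS.items():
            if f"import {marker}" in source:
                found.add(provider)
    return found
```

For example, a project containing a file with `import openai` would report `{"OpenAI"}`, which then drives the choice of integration method in the next step.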
Commands
| Command | Description |
|---|---|
| `/promptguard-scan` | Find unprotected LLM calls, hardcoded secrets, and misconfigurations. Reports findings in a severity-ranked table and offers to fix them. |
| `/promptguard-secure` | Add PromptGuard to the project end to end: detect the language, install the SDK, configure initialization, and set up environment variables. |
Agent: LLM Security Reviewer
A specialized code reviewer that focuses on LLM-specific threats:
- Prompt injection (direct and indirect)
- PII leakage in prompts and responses
- Agent tool abuse (SQL injection, SSRF, path traversal via LLM tools)
- Secrets exposure in LLM context
- Unsafe output handling (XSS via LLM responses)
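For the last item, the standard defense is escaping model output before it is rendered as HTML, so that markup injected into a response cannot execute in the page. A minimal sketch of the idea (illustrative only, not PromptGuard's mechanism):

```python
import html

def render_llm_output(response: str) -> str:
    """Escape model output before embedding it in HTML, so a response
    containing markup like <script> cannot execute in the page."""
    return html.escape(response)

print(render_llm_output('<script>alert("pwned")</script>'))
# → &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;
```

The reviewer flags places where raw model output reaches the DOM without this kind of escaping or sanitization.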
Manual MCP setup
If you prefer to configure the MCP server manually instead of using the plugin, add it to `.cursor/mcp.json` in your project or to your global Cursor MCP config:
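A typical entry might look like the following sketch; the server name is an assumption, while the command comes from the CLI's `promptguard mcp` subcommand described above:

```json
{
  "mcpServers": {
    "promptguard": {
      "command": "promptguard",
      "args": ["mcp"]
    }
  }
}
```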
Using with other editors
The MCP server works with any MCP-compatible AI editor:
- Claude Code
- Windsurf
- Any MCP client