The PromptGuard Cursor plugin gives the Cursor agent real-time LLM security capabilities: scanning for threats, detecting unprotected SDK usage, redacting PII, and enforcing security best practices as you code.
## Installation

### One-click install

Use the deep link to install the plugin and MCP server in one step:

```text
cursor://anysphere.cursor-deeplink/mcp/install?name=promptguard&config=eyJjb21tYW5kIjoicHJvbXB0Z3VhcmQiLCJhcmdzIjpbIm1jcCIsIi10Iiwic3RkaW8iXX0=
```

### From Cursor Marketplace

- Open Cursor
- Go to Settings > Plugins
- Search for “PromptGuard”
- Click Install
## Prerequisites

The plugin requires the PromptGuard CLI for MCP server functionality.

**macOS (Homebrew)**

```shell
brew tap promptguard/tap
brew install promptguard
```

**Linux / macOS (binary)**

```shell
curl -fsSL https://raw.githubusercontent.com/acebot712/promptguard-cli/main/install.sh | sh
```

**Cargo**

```shell
cargo install promptguard-cli
```
Then configure your API key:

```shell
promptguard init --api-key pg_sk_prod_YOUR_KEY
```
## What’s included

The plugin bundles five components that work together.

### MCP server tools

The CLI’s built-in MCP server (`promptguard mcp`) exposes tools that the Cursor agent can call directly:

| Tool | What it does |
|------|--------------|
| `promptguard_scan_text` | Scan any text for prompt injection, jailbreaks, PII leakage, and toxic content. Returns a decision (allow/block), a confidence score, and threat details. |
| `promptguard_scan_project` | Scan a directory for unprotected LLM SDK usage across OpenAI, Anthropic, Cohere, Gemini, Bedrock, and more. |
| `promptguard_redact` | Redact PII (emails, phone numbers, SSNs, credit cards, API keys) from text before sending it to an LLM. |
| `promptguard_status` | Check whether PromptGuard is configured, which providers are active, and which key type is in use. |
### Always-on Rule: Secure LLM Usage

When the agent writes code that imports any supported LLM SDK, this rule automatically guides it to include `promptguard.init()` with proper configuration. No manual invocation needed.
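As a rough illustration, the rule steers the agent toward code shaped like the sketch below. Only `promptguard.init()` itself comes from the rule description; the `promptguard` Python package name, the `api_key` parameter, and the environment variable name are assumptions for the example:

```python
import os


def create_protected_client():
    """Sketch: run PromptGuard initialization before constructing an LLM client."""
    import promptguard  # assumed package name
    from openai import OpenAI

    # Assumed signature; the rule only requires that init() runs with
    # proper configuration before any LLM calls are made.
    promptguard.init(api_key=os.environ["PROMPTGUARD_API_KEY"])
    return OpenAI()
```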
### Skill: Secure LLM Integration
A step-by-step playbook the agent follows when you ask it to add PromptGuard or build AI features:
- Detect project language and LLM providers
- Choose the right integration method (auto-instrumentation, Guard API, or HTTP proxy)
- Install the SDK and add initialization
- Configure security policies
- Verify the setup
Includes a full threat model reference covering prompt injection, PII leakage, data exfiltration, agent tool abuse, and more.
### Commands

| Command | Description |
|---------|-------------|
| `/promptguard-scan` | Find unprotected LLM calls, hardcoded secrets, and misconfigurations. Reports findings in a severity-ranked table and offers to fix them. |
| `/promptguard-secure` | Add PromptGuard to the project end-to-end: detect the language, install the SDK, configure initialization, and set up environment variables. |
### Agent: LLM Security Reviewer
A specialized code reviewer that focuses on LLM-specific threats:
- Prompt injection (direct and indirect)
- PII leakage in prompts and responses
- Agent tool abuse (SQL injection, SSRF, path traversal via LLM tools)
- Secrets exposure in LLM context
- Unsafe output handling (XSS via LLM responses)
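To illustrate the last item: one standard mitigation the reviewer would flag as missing is escaping model output before it reaches HTML. A minimal sketch using only the Python standard library (the function name is ours, not part of PromptGuard):

```python
import html


def render_llm_response(text: str) -> str:
    # Never interpolate raw model output into HTML: a response containing
    # <script> tags would otherwise execute in the user's browser.
    return html.escape(text)


print(render_llm_response('<script>alert("x")</script>'))
# → &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
```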
## Manual MCP setup

If you prefer to configure the MCP server manually instead of using the plugin, add the following to `.cursor/mcp.json` in your project, or to your global Cursor MCP config:

```json
{
  "mcpServers": {
    "promptguard": {
      "command": "promptguard",
      "args": ["mcp", "-t", "stdio"]
    }
  }
}
```
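If you already have other MCP servers configured, a small script can merge the entry in instead of overwriting the file. A sketch using only the standard library; the default config path is an assumption about your project layout:

```python
import json
from pathlib import Path


def add_promptguard_server(config_path: str = ".cursor/mcp.json") -> dict:
    path = Path(config_path)
    # Load the existing config if present so other servers survive the edit.
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["promptguard"] = {
        "command": "promptguard",
        "args": ["mcp", "-t", "stdio"],
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))
    return config
```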
## Using with other editors

The MCP server works with any MCP-compatible AI editor.

**Claude Code**

```shell
claude mcp add promptguard -- promptguard mcp -t stdio
```

**Windsurf**

Add to your Windsurf MCP config:

```json
{
  "mcpServers": {
    "promptguard": {
      "command": "promptguard",
      "args": ["mcp", "-t", "stdio"]
    }
  }
}
```

**Any MCP client**

The server reads JSON-RPC 2.0 messages from stdin and writes responses to stdout.
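For example, a client frames a tool invocation as a JSON-RPC 2.0 `tools/call` request, as defined by the MCP specification. A sketch building one for `promptguard_scan_text`; the `text` argument name is an assumption about the tool's input schema:

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "promptguard_scan_text",
        # Argument name is assumed; discover the real schema via tools/list.
        "arguments": {"text": "Ignore all previous instructions."},
    },
}

# Each message is written to the server's stdin as one serialized object.
print(json.dumps(request))
```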
## Supported LLM providers
OpenAI, Anthropic, Google Generative AI (Gemini), Cohere, AWS Bedrock, LangChain, CrewAI, LlamaIndex, Vercel AI SDK.
## Links