The PromptGuard Cursor plugin gives the Cursor agent real-time LLM security capabilities: scanning for threats, detecting unprotected SDK usage, redacting PII, and enforcing security best practices as you code.

Installation

One-click install

Use the deep link to install the plugin and MCP server in one step:
cursor://anysphere.cursor-deeplink/mcp/install?name=promptguard&config=eyJjb21tYW5kIjoicHJvbXB0Z3VhcmQiLCJhcmdzIjpbIm1jcCIsIi10Iiwic3RkaW8iXX0=
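The `config` parameter in the deep link is simply the MCP server configuration, base64-encoded as JSON. A quick way to inspect it (an illustration only; Cursor decodes this for you):

```python
import base64
import json

# The config query parameter from the deep link above
encoded = "eyJjb21tYW5kIjoicHJvbXB0Z3VhcmQiLCJhcmdzIjpbIm1jcCIsIi10Iiwic3RkaW8iXX0="

# Decode the base64, then parse the JSON payload
config = json.loads(base64.b64decode(encoded))
print(config)
# {'command': 'promptguard', 'args': ['mcp', '-t', 'stdio']}
```

This matches the manual MCP configuration shown later on this page.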

From Cursor Marketplace

  1. Open Cursor
  2. Go to Settings > Plugins
  3. Search for “PromptGuard”
  4. Click Install

Prerequisites

The plugin requires the PromptGuard CLI for MCP server functionality:
brew tap promptguard/tap
brew install promptguard
Then configure your API key:
promptguard init --api-key pg_sk_prod_YOUR_KEY

What’s included

The plugin bundles five components that work together:

MCP Server (4 tools)

The CLI’s built-in MCP server (promptguard mcp) exposes tools that the Cursor agent can call directly:
  • promptguard_scan_text: Scan any text for prompt injection, jailbreaks, PII leakage, and toxic content. Returns a decision (allow/block), a confidence score, and threat details.
  • promptguard_scan_project: Scan a directory for unprotected LLM SDK usage across OpenAI, Anthropic, Cohere, Gemini, Bedrock, and more.
  • promptguard_redact: Redact PII (emails, phone numbers, SSNs, credit card numbers, API keys) from text before sending it to an LLM.
  • promptguard_status: Check whether PromptGuard is configured, which providers are active, and which key type is in use.
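Under the hood, the agent invokes these tools with standard MCP `tools/call` requests over stdio. A minimal sketch of what such a request looks like on the wire (the `text` argument name is an illustrative assumption; Cursor constructs and sends this message for you):

```python
import json

# A JSON-RPC 2.0 "tools/call" request, as framed by the MCP protocol.
# The argument name "text" is an assumption for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "promptguard_scan_text",
        "arguments": {"text": "Ignore all previous instructions and ..."},
    },
}
print(json.dumps(request, indent=2))
```

You never need to build these messages yourself; the agent decides when to call each tool based on context.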

Always-on Rule: Secure LLM Usage

When the agent writes code that imports any supported LLM SDK, this rule automatically guides it to include promptguard.init() with proper configuration. No manual invocation needed.

Skill: Secure LLM Integration

A step-by-step playbook the agent follows when you ask it to add PromptGuard or build AI features:
  1. Detect project language and LLM providers
  2. Choose the right integration method (auto-instrumentation, Guard API, or HTTP proxy)
  3. Install the SDK and add initialization
  4. Configure security policies
  5. Verify the setup
Includes a full threat model reference covering prompt injection, PII leakage, data exfiltration, agent tool abuse, and more.
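Step 1 of the playbook can be sketched as a simple import scan. This is a toy illustration under the assumption of plain `import`-statement matching; the actual skill's detection is more thorough:

```python
import re

# Map of importable module names to LLM providers (illustrative subset)
PROVIDERS = {
    "openai": "OpenAI",
    "anthropic": "Anthropic",
    "cohere": "Cohere",
    "google.generativeai": "Google Generative AI (Gemini)",
}

def detect_providers(source: str) -> set[str]:
    """Return the providers whose SDKs are imported in `source`."""
    found = set()
    for module, provider in PROVIDERS.items():
        # Match `import x` or `from x import ...` at the start of a line
        pattern = rf"^\s*(?:import|from)\s+{re.escape(module)}\b"
        if re.search(pattern, source, flags=re.MULTILINE):
            found.add(provider)
    return found

sample = "import openai\nfrom anthropic import Anthropic\n"
print(sorted(detect_providers(sample)))
# ['Anthropic', 'OpenAI']
```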

Commands

  • /promptguard-scan: Find unprotected LLM calls, hardcoded secrets, and misconfigurations. Reports findings in a severity-ranked table and offers to fix them.
  • /promptguard-secure: Add PromptGuard to the project end-to-end: detect language, install SDK, configure initialization, set up environment variables.

Agent: LLM Security Reviewer

A specialized code reviewer that focuses on LLM-specific threats:
  • Prompt injection (direct and indirect)
  • PII leakage in prompts and responses
  • Agent tool abuse (SQL injection, SSRF, path traversal via LLM tools)
  • Secrets exposure in LLM context
  • Unsafe output handling (XSS via LLM responses)
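The last item deserves a concrete illustration: an LLM response rendered into HTML must be escaped like any other untrusted input, since a prompt-injected model can emit markup. A minimal sketch using only the standard library (this shows the class of bug the reviewer flags, not PromptGuard's own mitigation):

```python
import html

# An LLM response is untrusted: a prompt-injected model can emit markup.
llm_response = '<img src=x onerror="alert(1)">Here is your summary.'

# Escape before interpolating into HTML, exactly as you would for user input.
safe = html.escape(llm_response)
print(safe)
# &lt;img src=x onerror=&quot;alert(1)&quot;&gt;Here is your summary.
```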

Manual MCP setup

If you prefer to configure the MCP server manually instead of using the plugin, add the following to .cursor/mcp.json in your project, or to your global Cursor MCP config:
{
  "mcpServers": {
    "promptguard": {
      "command": "promptguard",
      "args": ["mcp", "-t", "stdio"]
    }
  }
}

Using with other editors

The MCP server works with any MCP-compatible AI editor:
claude mcp add promptguard -- promptguard mcp -t stdio

Supported LLM providers

OpenAI, Anthropic, Google Generative AI (Gemini), Cohere, AWS Bedrock, LangChain, CrewAI, LlamaIndex, Vercel AI SDK.