The PromptGuard GitHub App scans your repositories for unprotected LLM SDK calls (OpenAI, Anthropic, Google, Cohere, AWS Bedrock) and creates automated pull requests to add PromptGuard protection.
## Overview
The GitHub Code Scanner helps you:

- Discover all LLM API calls across your codebase
- Identify which calls are unprotected (no PromptGuard SDK)
- Fix vulnerabilities with automated pull requests
- Monitor new code for unprotected LLM usage via CI checks
## Installation

### Step 1: Install the GitHub App
- Go to Dashboard → Settings → Integrations
- Click “Install GitHub App”
- Select the repositories you want to scan (or grant access to all)
- Authorize the installation
The GitHub App requires the following permissions:
- Contents: Read & write (to read code and create fix branches)
- Pull requests: Read & write (to create fix PRs)
- Checks: Read & write (to post scan results on PRs)
### Step 2: Connect a Repository

After installation, connect repositories to PromptGuard projects:

- Go to Dashboard → Settings → Integrations
- Click “Connect Repository”
- Select a repository from the list
- Choose which PromptGuard project to associate it with
- Enable Auto-scan and/or Auto-fix options
## How It Works

### Scanning Process

A scan runs on every push to the default branch (when auto-scan is enabled) and on every pull request (when PR checks are enabled). The scanner parses each supported source file, records every LLM SDK call it finds, and flags any call that is not wrapped by the PromptGuard SDK.

### Detection Engine
The scanner uses AST-based detection (not regex) to accurately find LLM SDK usage:

| Language | Parser | Accuracy |
|---|---|---|
| Python | ast module | 100% (no false positives from strings/comments) |
| JavaScript/TypeScript | Tree-sitter | 100% (handles JSX, template literals correctly) |
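To see why AST-based detection avoids false positives, here is a minimal sketch of the approach using Python's `ast` module. The helper names and the two patterns shown are illustrative, not the scanner's actual implementation:

```python
import ast
from typing import Iterator, Optional, Tuple

# Dotted call paths to look for -- two illustrative patterns from the table above.
PATTERNS = {"openai.ChatCompletion.create", "anthropic.messages.create"}

def dotted_name(node: ast.AST) -> Optional[str]:
    """Rebuild a dotted path like 'openai.ChatCompletion.create' from an AST node."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = dotted_name(node.value)
        return f"{base}.{node.attr}" if base else None
    return None

def find_llm_calls(source: str) -> Iterator[Tuple[int, str]]:
    """Yield (line number, call path) for every matching call expression."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            path = dotted_name(node.func)
            if path in PATTERNS:
                yield node.lineno, path

# A pattern inside a string literal or comment never parses as a Call node,
# so it cannot produce a false positive:
code = (
    'x = "openai.ChatCompletion.create"  # not a real call\n'
    'openai.ChatCompletion.create(model="gpt-4")\n'
)
print(list(find_llm_calls(code)))  # -> [(2, 'openai.ChatCompletion.create')]
```

Because matching happens on parsed call expressions, the text `openai.ChatCompletion.create` appearing in a comment, docstring, or log message is never reported.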
### Supported SDKs
| Provider | Python | JavaScript/TypeScript |
|---|---|---|
| OpenAI | openai.ChatCompletion.create() | openai.chat.completions.create() |
| Anthropic | anthropic.messages.create() | anthropic.messages.create() |
| Google AI | genai.GenerativeModel() | @google/generative-ai |
| Cohere | cohere.chat() | cohere.chat() |
| AWS Bedrock | bedrock.invoke_model() | BedrockRuntimeClient |
## Scan Results

### Viewing Findings

- Go to Dashboard → Settings → Integrations
- Click on a connected repository to expand its detail panel
- View scan history, summary cards, and latest findings inline
- Click a scan row to see detailed findings for that run
### Finding Details

Each finding includes:

| Field | Description |
|---|---|
| File path | Location of the LLM call |
| Line number | Exact line in the file |
| Provider | Which LLM provider (OpenAI, Anthropic, etc.) |
| Call type | Method being called |
| Protected | Whether PromptGuard SDK wraps it |
| Severity | Risk level (high for unprotected calls) |
| Code snippet | The actual code found |
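As a concrete illustration, a single finding rendered as JSON might look like the following; the field names here are assumptions that map one-to-one onto the table above:

```json
{
  "file_path": "app/chat.py",
  "line": 12,
  "provider": "OpenAI",
  "call_type": "openai.ChatCompletion.create",
  "protected": false,
  "severity": "high",
  "snippet": "response = openai.ChatCompletion.create(model=\"gpt-4\", messages=messages)"
}
```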
### Severity Levels
| Severity | Meaning |
|---|---|
| High | Unprotected LLM call in production code |
| Medium | Unprotected call in non-critical path |
| Low | Protected call or test file |
| Info | Informational (e.g., SDK import detected) |
## Automated Fix PRs

When you click “Create Fix PR” or have auto-fix enabled, the scanner:

- Creates a new branch (`promptguard/add-protection-<sha>`)
- Wraps unprotected LLM calls with the PromptGuard SDK
- Adds the necessary imports
- Opens a pull request with a detailed description
### Example Fix
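As a sketch of what such a fix PR changes, assume a hypothetical `promptguard.protect()` wrapper; the actual PromptGuard SDK interface may differ, and the file contents are illustrative. Before the fix, the call is unprotected:

```python
# Before: flagged as an unprotected high-severity finding
import openai

user_input = "Summarize this document."
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": user_input}],
)
```

After the fix PR, the import is added and the raw call is wrapped:

```python
# After: import added and call wrapped.
# NOTE: promptguard.protect is a hypothetical wrapper API used for illustration.
import openai
import promptguard

user_input = "Summarize this document."
response = promptguard.protect(
    openai.ChatCompletion.create,
    model="gpt-4",
    messages=[{"role": "user", "content": user_input}],
)
```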
## CI/CD Integration

### Pull Request Checks

When a PR is opened, the scanner:

- Analyzes changed files for new LLM calls
- Posts a Check Run with results
- Fails the check if unprotected calls are introduced
- Passes if all calls are protected
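Under the hood, results like these are posted through GitHub's Checks API (`POST /repos/{owner}/{repo}/check-runs`). A sketch of the kind of payload involved; the repository, SHA, and output text are illustrative:

```json
{
  "name": "PromptGuard Security Scan",
  "head_sha": "a1b2c3d4e5f6...",
  "status": "completed",
  "conclusion": "failure",
  "output": {
    "title": "1 unprotected LLM call introduced",
    "summary": "app/chat.py:12 calls openai.ChatCompletion.create() without PromptGuard protection."
  }
}
```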
### Check Status
| Status | Meaning |
|---|---|
| ✅ Passed | No unprotected LLM calls found |
| ❌ Failed | New unprotected calls introduced |
| ⚠️ Warning | Protected calls found (informational) |
### Branch Protection

For maximum security, enable branch protection rules:

- Go to GitHub → Repository → Settings → Branches
- Add a rule for your default branch
- Enable “Require status checks to pass”
- Select “PromptGuard Security Scan”
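If you manage repository settings as code, the same rule can be applied through GitHub's branch protection REST API. A sketch using the `gh` CLI; the repository and branch names are illustrative:

```bash
# Require the PromptGuard check to pass before merging into main
gh api -X PUT repos/acme/chat-service/branches/main/protection --input - <<'EOF'
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["PromptGuard Security Scan"]
  },
  "enforce_admins": true,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF
```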
## Configuration

### Repository Settings
| Setting | Default | Description |
|---|---|---|
| Auto-scan | On | Scan on every push to default branch |
| Auto-fix | Off | Automatically create fix PRs |
| PR checks | On | Run scans on pull requests |
| Notify on findings | On | Email when vulnerabilities found |
### Ignoring Files

Create a `.promptguardignore` file in your repository root:
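For example, assuming the file uses gitignore-style glob patterns (the exact pattern syntax the scanner supports is an assumption):

```
# .promptguardignore -- paths the scanner should skip
tests/
examples/
**/*_test.py
vendor/
```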
### Inline Ignore Comments

Suppress specific findings with comments:
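For example, in Python; the `promptguard-ignore` marker matches the one referenced under Troubleshooting below, though its exact placement rules are an assumption:

```python
# Intentionally unprotected: sandboxed evaluation harness
response = openai.ChatCompletion.create(  # promptguard-ignore
    model="gpt-4",
    messages=messages,
)
```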
## Webhook Events

The GitHub App subscribes to these events:

| Event | Trigger | Action |
|---|---|---|
| `push` | Push to default branch | Full repository scan |
| `pull_request` | PR opened/updated | Scan changed files, post check |
| `installation` | App installed/removed | Set up/clean up integration |
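To give a feel for the receiving end, here is a minimal sketch of a webhook receiver for these events. It is illustrative, not PromptGuard's actual service code; the stub functions are hypothetical, while the `X-GitHub-Event` and `X-Hub-Signature-256` headers are GitHub's standard webhook headers:

```python
import hashlib
import hmac
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

WEBHOOK_SECRET = b"replace-with-your-webhook-secret"  # configured at app install time

# Illustrative stubs: the real scanner logic would live behind these calls.
def scan_repository(payload): print("full scan of", payload["repository"]["full_name"])
def scan_changed_files(payload): print("PR scan of", payload["repository"]["full_name"])
def sync_installation(payload): print("installation event:", payload["action"])

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))

        # Verify GitHub's HMAC-SHA256 signature before trusting the payload.
        expected = "sha256=" + hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, self.headers.get("X-Hub-Signature-256", "")):
            self.send_response(401)
            self.end_headers()
            return

        # Dispatch on the event name GitHub sends in the X-GitHub-Event header.
        payload = json.loads(body)
        handlers = {
            "push": scan_repository,
            "pull_request": scan_changed_files,
            "installation": sync_installation,
        }
        handler = handlers.get(self.headers.get("X-GitHub-Event", ""))
        if handler:
            handler(payload)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), WebhookHandler).serve_forever()
```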
## Troubleshooting
### Scans not triggering
Check:
- Is auto-scan enabled for the repository?
- Is the repository connected to a project?
- Are webhooks configured correctly? (Settings → Integrations → Webhook status)
### False positives
Check:
- Is the code in a test file? Add it to `.promptguardignore`
- Is it intentionally unprotected? Add a `promptguard-ignore` comment
### Fix PR has conflicts
Check:
- Has the base branch changed since the scan?
### Missing permissions
Check:
- Does the GitHub App have Contents and Pull requests permissions?