The PromptGuard GitHub App scans your repositories for unprotected LLM SDK calls (OpenAI, Anthropic, Google, Cohere, AWS Bedrock) and creates automated pull requests to add PromptGuard protection.

Overview

The GitHub Code Scanner helps you:
  • Discover all LLM API calls across your codebase
  • Identify which calls are unprotected (no PromptGuard SDK)
  • Fix vulnerabilities with automated pull requests
  • Monitor new code for unprotected LLM usage via CI checks

Installation

Step 1: Install the GitHub App

  1. Go to Dashboard → Settings → Integrations
  2. Click “Install GitHub App”
  3. Select the repositories you want to scan (or grant access to all)
  4. Authorize the installation

The GitHub App requires the following permissions:
  • Contents: Read & write (to read code and create fix branches)
  • Pull requests: Read & write (to create fix PRs)
  • Checks: Read & write (to post scan results on PRs)

Step 2: Connect a Repository

After installation, connect repositories to PromptGuard projects:
  1. Go to Dashboard → Settings → Integrations
  2. Click “Connect Repository”
  3. Select a repository from the list
  4. Choose which PromptGuard project to associate it with
  5. Enable Auto-scan and/or Auto-fix options

How It Works

Scanning Process

When a repository is connected with auto-scan enabled, the scanner runs a full scan of the default branch on every push; pull requests are scanned separately as part of the PR checks described under CI/CD Integration.

Detection Engine

The scanner uses AST-based detection (not regex) to accurately find LLM SDK usage:

| Language | Parser | Accuracy |
|---|---|---|
| Python | ast module | 100% (no false positives from strings/comments) |
| JavaScript/TypeScript | Tree-sitter | 100% (handles JSX, template literals correctly) |
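
For illustration, here is a minimal sketch of what AST-based detection can look like in Python. This is not the scanner's actual implementation; the matched call names and the walk logic below are simplified assumptions.

import ast

# Sketch: walk a file's AST and report calls whose dotted name ends in a
# known LLM SDK method. Strings and comments never produce Call nodes,
# which is why this approach avoids the false positives a regex scan hits.
LLM_CALL_SUFFIXES = ("chat.completions.create", "messages.create", "ChatCompletion.create")

def dotted_name(node):
    # Rebuild e.g. "client.chat.completions.create" from an ast.Attribute chain.
    parts = []
    while isinstance(node, ast.Attribute):
        parts.append(node.attr)
        node = node.value
    if isinstance(node, ast.Name):
        parts.append(node.id)
    return ".".join(reversed(parts))

def find_llm_calls(source: str):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            name = dotted_name(node.func)
            if name.endswith(LLM_CALL_SUFFIXES):
                findings.append((node.lineno, name))
    return findings

# find_llm_calls(source) -> e.g. [(12, "client.chat.completions.create")]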

Supported SDKs

| Provider | Python | JavaScript/TypeScript |
|---|---|---|
| OpenAI | openai.ChatCompletion.create() | openai.chat.completions.create() |
| Anthropic | anthropic.messages.create() | anthropic.messages.create() |
| Google AI | genai.GenerativeModel() | @google/generative-ai |
| Cohere | cohere.chat() | cohere.chat() |
| AWS Bedrock | bedrock.invoke_model() | BedrockRuntimeClient |

Scan Results

Viewing Findings

  1. Go to Dashboard → Settings → Integrations
  2. Click on a connected repository to expand its detail panel
  3. View scan history, summary cards, and latest findings inline
  4. Click a scan row to see detailed findings for that run

Finding Details

Each finding includes:

| Field | Description |
|---|---|
| File path | Location of the LLM call |
| Line number | Exact line in the file |
| Provider | Which LLM provider (OpenAI, Anthropic, etc.) |
| Call type | Method being called |
| Protected | Whether the PromptGuard SDK wraps it |
| Severity | Risk level (high for unprotected calls) |
| Code snippet | The actual code found |
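
To make these fields concrete, an illustrative finding could look like the dictionary below. The exact field names and shape in the dashboard and API are not documented here and may differ; the values are hypothetical.

# Illustrative only: keys mirror the table above, not a documented schema.
finding = {
    "file_path": "src/chat/agent.py",        # hypothetical file
    "line_number": 42,
    "provider": "openai",
    "call_type": "chat.completions.create",
    "protected": False,                       # no PromptGuard SDK wrapping the call
    "severity": "high",                       # unprotected call in production code
    "code_snippet": "response = client.chat.completions.create(...)",
}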

Severity Levels

| Severity | Meaning |
|---|---|
| High | Unprotected LLM call in production code |
| Medium | Unprotected call in non-critical path |
| Low | Protected call or test file |
| Info | Informational (e.g., SDK import detected) |

Automated Fix PRs

When you click “Create Fix PR” or have auto-fix enabled:
  1. The scanner creates a new branch (promptguard/add-protection-<sha>)
  2. Wraps unprotected LLM calls with PromptGuard SDK
  3. Adds necessary imports
  4. Opens a pull request with a detailed description

Example Fix

Before (the unprotected call detected by the scanner):

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": user_input}]
)
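
After the fix, the PR routes the same call through the PromptGuard SDK. The snippet below is only a sketch: the promptguard package name and wrap() helper are hypothetical placeholders, so check the SDK reference for the actual import and wrapper the PR uses.

# Hypothetical sketch of the protected version; the real fix PR uses the
# PromptGuard SDK's documented wrapper, which may differ from these names.
import promptguard  # assumed package name
from openai import OpenAI

client = promptguard.wrap(OpenAI())  # hypothetical wrapper call

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": user_input}]
)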

CI/CD Integration

Pull Request Checks

When a PR is opened, the scanner:
  1. Analyzes changed files for new LLM calls
  2. Posts a Check Run with results
  3. Fails the check if unprotected calls are introduced
  4. Passes if all calls are protected

Check Status

| Status | Meaning |
|---|---|
| Passed | No unprotected LLM calls found |
| Failed | New unprotected calls introduced |
| ⚠️ Warning | Protected calls found (informational) |

Branch Protection

For maximum security, enable branch protection rules:
  1. Go to GitHub → Repository → Settings → Branches
  2. Add a rule for your default branch
  3. Enable “Require status checks to pass”
  4. Select “PromptGuard Security Scan”
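
If you manage branch protection through the GitHub REST API rather than the UI, the sketch below shows the equivalent call using Python's requests library. OWNER, REPO, the branch name, and GITHUB_TOKEN are placeholders, and the status-check context assumes the check name from step 4.

# Sketch: require the PromptGuard check via GitHub's branch protection API.
import os
import requests

owner, repo, branch = "OWNER", "REPO", "main"  # placeholders
resp = requests.put(
    f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # Require the PromptGuard scan to pass before merging.
        "required_status_checks": {"strict": True, "contexts": ["PromptGuard Security Scan"]},
        "enforce_admins": False,
        "required_pull_request_reviews": None,
        "restrictions": None,
    },
)
resp.raise_for_status()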

Configuration

Repository Settings

| Setting | Default | Description |
|---|---|---|
| Auto-scan | On | Scan on every push to the default branch |
| Auto-fix | Off | Automatically create fix PRs |
| PR checks | On | Run scans on pull requests |
| Notify on findings | On | Email when vulnerabilities are found |

Ignoring Files

Create a .promptguardignore file in your repository root:
# Ignore test files
tests/
*_test.py
*.test.ts

# Ignore specific directories
scripts/playground/
examples/

# Ignore specific files
src/legacy/old-api.py

Inline Ignore Comments

Suppress specific findings with comments:
# promptguard-ignore: intentionally unprotected for benchmarking
response = client.chat.completions.create(...)
// promptguard-ignore: test fixture
const response = await openai.chat.completions.create(...);

Webhook Events

The GitHub App subscribes to these events:

| Event | Trigger | Action |
|---|---|---|
| push | Push to default branch | Full repository scan |
| pull_request | PR opened/updated | Scan changed files, post check |
| installation | App installed/removed | Set up or clean up the integration |

Troubleshooting

Scans aren't running

Check:
  • Is auto-scan enabled for the repository?
  • Is the repository connected to a project?
  • Are webhooks configured correctly? (Settings → Integrations → Webhook status)
Solution: Disconnect and reconnect the repository.

A finding is a false positive

Check:
  • Is the code in a test file? Add it to .promptguardignore.
  • Is it intentionally unprotected? Add a promptguard-ignore comment.
Solution: Use ignore patterns or inline comments.

The fix PR is outdated or has conflicts

Check:
  • Has the base branch changed since the scan?
Solution: Re-run the scan to generate a fresh fix PR.

Fix PRs aren't being created

Check:
  • Does the GitHub App have Contents and Pull requests permissions?
Solution: Re-authorize the app at Settings → Integrations → Manage.

API Reference

Trigger Manual Scan

curl -X POST https://api.promptguard.co/dashboard/github/repos/{repo_id}/scan \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"

List Scan Results

curl https://api.promptguard.co/dashboard/github/repos/{repo_id}/scans \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"

Get Scan Details

curl https://api.promptguard.co/dashboard/github/repos/{repo_id}/scans/{scan_id} \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN"

Next Steps