Overview
Ollama makes it easy to run open-source LLMs locally. PromptGuard integrates with Ollama by acting as a security proxy: all traffic between your application and Ollama passes through PromptGuard, where it is scanned by all 13+ threat detectors before reaching your local model.
PromptGuard supports Ollama in passthrough mode. Your prompts are scanned for threats, then forwarded to your local Ollama instance. Model responses are scanned on the way back.
Prerequisites
- Ollama installed and running — Download Ollama and pull at least one model:
- PromptGuard API key — Sign up at app.promptguard.co and create an API key
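For example, to pull and verify a model (using `llama3` here; any model from the Ollama library works):

```shell
# Pull a model into your local Ollama instance
ollama pull llama3

# Confirm it is available locally
ollama list
```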
Quick Start
Route your Ollama traffic through PromptGuard by pointing the OpenAI SDK at PromptGuard's API and using the `ollama/` model prefix:
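A minimal sketch with the OpenAI Python SDK. The base URL shown is an assumption; confirm the exact endpoint in your PromptGuard dashboard:

```python
from openai import OpenAI

# Point the OpenAI SDK at PromptGuard instead of OpenAI.
# Base URL is assumed -- check your PromptGuard dashboard for the real one.
client = OpenAI(
    base_url="https://api.promptguard.co/v1",
    api_key="pg-...",  # your PromptGuard API key
)

# The "ollama/" prefix tells PromptGuard to forward the request
# to your local Ollama instance.
response = client.chat.completions.create(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```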
Model Naming
Use the `ollama/` prefix followed by your Ollama model name:
| Ollama Model | PromptGuard Model Name |
|---|---|
| `llama3` | `ollama/llama3` |
| `llama3:70b` | `ollama/llama3:70b` |
| `mistral` | `ollama/mistral` |
| `mixtral` | `ollama/mixtral` |
| `codellama` | `ollama/codellama` |
| `phi3` | `ollama/phi3` |
| `gemma2` | `ollama/gemma2` |
| `qwen2` | `ollama/qwen2` |
| `deepseek-coder-v2` | `ollama/deepseek-coder-v2` |
Any model available in your local Ollama instance can be used. The model name after `ollama/` must match exactly what `ollama list` shows.
Environment Variables
Configure your Ollama endpoint and PromptGuard credentials via environment variables. If Ollama is not running at its default address (http://localhost:11434), set OLLAMA_BASE_URL to the correct address. PromptGuard reads this variable to know where to forward requests.
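A sketch of the relevant variables; the `PROMPTGUARD_API_KEY` name is an assumption, so check your dashboard's docs for the exact names:

```shell
# PromptGuard credentials (variable name assumed)
export PROMPTGUARD_API_KEY="pg-..."

# Where PromptGuard should forward Ollama traffic (Ollama's default shown)
export OLLAMA_BASE_URL="http://localhost:11434"
```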
Full Integration Example
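A fuller sketch that reads credentials from the environment. The base URL and the `PROMPTGUARD_API_KEY` variable name are assumptions; the helper function name is illustrative:

```python
import os

from openai import OpenAI

# Endpoint and credential names are assumptions -- confirm in your dashboard.
client = OpenAI(
    base_url="https://api.promptguard.co/v1",
    api_key=os.environ["PROMPTGUARD_API_KEY"],
)

def ask(prompt: str) -> str:
    """Send a prompt through PromptGuard to a local Ollama model."""
    response = client.chat.completions.create(
        model="ollama/llama3",  # any model shown by `ollama list`
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize the benefits of local inference in one sentence."))
```

If a prompt trips a detector, PromptGuard blocks it before it ever reaches your local model; otherwise the call behaves like a direct Ollama request.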
Streaming
PromptGuard supports streaming responses from Ollama models:
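A streaming sketch using the same assumed endpoint, with `stream=True` on the standard chat-completions call:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.promptguard.co/v1",  # assumed endpoint
    api_key="pg-...",
)

# stream=True yields chunks as the local model generates them;
# PromptGuard scans the stream before relaying it.
stream = client.chat.completions.create(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "Write a haiku about firewalls."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```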
Security Benefits
All of PromptGuard's threat detectors are applied to Ollama traffic, including:
- Prompt Injection: Blocks jailbreak attempts against local models
- PII Detection: Prevents sensitive data from reaching local inference
- Data Exfiltration: Detects attempts to extract training data or system prompts
- Content Moderation: Applies toxicity and content safety filters
Troubleshooting
Error: “Cannot connect to Ollama”
Ensure Ollama is running and accessible.
Error: "Model not found"
Verify the model is pulled locally with `ollama list`.
Error: "No provider found for model"
Ensure your model name uses the `ollama/` prefix: use `ollama/llama3`, not `llama3`.
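The checks above can be run as one shell session (assuming Ollama's default port, 11434):

```shell
# 1. "Cannot connect to Ollama": confirm the server is up and responding
curl http://localhost:11434/api/tags

# 2. "Model not found": list local models and pull the missing one
ollama list
ollama pull llama3

# 3. "No provider found for model": use the ollama/ prefix in requests
#    wrong: "llama3"
#    right: "ollama/llama3"
```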
Next Steps
- LLM Providers: See all supported LLM providers
- Security Policies: Configure detection rules for local models
- Python Guide: Full Python integration guide
- Streaming: Streaming integration details