Overview

PromptGuard supports Azure OpenAI in passthrough mode, allowing you to use your own Azure OpenAI deployments with full security protection.
No configuration is needed on our end; simply provide your Azure credentials with each request.

Quick Start

To use Azure OpenAI with PromptGuard, you need three things:
  1. Model name: Format as azure/{deployment-name}
  2. X-Azure-Resource header: Your Azure resource name
  3. Authorization header: Your Azure API key

Example Request

curl https://api.promptguard.co/api/v1/chat/completions \
  -H "X-API-Key: pg_live_xxx" \
  -H "X-Azure-Resource: my-azure-resource" \
  -H "Authorization: Bearer <your-azure-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "azure/my-gpt4-deployment",
    "messages": [
      {"role": "user", "content": "What is 2+2?"}
    ]
  }'

Finding Your Azure Details

Resource Name

  1. Go to Azure Portal → Your Azure OpenAI resource
  2. Under Keys and Endpoint, find your endpoint URL
  3. Format: https://{resource-name}.openai.azure.com/
  4. Your resource name is everything before .openai.azure.com
Example:
  • Endpoint: https://my-company-openai.openai.azure.com/
  • Resource name: my-company-openai
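
If you already have the endpoint URL in your configuration, you can derive the resource name programmatically. A minimal sketch of that rule (everything before .openai.azure.com); the helper name extract_resource_name is ours, not part of any SDK:

from urllib.parse import urlparse

def extract_resource_name(endpoint: str) -> str:
    # Example: https://my-company-openai.openai.azure.com/ -> my-company-openai
    host = urlparse(endpoint).hostname or ""
    suffix = ".openai.azure.com"
    if not host.endswith(suffix):
        raise ValueError(f"Not an Azure OpenAI endpoint: {endpoint}")
    return host[: -len(suffix)]

print(extract_resource_name("https://my-company-openai.openai.azure.com/"))
# my-company-openai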

Deployment Name

  1. Go to Azure AI Studio → Your resource
  2. Click Deployments in the left sidebar
  3. Your deployment name is in the Deployment Name column
Common deployment names:
  • gpt-4-deployment
  • gpt-4o-mini
  • my-custom-gpt4

API Key

  1. Go to Azure Portal → Your Azure OpenAI resource
  2. Under Keys and Endpoint, copy Key 1 or Key 2

SDK Integration

Python (OpenAI SDK)

import openai

client = openai.OpenAI(
    base_url="https://api.promptguard.co/api/v1",
    api_key="<your-azure-api-key>",  # your Azure OpenAI key (sent as the Authorization header)
    default_headers={
        "X-API-Key": "pg_live_xxx",  # your PromptGuard API key
        "X-Azure-Resource": "my-azure-resource",
    }
)

response = client.chat.completions.create(
    model="azure/my-gpt4-deployment",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

Node.js (OpenAI SDK)

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.promptguard.co/api/v1',
  apiKey: '<your-azure-api-key>',
  defaultHeaders: {
    'X-API-Key': 'pg_live_xxx',
    'X-Azure-Resource': 'my-azure-resource',
  }
});

const response = await client.chat.completions.create({
  model: 'azure/my-gpt4-deployment',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);

TypeScript

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.promptguard.co/api/v1',
  apiKey: process.env.AZURE_API_KEY,
  defaultHeaders: {
    'X-API-Key': process.env.PROMPTGUARD_API_KEY,
    'X-Azure-Resource': process.env.AZURE_RESOURCE_NAME,
  }
});

const response = await client.chat.completions.create({
  model: `azure/${process.env.AZURE_DEPLOYMENT_NAME}`,
  messages: [{ role: 'user', content: 'Hello!' }]
});

Supported Models

PromptGuard supports all Azure OpenAI models through passthrough:
  • GPT-4o: azure/my-gpt4o-deployment
  • GPT-4 Turbo: azure/my-gpt4-turbo
  • GPT-4: azure/my-gpt4
  • GPT-3.5 Turbo: azure/my-gpt35-turbo
  • Embeddings: azure/text-embedding-ada-002
  • DALL-E: azure/my-dalle3
The model name is always: azure/{your-deployment-name}
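
The same passthrough headers work for non-chat endpoints as well. A minimal sketch of an embeddings call, assuming your resource has a deployment named text-embedding-ada-002 (the deployment name and environment variables here are examples):

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.promptguard.co/api/v1",
    api_key=os.getenv("AZURE_API_KEY"),
    default_headers={
        "X-API-Key": os.getenv("PROMPTGUARD_API_KEY"),
        "X-Azure-Resource": os.getenv("AZURE_RESOURCE_NAME"),
    },
)

# The model is still azure/{deployment-name}
embedding = client.embeddings.create(
    model="azure/text-embedding-ada-002",
    input="PromptGuard passthrough example",
)
print(len(embedding.data[0].embedding))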

Security Features

All PromptGuard security features work seamlessly with Azure OpenAI:

  • Prompt Injection: blocks jailbreak attempts and instruction hijacking
  • PII Detection: redacts emails, SSNs, and phone numbers automatically
  • Credit Cards: Luhn-validated credit card redaction
  • Data Exfiltration: prevents “send to external email” attacks
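
No request changes are needed to get this protection. A sketch of a chat call whose prompt contains an email address; per the PII detection feature above, PromptGuard redacts it before the request reaches your Azure deployment (the deployment and resource names are placeholders):

from openai import OpenAI

client = OpenAI(
    base_url="https://api.promptguard.co/api/v1",
    api_key="<your-azure-api-key>",
    default_headers={
        "X-API-Key": "pg_live_xxx",
        "X-Azure-Resource": "my-azure-resource",
    },
)

response = client.chat.completions.create(
    model="azure/my-gpt4-deployment",
    messages=[
        # The email address below is redacted by PromptGuard before forwarding;
        # your application code does not change.
        {"role": "user", "content": "Draft a reply to jane.doe@example.com about her invoice."}
    ],
)
print(response.choices[0].message.content)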

Environment Variables

For cleaner code, use environment variables:
# .env
PROMPTGUARD_API_KEY=pg_live_xxx
AZURE_RESOURCE_NAME=my-azure-resource
AZURE_DEPLOYMENT_NAME=my-gpt4-deployment
AZURE_API_KEY=your-azure-api-key
Then in your code:
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.promptguard.co/api/v1",
    api_key=os.getenv("AZURE_API_KEY"),
    default_headers={
        "X-API-Key": os.getenv("PROMPTGUARD_API_KEY"),
        "X-Azure-Resource": os.getenv("AZURE_RESOURCE_NAME"),
    }
)

response = client.chat.completions.create(
    model=f"azure/{os.getenv('AZURE_DEPLOYMENT_NAME')}",
    messages=[{"role": "user", "content": "Hello!"}]
)

Troubleshooting

Error: “Azure OpenAI passthrough requires X-Azure-Resource header”

Solution: Add the X-Azure-Resource header with your Azure resource name.
# ❌ Missing header
client = OpenAI(base_url="...", api_key="...")

# ✅ Correct
client = OpenAI(
    base_url="...",
    api_key="...",
    default_headers={"X-Azure-Resource": "my-resource"}
)

Error: “No provider found for model”

Solution: Make sure your model name starts with azure/ prefix.
# ❌ Wrong format
model="my-gpt4-deployment"

# ✅ Correct format
model="azure/my-gpt4-deployment"

Error: “Deployment not found”

Solution: Verify your deployment name in Azure AI Studio matches exactly.
  1. Go to Azure AI Studio → Deployments
  2. Copy the exact deployment name
  3. Use it after azure/ prefix

Error: “Invalid credentials”

Solution: Make sure you’re using your Azure API key, not your Azure password.
  1. Azure Portal → Your OpenAI resource
  2. Keys and Endpoint → Copy Key 1 or Key 2
  3. This is your API key (not your account password)
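
To confirm the key itself works, you can call your Azure resource directly, bypassing PromptGuard. A minimal sketch using the requests library; the api-version value is only an example, so substitute one your resource supports:

import os
import requests

resource = os.getenv("AZURE_RESOURCE_NAME")
deployment = os.getenv("AZURE_DEPLOYMENT_NAME")
api_version = "2024-02-01"  # example version; check your resource's supported versions

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/chat/completions?api-version={api_version}"
)
resp = requests.post(
    url,
    headers={"api-key": os.getenv("AZURE_API_KEY"), "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "ping"}]},
)
# 200 means the key and deployment are valid; 401 points to the key,
# 404 to the deployment name.
print(resp.status_code)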

Support

Need help with Azure OpenAI integration?