This guide covers the most common issues you might encounter when integrating with PromptGuard and provides step-by-step solutions.

Authentication Issues

Invalid API Key Error

Problem: Getting 401 Unauthorized errors when making requests. Symptoms:
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
Solutions:
Check that your API key follows the correct format:
# Correct format
pg_live_xxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxx  # Production
pg_test_xxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxx  # Testing

# Check your key
echo $PROMPTGUARD_API_KEY | head -c 20
# Should show: pg_live_ or pg_test_
Ensure your API key is properly set:
# Check if environment variable is set
echo $PROMPTGUARD_API_KEY

// For Node.js applications
console.log('API Key:', process.env.PROMPTGUARD_API_KEY?.substring(0, 10) + '...');

# For Python applications
import os
print(f"API Key: {os.environ.get('PROMPTGUARD_API_KEY', 'NOT_SET')[:10]}...")
Check if your API key is active in the dashboard:
  1. Log in to app.promptguard.co
  2. Navigate to Settings > API Keys
  3. Verify the key exists and is not revoked
  4. Check the permissions assigned to the key
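To catch a missing or malformed key at startup rather than at the first 401, you can add a quick check like the one below. This is a minimal sketch that only validates the pg_live_/pg_test_ prefix shown above, not the full key:
// startup-check.js - fail fast if the PromptGuard key is missing or malformed
const apiKey = process.env.PROMPTGUARD_API_KEY;

if (!apiKey) {
  throw new Error('PROMPTGUARD_API_KEY is not set');
}

// Only the prefix is validated here; the rest of the key is opaque
if (!/^pg_(live|test)_/.test(apiKey)) {
  throw new Error(`PROMPTGUARD_API_KEY has an unexpected prefix: ${apiKey.slice(0, 8)}...`);
}

console.log('Key prefix looks valid:', apiKey.slice(0, 8) + '...');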

Permission Denied Error

Problem: API key lacks required permissions. Symptoms:
{
  "error": {
    "message": "API key lacks required permissions",
    "type": "permission_error",
    "code": "insufficient_permissions"
  }
}
Solutions:
  1. Check key permissions in dashboard
  2. Use a key with appropriate permissions (read/write/admin)
  3. Contact your team admin to update permissions
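While permissions are being updated, you can surface the problem clearly in your application instead of failing silently. A minimal sketch, assuming the OpenAI-compatible SDK used elsewhere in this guide and that the error above is returned with a 403 status:
// Detect insufficient permissions and log an actionable message
async function safeCompletion(openai, params) {
  try {
    return await openai.chat.completions.create(params);
  } catch (error) {
    // 403 with code "insufficient_permissions", matching the error shape above
    if (error.status === 403 && error.code === 'insufficient_permissions') {
      console.error('PromptGuard key lacks permissions for this request. Ask a team admin to update the key under Settings > API Keys.');
    }
    throw error;
  }
}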

Connection Issues

Network Timeout Errors

Problem: Requests timing out or failing to connect. Symptoms:
  • Connection timeout errors
  • Network unreachable errors
  • DNS resolution failures
Solutions:
Test basic connectivity to PromptGuard:
# Test DNS resolution
nslookup api.promptguard.co

# Test HTTP connectivity
curl -I https://api.promptguard.co/api/v1/models

# Test with your API key
curl https://api.promptguard.co/api/v1/models \
  -H "X-API-Key: $PROMPTGUARD_API_KEY"
Increase timeout values in your client:
// Node.js
const openai = new OpenAI({
  apiKey: process.env.PROMPTGUARD_API_KEY,
  baseURL: 'https://api.promptguard.co/api/v1',
  timeout: 60000 // 60 seconds
});
# Python
client = OpenAI(
    api_key=os.environ.get("PROMPTGUARD_API_KEY"),
    base_url="https://api.promptguard.co/api/v1",
    timeout=60.0
)
Ensure your network allows outbound HTTPS connections:
  • Whitelist api.promptguard.co in firewall
  • Configure proxy settings if required
  • Check corporate network restrictions
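If traffic has to go through a corporate proxy, pass a proxy-aware agent to the client. A minimal sketch using the https-proxy-agent package; the package choice and the HTTPS_PROXY variable are assumptions for illustration, not PromptGuard requirements:
// Route PromptGuard traffic through an outbound HTTP(S) proxy
import OpenAI from 'openai';
import { HttpsProxyAgent } from 'https-proxy-agent';

const proxyUrl = process.env.HTTPS_PROXY; // e.g. http://proxy.internal:3128 (hypothetical)

const openai = new OpenAI({
  apiKey: process.env.PROMPTGUARD_API_KEY,
  baseURL: 'https://api.promptguard.co/api/v1',
  httpAgent: proxyUrl ? new HttpsProxyAgent(proxyUrl) : undefined
});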

Security Policy Issues

Unexpected Request Blocks

Problem: Legitimate requests being blocked by security policies. Symptoms:
{
  "error": {
    "message": "Request blocked by security policy",
    "type": "policy_violation",
    "code": "prompt_injection_detected"
  }
}
Solutions:
  1. Open app.promptguard.co
  2. Navigate to Security > Events
  3. Find the blocked request
  4. Review the detection reason
  5. Determine if it’s a false positive
If you’re getting too many false positives:
  1. Go to Security > Policies
  2. Switch to a more permissive preset (e.g., from RAG System to Default)
  3. Test your application
  4. Gradually increase security as needed
For legitimate patterns being blocked:
  1. Navigate to Security > Custom Rules
  2. Create an “Allow” rule for the specific pattern
  3. Test to ensure the rule works correctly
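While you tune the policy, you can also handle blocks gracefully in application code so users see a clear message instead of a raw error. A minimal sketch, assuming the SDK surfaces the type and code fields shown in the error above:
// Fall back to a friendly message when a request is blocked by policy
async function completionWithPolicyFallback(openai, params) {
  try {
    return await openai.chat.completions.create(params);
  } catch (error) {
    if (error.type === 'policy_violation') {
      console.warn('Blocked by security policy:', error.code);
      return null; // caller can show a "request could not be processed" message
    }
    throw error;
  }
}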

High False Positive Rate

Problem: Too many legitimate requests being flagged as threats. Solutions:
  1. Start with Default preset during development
  2. Gradually increase security in staging
  3. Monitor false positive rate in dashboard
  4. Create custom whitelist rules for your use cases
  5. Contact support for policy tuning assistance

Performance Issues

High Latency

Problem: Requests taking longer than expected. Expected Performance:
  • PromptGuard overhead: 30-50ms
  • Total latency: OpenAI/Anthropic latency + 30-50ms
Troubleshooting:
async function measureLatency() {
  const start = Date.now();

  try {
    // withResponse() exposes the raw HTTP response so custom headers can be read
    const { response } = await openai.chat.completions
      .create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Hello!' }]
      })
      .withResponse();

    const total = Date.now() - start;
    const pgOverhead = response.headers.get('x-promptguard-latency');

    console.log(`Total: ${total}ms, PromptGuard: ${pgOverhead}ms`);

  } catch (error) {
    console.error('Request failed:', error);
  }
}
Use connection pooling to reuse HTTPS connections:
import https from 'https';
import OpenAI from 'openai';

const agent = new https.Agent({
  keepAlive: true,
  maxSockets: 20
});

const openai = new OpenAI({
  apiKey: process.env.PROMPTGUARD_API_KEY,
  baseURL: 'https://api.promptguard.co/api/v1',
  httpAgent: agent
});
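Reusing sockets with keepAlive avoids a new TCP and TLS handshake on every request, which typically saves tens of milliseconds per call under sustained load.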
Cache responses for identical requests:
const cache = new Map();

async function cachedRequest(prompt) {
  const cacheKey = `${prompt}:gpt-4`;

  if (cache.has(cacheKey)) {
    return cache.get(cacheKey);
  }

  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }]
  });

  cache.set(cacheKey, response);
  return response;
}
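Note that a plain Map grows without bound; in production, cap the cache size or use an LRU/TTL cache so stale or rarely used responses are evicted.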

Rate Limiting Issues

Too Many Requests Error

Problem: Hitting rate limits. Symptoms:
{
  "error": {
    "message": "Rate limit exceeded",
    "type": "rate_limit_error",
    "code": "too_many_requests"
  }
}
Solutions:
async function requestWithBackoff(requestFn, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await requestFn();
    } catch (error) {
      if (error.status === 429 && attempt < maxRetries - 1) {
        const delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
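For example, wrap your existing call in the helper above:
const completion = await requestWithBackoff(() =>
  openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
);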
Monitor rate limit status:
const { response } = await openai.chat.completions.create({...}).withResponse();

console.log('Rate Limit Headers:');
console.log('Remaining:', response.headers.get('x-ratelimit-remaining'));
console.log('Reset:', response.headers.get('x-ratelimit-reset'));
console.log('Limit:', response.headers.get('x-ratelimit-limit'));
Use multiple API keys to increase limits:
const apiKeys = [
  process.env.PROMPTGUARD_API_KEY_1,
  process.env.PROMPTGUARD_API_KEY_2,
  process.env.PROMPTGUARD_API_KEY_3
];

function getClient() {
  const keyIndex = Math.floor(Math.random() * apiKeys.length);
  return new OpenAI({
    apiKey: apiKeys[keyIndex],
    baseURL: 'https://api.promptguard.co/api/v1'
  });
}

Model and Provider Issues

Model Not Found Error

Problem: Specified model is not available. Symptoms:
{
  "error": {
    "message": "Model 'invalid-model' not found",
    "type": "invalid_request_error",
    "code": "model_not_found"
  }
}
Solutions:
Verify the model name is correct:
# List available models
curl https://api.promptguard.co/api/v1/models \
  -H "X-API-Key: $PROMPTGUARD_API_KEY"
Supported models:
  • OpenAI: gpt-4, gpt-4-turbo, gpt-3.5-turbo
  • Anthropic: claude-3-5-sonnet-latest, claude-3-opus-20240229
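You can also list models through the SDK instead of curl, using the same client configuration shown earlier; this is a minimal sketch and the response reflects whichever providers you have configured:
// Print the models available through your PromptGuard account
const models = await openai.models.list();

for (const model of models.data) {
  console.log(model.id);
}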
Ensure your provider API keys are configured:
  1. Go to app.promptguard.co
  2. Navigate to Settings > Provider Keys
  3. Add your OpenAI and/or Anthropic API keys
  4. Test the connection

Provider API Key Issues

Problem: Underlying provider (OpenAI/Anthropic) API key is invalid. Solutions:
  1. Update provider keys in PromptGuard dashboard
  2. Verify keys are active in provider’s dashboard
  3. Check key permissions for the models you’re using
  4. Ensure sufficient credits in provider account

Streaming Issues

Streaming Responses Cut Off

Problem: Streaming responses stop unexpectedly. Solutions:
For serverless deployments:
// vercel.json
{
  "functions": {
    "api/chat/stream.js": {
      "maxDuration": 300
    }
  }
}
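Handle stream errors in your client and add retry or fallback logic: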
async function handleStream(stream) {
  try {
    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) {
        process.stdout.write(content);
      }
    }
  } catch (error) {
    console.error('Stream error:', error);
    // Implement fallback or retry logic
  }
}
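If a stream drops partway through, one pragmatic fallback is to retry the request without streaming so the user still gets a complete answer. A minimal sketch; the retry policy is an assumption and should be adapted to your application:
// Try streaming first; fall back to a non-streaming request if the stream fails
async function streamWithFallback(openai, params) {
  try {
    const stream = await openai.chat.completions.create({ ...params, stream: true });
    let text = '';
    for await (const chunk of stream) {
      text += chunk.choices[0]?.delta?.content ?? '';
    }
    return text;
  } catch (error) {
    console.warn('Stream failed, retrying without streaming:', error.message);
    const response = await openai.chat.completions.create({ ...params, stream: false });
    return response.choices[0]?.message?.content ?? '';
  }
}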

Integration-Specific Issues

Next.js API Routes

Problem: Issues with Next.js API integration. Common Solutions:
Ensure proper environment variable setup:
# .env.local
PROMPTGUARD_API_KEY=your_key_here
// Check in API route
console.log('API Key available:', !!process.env.PROMPTGUARD_API_KEY);
Configure CORS for frontend requests:
// api/chat.js
export default async function handler(req, res) {
  // Add CORS headers
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'POST');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

  if (req.method === 'OPTIONS') {
    res.status(200).end();
    return;
  }

  // Your API logic here
}

Getting Help

Diagnostic Information

When contacting support, include:
# System information
node --version
npm --version

# Test basic connectivity
curl -I https://api.promptguard.co/api/v1/models

# Check API key format (first 10 characters only)
echo $PROMPTGUARD_API_KEY | head -c 10

# Recent error logs
tail -n 50 /path/to/your/app.log

Debug Mode

Enable debug logging for detailed troubleshooting:
// Node.js
process.env.DEBUG = 'openai:*';

// Or use console logging
const openai = new OpenAI({
  apiKey: process.env.PROMPTGUARD_API_KEY,
  baseURL: 'https://api.promptguard.co/api/v1',
  dangerouslyAllowBrowser: false, // Security check
  organization: undefined,
  project: undefined,
  defaultHeaders: {
    'X-Debug': 'true'
  }
});
Most issues can be resolved by following these troubleshooting steps. If you continue to experience problems, don’t hesitate to reach out to our support team with the diagnostic information above.