By the end of this guide, you’ll have PromptGuard protecting your React and Next.js applications. You’ll learn how to set up secure API routes that keep your credentials safe while providing a seamless user experience.
What you’ll learn
In this guide, you’ll learn how to:
Set up PromptGuard with Next.js API routes (App Router and Pages Router)
Create secure server-side proxies for AI requests
Handle security blocks gracefully in React components
Implement proper error handling and loading states
Set up environment variables securely
Deploy your protected application
Prerequisites
Before you begin, make sure you have:
✅ React 18+ or Next.js 13+ installed
✅ A PromptGuard account and API key (sign up here or get your API key)
✅ An existing OpenAI API key (PromptGuard uses a pass-through model)
✅ An existing React or Next.js project with OpenAI integration
Security-First Architecture
Never expose API keys in client-side code! Always route AI requests through your backend API to keep credentials secure.
The recommended architecture for React/Next.js apps:
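In short: the browser calls only your own Next.js API route; that route holds the PromptGuard key server-side and forwards the request to PromptGuard, which proxies to OpenAI. A minimal sketch of the client side (the `buildChatRequest` helper is hypothetical, not an SDK API):

```typescript
// Hypothetical client-side helper: the browser never holds an AI key.
// It only posts to your own Next.js API route, which proxies to PromptGuard.
export function buildChatRequest(message: string): {
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  };
}

// Usage in a component:
//   const res = await fetch('/api/chat', buildChatRequest(input));
```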
Next.js Integration
API Routes with PromptGuard
Create secure API routes that proxy to PromptGuard:
app/api/chat/route.ts (App Router)
pages/api/chat.ts (Pages Router)
```typescript
import { OpenAI } from 'openai';
import { NextRequest, NextResponse } from 'next/server';

// Initialize PromptGuard-protected client
const openai = new OpenAI({
  apiKey: process.env.PROMPTGUARD_API_KEY!,
  baseURL: 'https://api.promptguard.co/api/v1'
});

export async function POST(request: NextRequest) {
  try {
    const { message, model = 'gpt-4' } = await request.json();

    if (!message) {
      return NextResponse.json(
        { error: 'Message is required' },
        { status: 400 }
      );
    }

    const completion = await openai.chat.completions.create({
      model,
      messages: [{ role: 'user', content: message }]
    });

    return NextResponse.json({
      response: completion.choices[0].message.content,
      usage: completion.usage,
      protected_by: 'PromptGuard'
    });
  } catch (error: any) {
    console.error('AI request error:', error);

    // Handle PromptGuard security blocks
    if (error.message?.includes('policy_violation')) {
      return NextResponse.json(
        {
          error: 'Request blocked by security policy',
          type: 'security_block'
        },
        { status: 400 }
      );
    }

    // Handle rate limits
    if (error.status === 429) {
      return NextResponse.json(
        { error: 'Too many requests, please try again later' },
        { status: 429 }
      );
    }

    return NextResponse.json(
      { error: 'Internal server error' },
      { status: 500 }
    );
  }
}
```
Streaming API Routes
Support real-time streaming responses:
app/api/chat/stream/route.ts (App Router)
pages/api/chat/stream.ts (Pages Router)
```typescript
import { OpenAI } from 'openai';
import { NextRequest } from 'next/server';

const openai = new OpenAI({
  apiKey: process.env.PROMPTGUARD_API_KEY!,
  baseURL: 'https://api.promptguard.co/api/v1'
});

export async function POST(request: NextRequest) {
  try {
    const { message, model = 'gpt-4' } = await request.json();

    const stream = await openai.chat.completions.create({
      model,
      messages: [{ role: 'user', content: message }],
      stream: true
    });

    // Create a ReadableStream
    const encoder = new TextEncoder();
    const readable = new ReadableStream({
      async start(controller) {
        try {
          for await (const chunk of stream) {
            const content = chunk.choices[0]?.delta?.content || '';
            if (content) {
              controller.enqueue(
                encoder.encode(`data: ${JSON.stringify({ content })}\n\n`)
              );
            }
          }
          controller.enqueue(encoder.encode('data: [DONE]\n\n'));
          controller.close();
        } catch (error) {
          controller.error(error);
        }
      }
    });

    return new Response(readable, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      },
    });
  } catch (error: any) {
    console.error('Streaming error:', error);
    return new Response(
      JSON.stringify({ error: 'Streaming failed' }),
      { status: 500, headers: { 'Content-Type': 'application/json' } }
    );
  }
}
```
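On the client, the stream from this route has to be read incrementally and its `data:` lines decoded. A minimal sketch, assuming the frame format emitted above (`parseSSELines` is our own helper, not a library API):

```typescript
// Parse a decoded SSE text chunk into content strings.
// '[DONE]' signals end of stream, matching the route above.
export function parseSSELines(chunk: string): { contents: string[]; done: boolean } {
  const contents: string[] = [];
  let done = false;
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length).trim();
    if (payload === '[DONE]') {
      done = true;
    } else {
      try {
        contents.push(JSON.parse(payload).content ?? '');
      } catch {
        // this simple sketch ignores frames split across chunk boundaries
      }
    }
  }
  return { contents, done };
}

// Usage with fetch + a reader (inside a component or hook):
//   const res = await fetch('/api/chat/stream', { method: 'POST', /* ... */ });
//   const reader = res.body!.getReader();
//   const decoder = new TextDecoder();
//   while (true) {
//     const { value, done } = await reader.read();
//     if (done) break;
//     const { contents } = parseSSELines(decoder.decode(value));
//     contents.forEach(c => setText(prev => prev + c));
//   }
```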
Environment Configuration
Configure your Next.js environment securely:
```bash
# PromptGuard API Key (keep secure!)
PROMPTGUARD_API_KEY=pg_live_xxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxx

# Optional: Configure environment
NODE_ENV=development
NEXT_PUBLIC_APP_URL=http://localhost:3000
```
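Because the key must never reach the browser, it helps to fail fast when it's missing. A small startup guard, sketched here (`requireEnv` is our own helper, not a Next.js API):

```typescript
// lib/env.ts (hypothetical) — throw at module load if a server-side var is absent
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. at the top of an API route module:
//   const apiKey = requireEnv('PROMPTGUARD_API_KEY');
```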
React Frontend Components
Basic Chat Component
Create a secure chat interface:
components/ChatInterface.tsx
components/StreamingChat.tsx
```tsx
'use client';

import { useState } from 'react';

interface Message {
  id: string;
  content: string;
  role: 'user' | 'assistant';
  timestamp: Date;
}

interface ChatResponse {
  response?: string;
  error?: string;
  type?: string;
  protected_by?: string;
}

export default function ChatInterface() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const sendMessage = async (content: string) => {
    if (!content.trim()) return;

    // Add user message
    const userMessage: Message = {
      id: Date.now().toString(),
      content,
      role: 'user',
      timestamp: new Date()
    };

    setMessages(prev => [...prev, userMessage]);
    setInput('');
    setIsLoading(true);

    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: content })
      });

      const data: ChatResponse = await response.json();

      if (data.error) {
        // Handle different error types
        let errorMessage = 'Sorry, something went wrong.';

        if (data.type === 'security_block') {
          errorMessage = 'Your message was blocked by our security system. Please try rephrasing.';
        } else if (response.status === 429) {
          errorMessage = 'Too many requests. Please wait a moment and try again.';
        }

        const errorMessageObj: Message = {
          id: (Date.now() + 1).toString(),
          content: errorMessage,
          role: 'assistant',
          timestamp: new Date()
        };
        setMessages(prev => [...prev, errorMessageObj]);
      } else if (data.response) {
        // Add assistant response
        const assistantMessage: Message = {
          id: (Date.now() + 1).toString(),
          content: data.response,
          role: 'assistant',
          timestamp: new Date()
        };
        setMessages(prev => [...prev, assistantMessage]);
      }
    } catch (error) {
      console.error('Chat error:', error);
      const errorMessage: Message = {
        id: (Date.now() + 1).toString(),
        content: 'Network error. Please check your connection and try again.',
        role: 'assistant',
        timestamp: new Date()
      };
      setMessages(prev => [...prev, errorMessage]);
    } finally {
      setIsLoading(false);
    }
  };

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    sendMessage(input);
  };

  return (
    <div className="flex flex-col h-96 border rounded-lg">
      {/* Messages */}
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${
              message.role === 'user' ? 'justify-end' : 'justify-start'
            }`}
          >
            <div
              className={`max-w-xs px-4 py-2 rounded-lg ${
                message.role === 'user'
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-200 text-gray-800'
              }`}
            >
              {message.content}
            </div>
          </div>
        ))}
        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-200 text-gray-800 px-4 py-2 rounded-lg">
              <div className="flex space-x-1">
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce"></div>
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce delay-100"></div>
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce delay-200"></div>
              </div>
            </div>
          </div>
        )}
      </div>

      {/* Input */}
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex space-x-2">
          <input
            type="text"
            value={input}
            onChange={(e) => setInput(e.target.value)}
            placeholder="Type your message..."
            className="flex-1 px-3 py-2 border rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={isLoading || !input.trim()}
            className="px-4 py-2 bg-blue-500 text-white rounded-md hover:bg-blue-600 disabled:opacity-50 disabled:cursor-not-allowed"
          >
            Send
          </button>
        </div>
      </form>

      {/* Security Badge */}
      <div className="px-4 pb-2">
        <span className="text-xs text-gray-500 flex items-center">
          🛡️ Protected by PromptGuard
        </span>
      </div>
    </div>
  );
}
```
Custom Hooks for AI Integration
Create reusable hooks for AI functionality:
hooks/useChat.ts
hooks/useStreamingChat.ts
```typescript
import { useState, useCallback } from 'react';

interface Message {
  id: string;
  content: string;
  role: 'user' | 'assistant';
  timestamp: Date;
}

interface UseChatOptions {
  onError?: (error: string, type?: string) => void;
  maxMessages?: number;
}

export function useChat(options: UseChatOptions = {}) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [isLoading, setIsLoading] = useState(false);

  const sendMessage = useCallback(async (content: string) => {
    if (!content.trim() || isLoading) return;

    const userMessage: Message = {
      id: Date.now().toString(),
      content,
      role: 'user',
      timestamp: new Date()
    };

    setMessages(prev => {
      const newMessages = [...prev, userMessage];
      // Limit message history if specified
      if (options.maxMessages && newMessages.length > options.maxMessages) {
        return newMessages.slice(-options.maxMessages);
      }
      return newMessages;
    });
    setIsLoading(true);

    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: content })
      });

      const data = await response.json();

      if (data.error) {
        options.onError?.(data.error, data.type);
        return;
      }

      if (data.response) {
        const assistantMessage: Message = {
          id: (Date.now() + 1).toString(),
          content: data.response,
          role: 'assistant',
          timestamp: new Date()
        };
        setMessages(prev => [...prev, assistantMessage]);
      }
    } catch (error) {
      options.onError?.('Network error occurred');
    } finally {
      setIsLoading(false);
    }
  }, [isLoading, options]);

  const clearMessages = useCallback(() => {
    setMessages([]);
  }, []);

  return {
    messages,
    isLoading,
    sendMessage,
    clearMessages
  };
}
```
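The `onError` callback is a natural place to translate PromptGuard block types into user-facing copy. A sketch of such a mapper (the helper and its wording are ours, mirroring the messages used in the components above):

```typescript
// Map an API error (and optional PromptGuard block type) to a message
// that is safe to show end users. 'security_block' matches the type
// returned by the API route in this guide.
export function errorToUserMessage(error: string, type?: string): string {
  if (type === 'security_block') {
    return 'Your message was blocked by our security system. Please try rephrasing.';
  }
  if (/too many requests/i.test(error)) {
    return 'Too many requests. Please wait a moment and try again.';
  }
  return 'Sorry, something went wrong.';
}

// Usage with the hook:
//   const { messages, sendMessage } = useChat({
//     onError: (error, type) => setBanner(errorToUserMessage(error, type)),
//   });
```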
Server-Side Rendering (SSR)
Handle AI content generation during SSR:
pages/ai-content/[slug].tsx
app/ai-content/[slug]/page.tsx (App Router)
```tsx
import { GetServerSideProps } from 'next';
import { OpenAI } from 'openai';

interface AIContentPageProps {
  content: string;
  topic: string;
  error?: string;
}

export default function AIContentPage({ content, topic, error }: AIContentPageProps) {
  if (error) {
    return (
      <div className="p-8">
        <h1 className="text-2xl font-bold text-red-600">Error</h1>
        <p>{error}</p>
      </div>
    );
  }

  return (
    <div className="p-8 max-w-4xl mx-auto">
      <h1 className="text-3xl font-bold mb-6">{topic}</h1>
      <div className="prose prose-lg">
        {content.split('\n').map((paragraph, index) => (
          <p key={index}>{paragraph}</p>
        ))}
      </div>
      <div className="mt-8 text-sm text-gray-500">
        🛡️ Generated with PromptGuard protection
      </div>
    </div>
  );
}

export const getServerSideProps: GetServerSideProps = async (context) => {
  const { slug } = context.params!;
  const topic = Array.isArray(slug) ? slug.join(' ') : slug;

  try {
    const openai = new OpenAI({
      apiKey: process.env.PROMPTGUARD_API_KEY!,
      baseURL: 'https://api.promptguard.co/api/v1'
    });

    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{
        role: 'user',
        content: `Write a comprehensive article about ${topic}. Make it informative and well-structured.`
      }],
      max_tokens: 1000
    });

    return {
      props: {
        content: completion.choices[0].message.content || '',
        topic: topic.charAt(0).toUpperCase() + topic.slice(1)
      }
    };
  } catch (error: any) {
    console.error('SSR AI generation error:', error);
    return {
      props: {
        content: '',
        topic,
        error: error.message?.includes('policy_violation')
          ? 'Content generation was blocked by security policy'
          : 'Failed to generate content'
      }
    };
  }
};
```
Error Boundaries
Create error boundaries for AI-powered components:
components/AIErrorBoundary.tsx
components/SecurityErrorBoundary.tsx
```tsx
'use client';

import { Component, ErrorInfo, ReactNode } from 'react';

interface Props {
  children: ReactNode;
  fallback?: ReactNode;
}

interface State {
  hasError: boolean;
  error?: Error;
}

export class AIErrorBoundary extends Component<Props, State> {
  public state: State = {
    hasError: false
  };

  public static getDerivedStateFromError(error: Error): State {
    return { hasError: true, error };
  }

  public componentDidCatch(error: Error, errorInfo: ErrorInfo) {
    console.error('AI component error:', error, errorInfo);
  }

  public render() {
    if (this.state.hasError) {
      if (this.props.fallback) {
        return this.props.fallback;
      }

      return (
        <div className="p-4 border border-red-200 rounded-lg bg-red-50">
          <h3 className="text-lg font-semibold text-red-800">
            AI Service Unavailable
          </h3>
          <p className="text-red-600 mt-2">
            We're experiencing issues with our AI service. Please try again later.
          </p>
          <button
            onClick={() => this.setState({ hasError: false, error: undefined })}
            className="mt-3 px-4 py-2 bg-red-600 text-white rounded hover:bg-red-700"
          >
            Try Again
          </button>
        </div>
      );
    }

    return this.props.children;
  }
}

// Usage wrapper
export function withAIErrorBoundary<P extends object>(
  Component: React.ComponentType<P>
) {
  return function WrappedComponent(props: P) {
    return (
      <AIErrorBoundary>
        <Component {...props} />
      </AIErrorBoundary>
    );
  };
}
```
Performance Optimization
Optimize your React/Next.js app for AI workloads:
utils/aiCache.ts
hooks/useCachedAI.ts
```typescript
// Client-side caching for AI responses
class AICache {
  private cache = new Map<string, { response: string; timestamp: number }>();
  private ttl = 5 * 60 * 1000; // 5 minutes

  generateKey(message: string, model: string = 'gpt-4'): string {
    return `${model}:${message}`;
  }

  get(key: string): string | null {
    const entry = this.cache.get(key);
    if (!entry) return null;

    if (Date.now() - entry.timestamp > this.ttl) {
      this.cache.delete(key);
      return null;
    }

    return entry.response;
  }

  set(key: string, response: string): void {
    this.cache.set(key, {
      response,
      timestamp: Date.now()
    });

    // Clean up old entries (evict the oldest insertion once over 100)
    if (this.cache.size > 100) {
      const oldestKey = this.cache.keys().next().value;
      if (oldestKey) this.cache.delete(oldestKey);
    }
  }

  clear(): void {
    this.cache.clear();
  }
}

export const aiCache = new AICache();
```
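A `useCachedAI`-style hook would wrap this cache around the network call. The core cache-or-fetch logic, separated from React so it stands alone (the `getOrFetch` helper and `ResponseCache` interface are our own sketch):

```typescript
// Minimal interface matching the AICache class above (names assumed).
interface ResponseCache {
  generateKey(message: string, model?: string): string;
  get(key: string): string | null;
  set(key: string, response: string): void;
}

// Check the cache first; on a miss, call the fetcher and store the result.
export async function getOrFetch(
  cache: ResponseCache,
  message: string,
  fetcher: (message: string) => Promise<string>,
  model = 'gpt-4'
): Promise<string> {
  const key = cache.generateKey(message, model);
  const cached = cache.get(key);
  if (cached !== null) return cached;

  const response = await fetcher(message);
  cache.set(key, response);
  return response;
}
```

A hypothetical `useCachedAI` hook would call `getOrFetch(aiCache, message, m => fetch('/api/chat', …).then(r => r.json()).then(d => d.response))` inside its send function and keep the result in state.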
Coming Soon: Native React SDK
This SDK is not yet available. The @promptguard/react package does not exist on npm. The code below is a preview of our planned React SDK. For now, use API routes with the official OpenAI SDK (see examples above). ETA: Q2 2025 (based on customer demand)
PromptGuard currently works through API routes for security. We’re building a native React SDK that will provide enhanced client-side capabilities while maintaining security.
Preview of Future React SDK
```tsx
// 🚧 PREVIEW ONLY - Package not yet published
// For now, use API routes (see examples above)
import { PromptGuardProvider, usePromptGuard } from '@promptguard/react';

// Provider setup
function App() {
  return (
    <PromptGuardProvider
      apiKey="your_public_key"
      config={{
        // Note: Security is configured via project presets in dashboard
        // Available presets: default, support_bot, code_assistant, rag_system, data_analysis, creative_writing
        enableClientFiltering: true,
        customPolicies: ['no-pii', 'family-friendly']
      }}
    >
      <ChatApp />
    </PromptGuardProvider>
  );
}

// Hook usage
function ChatComponent() {
  const { sendMessage, isLoading, securityStatus } = usePromptGuard();

  const handleMessage = async (content: string) => {
    const result = await sendMessage({
      content,
      model: 'gpt-4',
      securityOverrides: {
        // Note: Security is configured via project presets in dashboard
        customFilters: ['technical-only']
      }
    });

    if (result.blocked) {
      console.log('Security reason:', result.reason);
    } else {
      console.log('Response:', result.response);
    }
  };

  return (
    <div>
      <SecurityIndicator status={securityStatus} />
      {/* Chat interface */}
    </div>
  );
}
```
Planned features for React SDK:
Client-side input validation and filtering
Real-time security status indicators
Built-in UI components for secure chat
TypeScript-first with full type safety
Automatic retry logic and error handling
Integration with popular React frameworks
SSR and SSG support out of the box
Deployment Considerations
Vercel Deployment
Deploy your PromptGuard-powered Next.js app to Vercel:
```json
{
  "env": {
    "PROMPTGUARD_API_KEY": "@promptguard-api-key"
  },
  "functions": {
    "app/api/chat/route.ts": {
      "maxDuration": 30
    },
    "app/api/chat/stream/route.ts": {
      "maxDuration": 60
    }
  }
}
```
Environment-Specific Configuration
```typescript
const environments = {
  development: {
    promptguardBaseUrl: 'https://api.promptguard.co/api/v1',
    logLevel: 'debug',
    enableCaching: false
  },
  staging: {
    promptguardBaseUrl: 'https://api.promptguard.co/api/v1',
    logLevel: 'info',
    enableCaching: true
  },
  production: {
    promptguardBaseUrl: 'https://api.promptguard.co/api/v1',
    logLevel: 'error',
    enableCaching: true
  }
};

export const config =
  environments[process.env.NODE_ENV as keyof typeof environments] ||
  environments.development;
```
Troubleshooting
API Routes Not Working
Solutions:
Verify PROMPTGUARD_API_KEY in environment variables
Check API route file paths and export syntax
Ensure Next.js version compatibility (13+ for App Router)
Streaming Responses Cut Off
Solutions:
Increase function timeout in deployment settings
Check for proxy timeouts in production
Verify SSE header configuration
High Latency in Development
Solutions:
Enable request caching for repeated queries
Use development mode optimizations
Consider connection pooling for high traffic
Security Blocks in Production
Solutions:
Review security policy settings in dashboard
Implement graceful error handling for blocks
Consider adjusting security level for your use case
Need more help? Contact support or check our troubleshooting guide.