AI chat
The boilerplate includes professional AI chat templates that you can use to quickly add AI capabilities to your application. These templates support multiple AI providers with streaming responses and modern chat UX.
Accessing the templates
View the working AI templates at:
- /ai - Tabbed interface with both Nuxt UI and shadcn versions
- /ai/chat/[id] - Full-featured chat interface with conversation history
What's included
The AI chat templates feature:
- Multiple AI providers - ChatGPT (OpenAI), Claude (Anthropic), and Grok (xAI)
- Two UI variants - shadcn-vue and Nuxt UI implementations
- Streaming responses - Real-time token-by-token output
- Conversation history - Persistent chat sessions with the Nuxt UI version
- Model selection - Easy switching between AI models
- Markdown rendering - Formatted responses with code highlighting
- Copy functionality - Copy AI responses to clipboard
- Error handling - Graceful error states and retry logic
- Dark mode support - Respects user preferences
- Mobile responsive - Works great on all devices
Two template variants
Shadcn version
The shadcn version (/ai → Shadcn tab) provides a simple, direct AI interface perfect for one-off interactions.
Use this when you need:
- Simple question/answer interface
- No conversation history required
- Quick AI interactions
- Full control over UI customization
Key features:
- Clean, minimal interface
- Direct streaming without persistence
- Prompt input with validation
- Real-time response display
Nuxt UI version
The Nuxt UI version (/ai → Nuxt UI tab or /ai/chat/[id]) provides a full-featured chat experience with persistent conversations.
Use this when you need:
- Multi-turn conversations
- Conversation history storage
- Chat session management
- Professional chat UX
Key features:
- Creates persistent chat sessions
- Quick prompt suggestions
- Message regeneration
- Copy message functionality
- Chat session list in sidebar
Getting started
1. Configure AI providers
Add your API keys to .env:
# OpenAI (for GPT models)
OPENAI_API_KEY="sk-..."
# Anthropic (for Claude models)
ANTHROPIC_API_KEY="sk-ant-..."
# Grok / xAI (for Grok models)
GROK_API_KEY="xai-..."
You only need to configure the providers you plan to use. API keys are available from each provider's dashboard (OpenAI, Anthropic, and xAI).
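If you need these keys elsewhere on the server, Nuxt's runtimeConfig is one clean way to expose them. A minimal sketch, assuming the boilerplate doesn't already wire this up for you:

// nuxt.config.ts — hypothetical wiring; adjust to the boilerplate's existing config
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-only values, populated from .env at startup
    openaiApiKey: process.env.OPENAI_API_KEY,
    anthropicApiKey: process.env.ANTHROPIC_API_KEY,
    grokApiKey: process.env.GROK_API_KEY,
  },
})

Server handlers can then read these with useRuntimeConfig(event) instead of touching process.env directly.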
2. Try the demo pages
Navigate to /ai in your application to see both template variants in action. Switch between tabs to compare the implementations.
3. Choose your template
Decide which template variant fits your use case:
- Use shadcn version for simple AI features
- Use Nuxt UI version for full chat applications
Using the templates
Simple AI interface (shadcn)
The shadcn version is located at app/components/ai/AiInterfaceShadcn.vue. Use it directly in your pages:
<script setup lang="ts">
import AiInterfaceShadcn from '@/components/ai/AiInterfaceShadcn.vue'
</script>
<template>
<div class="container mx-auto py-8">
<AiInterfaceShadcn />
</div>
</template>
Chat application (Nuxt UI)
The Nuxt UI version provides a complete chat experience with these key pages:
Main chat interface (app/pages/ai/chat/[id].vue):
- Displays conversation history
- Handles streaming responses
- Supports message regeneration
- Shows thinking state for reasoning models
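To land users on this page with a fresh conversation, create a chat first and navigate to it. A sketch, assuming POST /api/chats (documented under API endpoints below) returns the created chat with its id:

// In any component, e.g. a "New chat" button handler
async function startNewChat() {
  const chat = await $fetch<{ id: string }>('/api/chats', { method: 'POST' })
  await navigateTo(`/ai/chat/${chat.id}`)
}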
Chat list page - Create a page to list user's chats:
<script setup lang="ts">
const { data: chats } = await useFetch('/api/chats')
</script>
<template>
<div class="space-y-4">
<h1>Your conversations</h1>
<div v-for="chat in chats" :key="chat.id">
<NuxtLink :to="`/ai/chat/${chat.id}`">
{{ chat.title || 'New chat' }}
</NuxtLink>
</div>
</div>
</template>
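The same page is a natural home for deletion via the DELETE endpoint (see API endpoints below). A sketch, assuming the useFetch call above is extended to expose refresh:

<script setup lang="ts">
const { data: chats, refresh } = await useFetch('/api/chats')

async function deleteChat(id: string) {
  await $fetch(`/api/chats/${id}`, { method: 'DELETE' })
  await refresh() // re-fetch the list so the UI updates
}
</script>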
Customization
Changing AI models
The default models are configured in server/api/ai/stream.ts:
const models = {
chatgpt: 'gpt-4o-mini',
claude: 'claude-3-5-haiku-latest',
grok: 'grok-4',
}
To use more capable models, update the configuration:
const models = {
chatgpt: 'gpt-4o', // More capable GPT-4
claude: 'claude-3-5-sonnet-latest', // More capable Claude
grok: 'grok-vision-beta', // Grok with vision
}
Adjusting response parameters
Customize AI behavior by modifying the request parameters:
await fetch('/api/ai/stream', {
method: 'POST',
body: JSON.stringify({
model: 'chatgpt',
prompt: 'Your prompt here',
temperature: 0.7, // 0-2: Lower = focused, higher = creative
max_tokens: 2000, // Max response length
top_p: 0.95, // Alternative to temperature
}),
})
Common temperature values:
- 0.3 - Factual responses, code generation
- 0.7 - Balanced creativity (recommended)
- 1.2 - Creative writing, brainstorming
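On the client, read the streamed body chunk by chunk rather than waiting for the full response. A sketch using the Fetch API's reader; the exact event framing depends on how the endpoint formats its Server-Sent Events, so adapt the parsing accordingly:

const response = await fetch('/api/ai/stream', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'chatgpt', prompt: 'Your prompt here' }),
})

const reader = response.body!.getReader()
const decoder = new TextDecoder()
let output = ''

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  output += decoder.decode(value, { stream: true }) // append tokens as they arrive
}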
Styling and branding
Both template variants use your app's design system:
Shadcn version:
- Uses shadcn-vue components from app/components/ui/
- Customize by modifying component variants
- Follows your Tailwind configuration
Nuxt UI version:
- Uses Nuxt UI components (UChatMessages, UChatPrompt)
- Customize with Nuxt UI theming
- Modify the ai layout for sidebar customization
Building on the templates
Add authentication
Protect AI endpoints so only logged-in users can access them:
Add requireAuth() calls in:
- server/api/chats/index.post.ts
- server/api/chats/[id].get.ts
- server/api/chats/[id].post.ts
export default defineEventHandler(async event => {
// Require authentication
const userId = await requireAuth(event)
// ... rest of the handler
})
Add subscription gating
Limit AI access to paying subscribers:
import { requireSubscription } from '@@/server/utils/require-subscription'
export default defineEventHandler(async event => {
// Require pro or enterprise subscription
await requireSubscription(event, { plans: ['pro', 'enterprise'] })
// ... rest of the handler
})
Customize rate limits
The AI endpoint includes rate limiting (5 requests per 5 minutes by default). Adjust in server/api/ai/stream.ts:
await rateLimit(event, {
max: 5, // Number of requests
window: '5m', // Time window
prefix: 'ai-stream',
})
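For the per-user limits recommended under Scaling below, one option is to key the bucket on the authenticated user. A sketch, assuming the rateLimit helper uses prefix as the bucket key:

const userId = await requireAuth(event)

await rateLimit(event, {
  max: 20, // more generous per-user allowance
  window: '1h',
  prefix: `ai-stream:${userId}`, // separate bucket per user
})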
Add usage tracking
Track AI usage for monitoring and billing:
// After validation
await prisma.aiUsage.create({
data: {
userId: event.context.user.id,
model,
promptTokens: Math.ceil(prompt.length / 4), // rough estimate (~4 chars per token)
completionTokens: 0, // update after the response completes
},
})
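To fill in completionTokens once streaming finishes, keep the created record's id and update it afterwards. A sketch reusing the same rough estimate; fullResponse is a hypothetical variable accumulating the streamed text:

const usage = await prisma.aiUsage.create({
  data: {
    userId: event.context.user.id,
    model,
    promptTokens: Math.ceil(prompt.length / 4),
    completionTokens: 0,
  },
})

// After the stream completes
await prisma.aiUsage.update({
  where: { id: usage.id },
  data: { completionTokens: Math.ceil(fullResponse.length / 4) },
})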
Database schema
The Nuxt UI chat template uses these Prisma models for conversation persistence:
model Chat {
id String @id @default(cuid())
title String?
userId String
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
messages Message[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Message {
id String @id @default(cuid())
chatId String
chat Chat @relation(fields: [chatId], references: [id], onDelete: Cascade)
role String
content String @db.Text
createdAt DateTime @default(now())
}
These are already included in your Prisma schema if you're using the Nuxt UI version.
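With these models in place, the chat-detail endpoint reduces to a user-scoped Prisma query. A sketch of what server/api/chats/[id].get.ts might look like; the shipped handler may differ in details:

export default defineEventHandler(async event => {
  const userId = await requireAuth(event)
  const id = getRouterParam(event, 'id')

  // Scope by userId so users can only read their own chats
  const chat = await prisma.chat.findFirst({
    where: { id, userId },
    include: { messages: { orderBy: { createdAt: 'asc' } } },
  })

  if (!chat) {
    throw createError({ statusCode: 404, statusMessage: 'Chat not found' })
  }

  return chat
})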
API endpoints
The templates use these API endpoints:
Stream endpoint
POST /api/ai/stream - Stream AI responses without persistence
// Request
{
model: 'chatgpt' | 'claude' | 'grok',
prompt: string,
temperature?: number,
max_tokens?: number,
top_p?: number
}
// Response: Server-Sent Events stream
Chat endpoints
GET /api/chats - List user's chats
POST /api/chats - Create new chat
GET /api/chats/[id] - Get chat with messages
POST /api/chats/[id] - Send message in chat (returns stream)
DELETE /api/chats/[id] - Delete chat
Best practices
User experience
- Loading states - Show clear indicators during streaming
- Error handling - Display friendly error messages
- Empty states - Guide users on how to start
- Copy functionality - Let users copy AI responses
- Mobile optimization - Test on small screens
Performance
- Streaming - Always use streaming for better perceived performance
- Rate limiting - Protect against abuse and API costs
- Caching - Cache static content, not AI responses
- Lazy loading - Load chat history incrementally
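For incremental history loading, cursor-based pagination over the Message model keeps queries cheap. A server-side sketch with hypothetical cursor and take query parameters:

// e.g. GET /api/chats/[id]?cursor=<messageId>&take=50 — hypothetical params
const chatId = getRouterParam(event, 'id')
const { cursor, take = 50 } = getQuery(event)

const messages = await prisma.message.findMany({
  where: { chatId },
  orderBy: { createdAt: 'desc' },
  take: Number(take),
  // Resume after the last message the client already has
  ...(cursor ? { cursor: { id: String(cursor) }, skip: 1 } : {}),
})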
Security
- Authentication - Require login for AI features
- Input validation - Validate and sanitize all inputs (see the sketch after this list)
- Rate limiting - Prevent abuse with appropriate limits
- API key security - Never expose keys in client code
- Content filtering - Consider adding content moderation
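For the stream endpoint, schema validation on the request body (shape documented under API endpoints above) is a simple safeguard. A sketch using zod, assuming it's available in the project:

import { z } from 'zod'

const streamBodySchema = z.object({
  model: z.enum(['chatgpt', 'claude', 'grok']),
  prompt: z.string().min(1).max(4000),
  temperature: z.number().min(0).max(2).optional(),
  max_tokens: z.number().int().positive().optional(),
  top_p: z.number().min(0).max(1).optional(),
})

export default defineEventHandler(async event => {
  // Throws on invalid input before any provider call is made
  const body = streamBodySchema.parse(await readBody(event))
  // ... pass the validated body to the provider call
})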
Cost management
- Model selection - Start with cheaper models (gpt-4o-mini, claude-haiku)
- Token limits - Set reasonable max_tokens limits
- Usage tracking - Monitor API costs per user
- Subscription gating - Limit expensive features to paid tiers
- Prompt optimization - Keep prompts concise
Production considerations
Monitoring
Track important metrics:
- API response times
- Error rates
- Token usage per user
- Model distribution
- User satisfaction
Scaling
For high-traffic applications:
- Implement user-based rate limits
- Add Redis for rate limit storage
- Consider API response caching for common queries
- Monitor provider rate limits
- Have fallback providers configured
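A basic fallback pattern, sketched with the model names from this template; a production version would distinguish retryable errors from hard failures:

const providers = ['chatgpt', 'claude', 'grok'] as const

async function streamWithFallback(prompt: string): Promise<Response> {
  for (const model of providers) {
    // Try each provider in order of preference
    const response = await fetch('/api/ai/stream', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt }),
    })
    if (response.ok) return response // consume the stream as shown earlier
    console.warn(`Provider ${model} failed (${response.status}), trying next`)
  }
  throw new Error('All AI providers failed')
}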
Compliance
Depending on your use case:
- Review AI provider terms of service
- Implement content moderation
- Add user consent for AI features
- Consider data retention policies
- Review privacy implications
Reference
- AI integration documentation - Detailed API information
- OpenAI documentation
- Anthropic documentation
- xAI documentation
- shadcn-vue components
- Nuxt UI AI components