A modern, feature-rich AI chat interface template built with Next.js. Users can configure their own API keys for multiple LLM providers, customize the interface, and integrate it into their projects. Perfect for developers who want to build AI-powered chat applications with their own API keys.
Senol Dogan
Email: senoldogan02@hotmail.com
GitHub: @senoldogann
- OpenAI - GPT-4, GPT-4o, GPT-3.5-turbo, o1-preview, o1-mini
- Anthropic - Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
- Google - Gemini 2.0 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash
- Hugging Face - Qwen3-Omni, Qwen2.5, Llama 3.1, Mistral, and more
- Ollama - Local and cloud models (Llama 3.1, Mistral, CodeLlama, etc.)
- OpenRouter - Access to multiple models through one API
- QrokCloud - Custom provider support
- GitHub Copilot - GitHub Copilot API integration
- Dark/Light Theme - Automatic theme switching with system preference support
- Responsive Design - Works seamlessly on desktop, tablet, and mobile
- Real-time Streaming - Live streaming responses with typing indicators
- Message Editing - Edit and resend messages
- Message Search - Full-text search across all conversations
- Sidebar Navigation - Collapsible sidebar with chat history
- Image Support - Upload and display images in chat
- Markdown Rendering - Beautiful markdown rendering with syntax highlighting
- Export Conversations - Export chats as PDF, Markdown, or JSON
- Print Support - Print-friendly view for conversations
- Calculator - High-precision mathematical calculations
- Web Search - DuckDuckGo integration for real-time web search
- File Processing - Support for CSV, JSON, Excel, and text files
- Financial APIs - Real-time financial data (stocks, crypto, forex)
- Prompt Improvement - AI-powered prompt enhancement
- Native Function Calling - AI can directly use tools during conversation
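To illustrate how native function calling typically works, the sketch below shows an OpenAI-style JSON-schema tool description plus a local dispatcher that executes the structured call the model returns. This is illustrative only: the tool name `calculate` and the `dispatchToolCall` helper are hypothetical, not the template's actual code (see `lib/tools/` for the real implementations).

```typescript
// Hypothetical example: an OpenAI-style tool definition plus a local dispatcher.
// Names here are illustrative; the template's real tool registry lives in lib/tools/.

type ToolCall = { name: string; arguments: Record<string, unknown> };

// JSON-schema description sent to the provider so the model knows the tool exists.
const calculatorTool = {
  type: "function",
  function: {
    name: "calculate",
    description: "Evaluate a basic arithmetic expression",
    parameters: {
      type: "object",
      properties: { expression: { type: "string" } },
      required: ["expression"],
    },
  },
};

// When the model returns a tool call, the app runs it and feeds the result
// back into the conversation as a tool message.
function dispatchToolCall(call: ToolCall): string {
  if (call.name === "calculate") {
    const expr = String(call.arguments.expression);
    // Toy evaluator: only handles "a + b" / "a * b" style input for the sketch.
    const [a, op, b] = expr.split(" ");
    const x = Number(a);
    const y = Number(b);
    const result = op === "+" ? x + y : op === "*" ? x * y : NaN;
    return String(result);
  }
  throw new Error(`Unknown tool: ${call.name}`);
}
```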
- Input Validation - Comprehensive input sanitization and validation
- Prompt Injection Prevention - Advanced pattern detection and sanitization
- CSRF Protection - Built-in CSRF token validation
- Rate Limiting - Configurable rate limiting per user/IP
- Security Headers - XSS, clickjacking, and other security headers
- Environment Validation - Automatic environment variable validation
- Prisma ORM - Type-safe database access
- PostgreSQL - Production-ready database support
- Message Persistence - All conversations saved to database
- Chat Management - Create, delete, and manage multiple chats
- Node.js 18+ and npm/yarn/pnpm
- PostgreSQL database (or use SQLite for development)
- At least one LLM provider API key
Tech Stack:
- Next.js 16.0.1
- React 19.2.0
- TypeScript 5
- Prisma 6.19.0
- Tailwind CSS 4
1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/ai-chat-template.git
   cd ai-chat-template
   ```

2. Install dependencies

   ```bash
   npm install
   # or
   yarn install
   # or
   pnpm install
   ```

3. Set up environment variables

   Create a `.env` file in the root directory. You can copy the example file:

   ```bash
   cp .env.example .env
   ```

   Then edit `.env` and fill in your values:

   ```env
   # Database (Required)
   DATABASE_URL="postgresql://user:password@localhost:5432/ai_chat"

   # LLM Provider API Keys (Required - at least one provider must be configured)
   # Add API keys for the providers you want to use:
   OPENAI_API_KEY="sk-..."
   ANTHROPIC_API_KEY="sk-ant-..."
   GOOGLE_API_KEY="..."
   OLLAMA_API_KEY="..."           # Optional for local Ollama, required for cloud
   OPENROUTER_API_KEY="sk-or-..."
   QROKCLOUD_API_KEY="..."
   GITHUB_COPILOT_API_KEY="..."
   HUGGINGFACE_API_KEY="hf_..."   # or HF_API_KEY or HF_API
   ```

   Important:

   - API keys must be configured in the `.env` file - UI configuration is not available
   - At least one LLM provider must be configured with an API key
   - For Ollama local mode, you can omit `OLLAMA_API_KEY` (it will use `http://localhost:11434`)
   - For Ollama cloud mode, you must set `OLLAMA_API_KEY` (it will use `https://ollama.com/api`)
   - See `.env.example` for all available options and detailed descriptions

4. Set up the database

   ```bash
   # Generate Prisma Client
   npm run prisma:generate

   # Run migrations
   npm run prisma:migrate
   ```

5. Start the development server

   ```bash
   npm run dev
   ```

6. Open your browser

   Navigate to http://localhost:3000
Basic chat:

- Configure API keys in the `.env` file (see Configuration section below)
- Select a provider from the top bar
- Select a model from the dropdown
- Type your message in the input field
- Press Enter or click Send
- The AI will respond with streaming text
Uploading images:

- Click the `+` button in the input area
- Select "Upload Image"
- Choose an image file
- The image will be displayed in the chat and sent to the AI
Web search:

- Click the `+` button in the input area
- Select "Web Search"
- Type your search query
- The AI will search the web and provide up-to-date information
Improving prompts:

- Click the `+` button in the input area
- Select "Improve Prompt"
- Type your prompt
- The AI will suggest improvements
Editing messages:

- Click on any message you sent
- Click the edit icon
- Modify the message
- Click "Resend" to send the updated message
Searching messages:

- Click the search icon in the sidebar
- Type your search query
- Browse through matching messages
Exporting conversations:

- Open a chat conversation
- Click the download icon in the top bar
- Select export format (PDF, Markdown, or JSON)
- The file will be downloaded automatically
All API keys must be configured in the `.env` file. UI configuration is not available.
Add provider configuration to your `.env` file:
```env
# Provider selection
LLM_PROVIDER=openai

# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o
OPENAI_TEMPERATURE=0.7
OPENAI_MAX_TOKENS=2000

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022
ANTHROPIC_TEMPERATURE=0.7

# Google
GOOGLE_API_KEY=...
GOOGLE_MODEL=gemini-2.0-flash-exp
GOOGLE_TEMPERATURE=0.7

# Hugging Face
HF_API=hf_...
HUGGINGFACE_MODEL=Qwen/Qwen3-Omni-30B-A3B-Instruct

# Ollama (Local)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.1

# Ollama (Cloud)
OLLAMA_API_KEY=...
OLLAMA_BASE_URL=https://ollama.com/api
OLLAMA_MODEL=deepseek-v3.1:671b-cloud
```
Database configuration:

For PostgreSQL:

```env
DATABASE_URL="postgresql://user:password@localhost:5432/ai_chat"
```

For SQLite (development):

```env
DATABASE_URL="file:./dev.db"
```

The template includes built-in security features. You can customize them in:

- `lib/security/validation.ts` - Input validation rules
- `lib/prompt-sanitizer.ts` - Prompt injection prevention
- `lib/security/rate-limiter.ts` - Rate limiting configuration
- `middleware.ts` - Security headers and CSRF protection
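As a rough idea of what a per-key rate limit like the one configured in `lib/security/rate-limiter.ts` does, here is a minimal fixed-window counter sketch. It is illustrative only, not the template's actual implementation, which may use a different algorithm (e.g. sliding window or token bucket):

```typescript
// Minimal fixed-window rate limiter sketch (illustrative; not the template's code).
// Allows `limit` requests per `windowMs` for each key (e.g. a user ID or IP).

type Window = { count: number; resetAt: number };

class RateLimiter {
  private windows = new Map<string, Window>();
  constructor(private limit: number, private windowMs: number) {}

  /** Returns true if the request is allowed, false if the key is over its limit. */
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now >= w.resetAt) {
      // No window yet, or the previous window expired: start a fresh one.
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count >= this.limit) return false; // over the limit for this window
    w.count++;
    return true;
  }
}
```

A middleware would call `allow(ip)` per request and respond with HTTP 429 when it returns false.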
```
ai-chat-template/
├── app/
│   ├── api/                  # API routes
│   │   ├── chat/             # Chat API endpoints
│   │   ├── chats/            # Chat management
│   │   ├── llm/              # LLM provider configuration
│   │   └── tools/            # Tool endpoints
│   ├── components/           # React components
│   │   ├── Chat.tsx          # Main chat component
│   │   ├── InputArea.tsx     # Input component
│   │   ├── MessageBubble.tsx # Message display
│   │   └── ...
│   ├── contexts/             # React contexts
│   └── types/                # TypeScript types
├── lib/
│   ├── llm/                  # LLM provider implementations
│   │   └── providers/        # Individual provider files
│   ├── security/             # Security utilities
│   ├── tools/                # Tool implementations
│   └── utils/                # Utility functions
├── prisma/
│   ├── schema.prisma         # Database schema
│   └── migrations/           # Database migrations
└── public/                   # Static assets
```
Send messages to the AI.
Request Body:

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "chatId": "optional-chat-id",
  "provider": "openai",
  "model": "gpt-4o",
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": true
}
```

Response:

- Streaming response (SSE format) if `stream: true`
- JSON response if `stream: false`
Note: The `messages` array should contain the full conversation history. Each message must have `role` (`user`, `assistant`, or `system`) and `content` fields.
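Since the streaming mode uses SSE, the response body arrives as `data: <json>` lines. The parser sketch below shows how a client could extract content deltas from a decoded chunk. The exact event payload shape (`{"content": "..."}`) and the `[DONE]` sentinel are assumptions here; verify them against the actual stream output:

```typescript
// Sketch of parsing an SSE text chunk into content deltas (illustrative).
// Assumes each event line looks like: data: {"content":"..."} and the stream
// ends with: data: [DONE] -- check these assumptions against the real API.

function parseSseChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip blank lines and comments
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    try {
      const event = JSON.parse(payload) as { content?: string };
      if (event.content) deltas.push(event.content);
    } catch {
      // Partial JSON can arrive across chunk boundaries; a real client
      // would buffer the incomplete line and retry on the next chunk.
    }
  }
  return deltas;
}
```

In a real client you would read the `fetch` response body with a `ReadableStream` reader, decode each chunk with `TextDecoder`, and feed it through a parser like this.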
Get all chats.
Create a new chat.
Get a specific chat.
Delete a chat.
Export a chat conversation in different formats.
Query Parameters:
- `format` (required): `pdf`, `markdown`, or `json`
Response:
- Returns the exported file as a download
Get available LLM providers.
Get provider configuration.
Get available models for a provider. Requires the provider to be configured in the `.env` file.
Development commands:

```bash
# Start the development server
npm run dev

# Build for production
npm run build

# Start the production server
npm start

# Generate Prisma Client
npm run prisma:generate

# Create a new migration
npm run prisma:migrate

# Open Prisma Studio (database GUI)
npm run prisma:studio

# Lint the codebase
npm run lint
```

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.
For quick start:

- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Next.js - The React framework
- Prisma - Next-generation ORM
- Tailwind CSS - Utility-first CSS framework
- React Markdown - Markdown rendering
- All LLM providers for their amazing APIs
- Contributing Guide - How to contribute to this project
- Changelog - Project changelog and version history
- Tools Integration - How tools are integrated with LLM providers
If you have any questions or need help, please:
- Open an issue on GitHub
- Check the documentation
Features planned for future releases:
- Voice input/output support - Send and receive voice messages
- Multi-language support - UI and AI support for multiple languages
- Plugin system - Build and register custom tools as plugins
- Export conversations - ✅ Available now! Export chats as PDF, Markdown, or JSON
- Collaborative chat rooms - Multiple users working in the same chat
- Custom model fine-tuning - Train models with your own datasets
- Advanced analytics dashboard - Usage statistics and analytics
For detailed information, see the Roadmap Documentation page.
Made with ❤️ by Senol Dogan