AI Chat Template

A modern, feature-rich AI chat interface template built with Next.js. Configure your own API keys for multiple LLM providers, customize the interface, and integrate it into your projects. Ideal for developers building AI-powered chat applications.

👤 Author

Senol Dogan
📧 Email: senoldogan02@hotmail.com
🔗 GitHub: @senoldogann

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

✨ Features

🤖 Multiple LLM Providers

  • OpenAI - GPT-4, GPT-4o, GPT-3.5-turbo, o1-preview, o1-mini
  • Anthropic - Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • Google - Gemini 2.0 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash
  • Hugging Face - Qwen3-Omni, Qwen2.5, Llama 3.1, Mistral, and more
  • Ollama - Local and cloud models (Llama 3.1, Mistral, CodeLlama, etc.)
  • OpenRouter - Access to multiple models through one API
  • QrokCloud - Custom provider support
  • GitHub Copilot - GitHub Copilot API integration

🎨 Modern UI/UX

  • Dark/Light Theme - Automatic theme switching with system preference support
  • Responsive Design - Works seamlessly on desktop, tablet, and mobile
  • Real-time Streaming - Live streaming responses with typing indicators
  • Message Editing - Edit and resend messages
  • Message Search - Full-text search across all conversations
  • Sidebar Navigation - Collapsible sidebar with chat history
  • Image Support - Upload and display images in chat
  • Markdown Rendering - Beautiful markdown rendering with syntax highlighting
  • Export Conversations - Export chats as PDF, Markdown, or JSON
  • Print Support - Print-friendly view for conversations

πŸ› οΈ Advanced Tools

  • Calculator - High-precision mathematical calculations
  • Web Search - DuckDuckGo integration for real-time web search
  • File Processing - Support for CSV, JSON, Excel, and text files
  • Financial APIs - Real-time financial data (stocks, crypto, forex)
  • Prompt Improvement - AI-powered prompt enhancement
  • Native Function Calling - AI can directly use tools during conversation

🔒 Security Features

  • Input Validation - Comprehensive input sanitization and validation
  • Prompt Injection Prevention - Advanced pattern detection and sanitization
  • CSRF Protection - Built-in CSRF token validation
  • Rate Limiting - Configurable rate limiting per user/IP
  • Security Headers - XSS, clickjacking, and other security headers
  • Environment Validation - Automatic environment variable validation

📊 Database & Storage

  • Prisma ORM - Type-safe database access
  • PostgreSQL - Production-ready database support
  • Message Persistence - All conversations saved to database
  • Chat Management - Create, delete, and manage multiple chats

🚀 Quick Start

Prerequisites

  • Node.js 18+ and npm/yarn/pnpm
  • PostgreSQL database (or use SQLite for development)
  • At least one LLM provider API key

Tech Stack:

  • Next.js 16.0.1
  • React 19.2.0
  • TypeScript 5
  • Prisma 6.19.0
  • Tailwind CSS 4

Installation

  1. Clone the repository

    git clone https://github.com/yourusername/ai-chat-template.git
    cd ai-chat-template
  2. Install dependencies

    npm install
    # or
    yarn install
    # or
    pnpm install
  3. Set up environment variables

    Create a .env file in the root directory. You can copy the example file:

    cp .env.example .env

    Then edit .env and fill in your values:

    # Database (Required)
    DATABASE_URL="postgresql://user:password@localhost:5432/ai_chat"
    
    # LLM Provider API Keys (Required - at least one provider must be configured)
    # Add API keys for the providers you want to use:
    OPENAI_API_KEY="sk-..."
    ANTHROPIC_API_KEY="sk-ant-..."
    GOOGLE_API_KEY="..."
    OLLAMA_API_KEY="..."  # Optional for local Ollama, required for cloud
    OPENROUTER_API_KEY="sk-or-..."
    QROKCLOUD_API_KEY="..."
    GITHUB_COPILOT_API_KEY="..."
    HUGGINGFACE_API_KEY="hf_..."  # or HF_API_KEY or HF_API

    Important:

    • API keys must be configured in the .env file; configuration through the UI is not available
    • At least one LLM provider must be configured with an API key
    • For Ollama local mode, you can omit OLLAMA_API_KEY (it will use http://localhost:11434)
    • For Ollama cloud mode, you must set OLLAMA_API_KEY (it will use https://ollama.com/api)
    • See .env.example for all available options and detailed descriptions
  4. Set up the database

    # Generate Prisma Client
    npm run prisma:generate
    
    # Run migrations
    npm run prisma:migrate
  5. Start the development server

    npm run dev
  6. Open your browser

    Navigate to http://localhost:3000

📖 Usage

Basic Chat

  1. Configure API keys in the .env file (see the Configuration section below)
  2. Select a provider from the top bar
  3. Select a model from the dropdown
  4. Type your message in the input field
  5. Press Enter or click Send
  6. The AI will respond with streaming text

Advanced Features

Image Upload

  • Click the + button in the input area
  • Select "Upload Image"
  • Choose an image file
  • The image will be displayed in the chat and sent to the AI

Web Search

  • Click the + button in the input area
  • Select "Web Search"
  • Type your search query
  • The AI will search the web and provide up-to-date information

Prompt Improvement

  • Click the + button in the input area
  • Select "Improve Prompt"
  • Type your prompt
  • The AI will suggest improvements

Message Editing

  • Click on any message you sent
  • Click the edit icon
  • Modify the message
  • Click "Resend" to send the updated message

Search Conversations

  • Click the search icon in the sidebar
  • Type your search query
  • Browse through matching messages

Export Conversations

  • Open a chat conversation
  • Click the download icon in the top bar
  • Select export format (PDF, Markdown, or JSON)
  • The file will be downloaded automatically

βš™οΈ Configuration

LLM Provider Configuration

All API keys must be configured in the .env file. UI configuration is not available.

Add provider configuration to your .env file:

# Provider selection
LLM_PROVIDER=openai

# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o
OPENAI_TEMPERATURE=0.7
OPENAI_MAX_TOKENS=2000

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022
ANTHROPIC_TEMPERATURE=0.7

# Google
GOOGLE_API_KEY=...
GOOGLE_MODEL=gemini-2.0-flash-exp
GOOGLE_TEMPERATURE=0.7

# Hugging Face
HF_API=hf_...
HUGGINGFACE_MODEL=Qwen/Qwen3-Omni-30B-A3B-Instruct

# Ollama (Local)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.1

# Ollama (Cloud)
OLLAMA_API_KEY=...
OLLAMA_BASE_URL=https://ollama.com/api
OLLAMA_MODEL=deepseek-v3.1:671b-cloud

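The Ollama local/cloud selection shown in the config blocks above can be sketched as follows. This is an illustrative helper, not the template's actual resolution code; the environment variable names match this README.

```typescript
// Illustrative sketch of the documented Ollama endpoint selection:
// an explicit OLLAMA_BASE_URL wins; otherwise an API key implies cloud,
// and no API key implies the local server.
function resolveOllamaBaseUrl(env: {
  OLLAMA_API_KEY?: string;
  OLLAMA_BASE_URL?: string;
}): string {
  if (env.OLLAMA_BASE_URL) return env.OLLAMA_BASE_URL;
  return env.OLLAMA_API_KEY ? "https://ollama.com/api" : "http://localhost:11434";
}
```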

Database Configuration

PostgreSQL (Production)

DATABASE_URL="postgresql://user:password@localhost:5432/ai_chat"

SQLite (Development)

DATABASE_URL="file:./dev.db"

Security Configuration

The template includes built-in security features. You can customize them in:

  • lib/security/validation.ts - Input validation rules
  • lib/prompt-sanitizer.ts - Prompt injection prevention
  • lib/security/rate-limiter.ts - Rate limiting configuration
  • middleware.ts - Security headers and CSRF protection
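As an illustration of the kind of policy lib/security/rate-limiter.ts can implement, here is a generic fixed-window limiter keyed by user ID or IP. This is a sketch only; the limit and window values are examples, not the template's defaults.

```typescript
// Generic fixed-window rate limiter keyed by user ID or IP.
// The limit and window values below are illustrative, not the template's defaults.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit = 60, private windowMs = 60_000) {}

  // Returns true if the request is allowed, false if the key is over its limit.
  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

A middleware would typically call allow() with the client IP and respond with HTTP 429 when it returns false.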

πŸ—οΈ Project Structure

ai-chat-template/
├── app/
│   ├── api/              # API routes
│   │   ├── chat/         # Chat API endpoints
│   │   ├── chats/        # Chat management
│   │   ├── llm/          # LLM provider configuration
│   │   └── tools/        # Tool endpoints
│   ├── components/       # React components
│   │   ├── Chat.tsx      # Main chat component
│   │   ├── InputArea.tsx # Input component
│   │   ├── MessageBubble.tsx # Message display
│   │   └── ...
│   ├── contexts/         # React contexts
│   └── types/            # TypeScript types
├── lib/
│   ├── llm/              # LLM provider implementations
│   │   └── providers/    # Individual provider files
│   ├── security/         # Security utilities
│   ├── tools/            # Tool implementations
│   └── utils/            # Utility functions
├── prisma/
│   ├── schema.prisma     # Database schema
│   └── migrations/       # Database migrations
└── public/               # Static assets

🔧 API Documentation

Chat API

POST /api/chat

Send messages to the AI.

Request Body:

{
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "chatId": "optional-chat-id",
  "provider": "openai",
  "model": "gpt-4o",
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": true
}

Response:

  • Streaming response (SSE format) if stream: true
  • Or JSON response if stream: false

Note: The messages array should contain the full conversation history. Each message must have role (user/assistant/system) and content fields.
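For example, a client-side call might look like the following sketch. The request fields match the body documented above; the stream handling is simplified to raw text accumulation, since the exact SSE event shape is not specified here.

```typescript
// Hypothetical client-side sketch for POST /api/chat; field names follow
// the request body documented above, default values are illustrative.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

function buildChatRequest(messages: ChatMessage[], model = "gpt-4o") {
  return {
    messages, // full conversation history
    provider: "openai",
    model,
    temperature: 0.7,
    max_tokens: 1000,
    stream: true,
  };
}

async function streamChat(messages: ChatMessage[]): Promise<string> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(messages)),
  });
  if (!res.ok || !res.body) throw new Error(`chat request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Raw SSE bytes; parse events according to your deployment's stream format.
    text += decoder.decode(value, { stream: true });
  }
  return text;
}
```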

Chat Management API

GET /api/chats

Get all chats.

POST /api/chats

Create a new chat.

GET /api/chats/[chatId]

Get a specific chat.

DELETE /api/chats/[chatId]

Delete a chat.
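A minimal client sketch for the chat-management endpoints above. The base URL is an assumption for local development, and response shapes depend on the deployment, so they are left as unknown here.

```typescript
// Hypothetical helpers for the chat-management endpoints above.
// BASE_URL is an assumption for local development.
const BASE_URL = "http://localhost:3000";

function chatUrl(chatId?: string): string {
  return chatId
    ? `${BASE_URL}/api/chats/${encodeURIComponent(chatId)}`
    : `${BASE_URL}/api/chats`;
}

async function createChat(): Promise<unknown> {
  const res = await fetch(chatUrl(), { method: "POST" });
  if (!res.ok) throw new Error(`POST /api/chats failed: ${res.status}`);
  return res.json();
}

async function deleteChat(chatId: string): Promise<void> {
  const res = await fetch(chatUrl(chatId), { method: "DELETE" });
  if (!res.ok) throw new Error(`DELETE /api/chats/${chatId} failed: ${res.status}`);
}
```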

GET /api/chats/[chatId]/export?format=pdf|markdown|json

Export a chat conversation in different formats.

Query Parameters:

  • format (required): pdf, markdown, or json

Response:

  • Returns the exported file as a download
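The export URL can be built like this. This is a hypothetical helper; the path and query parameter come from the endpoint documented above.

```typescript
// Hypothetical helper building the export URL documented above.
const EXPORT_FORMATS = ["pdf", "markdown", "json"] as const;
type ExportFormat = (typeof EXPORT_FORMATS)[number];

function exportUrl(chatId: string, format: string): string {
  if (!EXPORT_FORMATS.includes(format as ExportFormat)) {
    throw new Error(`unsupported export format: ${format}`);
  }
  return `/api/chats/${encodeURIComponent(chatId)}/export?format=${format}`;
}
```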

LLM Provider API

GET /api/llm/providers

Get available LLM providers.

GET /api/llm/config?provider=openai

Get provider configuration.

GET /api/llm/providers/[provider]/models

Get available models for a provider. Requires the provider to be configured in the .env file.

πŸ› οΈ Development

Running in Development Mode

npm run dev

Building for Production

npm run build
npm start

Database Management

# Generate Prisma Client
npm run prisma:generate

# Create a new migration
npm run prisma:migrate

# Open Prisma Studio (database GUI)
npm run prisma:studio

Linting

npm run lint

🤝 Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

For quick start:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

πŸ“š Documentation

πŸ“§ Support

If you have any questions or need help, please:

πŸ—ΊοΈ Roadmap

Features planned for future releases:

  • Voice input/output support - Send and receive voice messages
  • Multi-language support - UI and AI support for multiple languages
  • Plugin system for custom tools - Plugin system for custom tools
  • Export conversations - βœ… Export conversations as PDF, Markdown, or JSON (Available now!)
  • Collaborative chat rooms - Multiple users working in the same chat
  • Custom model fine-tuning - Train models with your own datasets
  • Advanced analytics dashboard - Usage statistics and analytics

For detailed information, see the Roadmap Documentation page.


Made with ❤️ by Senol Dogan
