Russian version: README_RU.md
ColAI is a fully offline platform for collaboration between multiple neural networks. The system lets several AI models communicate with each other, play games (such as Mafia), discuss projects, and work together using local Ollama models.
- Collaborative Mode: Up to 8 specialized neural networks work together on any topic
- Mafia Mode: AI players take part in a game of Mafia with realistic behavior
- Fully Offline: Everything works locally through Ollama, no dependency on external APIs
- Flexible Model Configuration: Choice of any Ollama model at startup
- Multimodality: Support for image and document uploads
- Live Chat: Dynamic communication in which networks can take the initiative and send short, fragmented messages
- OS: Windows 10/11, macOS 10.15+, Linux (Ubuntu 20.04+)
- RAM: 8 GB (16 GB recommended for large models)
- Storage: 20 GB free space (for models)
- CPU: Modern processor with AVX2 support
- GPU: Optional, but NVIDIA GPU with 6+ GB VRAM recommended for better performance
- RAM: 32 GB
- GPU: NVIDIA RTX 3060 or better (12+ GB VRAM)
- Storage: 50+ GB SSD
- Download Node.js from the official website
- Install the LTS version (18.x or higher recommended)
- Verify installation:
node --version
npm --version
Windows:
- Download the installer from ollama.ai
- Run the installer and follow instructions
- Ollama will be automatically added to PATH
macOS:
brew install ollama
# or download from ollama.ai
Linux:
curl -fsSL https://ollama.ai/install.sh | sh
Open a terminal and run:
ollama serve
Ollama will be available at http://localhost:11434
Important: Ollama must be running before using ColAI!
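A running instance can also be checked programmatically. The sketch below queries Ollama's `/api/tags` endpoint (part of Ollama's HTTP API); the function name and structure are illustrative, not part of ColAI:

```javascript
// Sketch: check whether a local Ollama server is reachable.
// Assumes Node 18+ (global fetch) or a modern browser.
async function isOllamaRunning(baseUrl = "http://localhost:11434") {
  try {
    // /api/tags lists locally installed models; any 2xx response
    // means the server is up.
    const res = await fetch(`${baseUrl}/api/tags`);
    return res.ok;
  } catch {
    // Connection refused or unreachable: Ollama is not running.
    return false;
  }
}
```

If this returns `false`, start the server with `ollama serve` and retry.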
Recommended models for ColAI:
# Main model (recommended)
ollama pull qwen2.5:14b
# Alternative models
ollama pull llama3.2:3b # Lightweight model for weak PCs
ollama pull deepseek-r1 # For analytical tasks
ollama pull gemma2:2b # For Mafia games
ollama pull mistral:7b # Universal model
Note: The qwen2.5:14b model requires ~8 GB RAM. For systems with less memory, use qwen2.5:7b or llama3.2:3b.
- Extract the project archive
- Open terminal in the project folder
- Install dependencies (if required):
npm install
Note: ColAI uses native ES modules and can work without npm by opening index.html directly in the browser. However, for better compatibility, using a local server is recommended.
To run via local server, from the project root:
# Using Python (if installed) — serve from ColAI-master
cd ColAI-master && python -m http.server 8000
# Or using Node.js http-server
cd ColAI-master && npx http-server -p 8000
# Or using PHP
cd ColAI-master && php -S localhost:8000
Then open in browser: http://localhost:8000
Alternative: Open ColAI-master/index.html directly in the browser (Chrome, Firefox, Edge).
- Ensure Ollama is running:
ollama serve
- Open ColAI in browser (ColAI-master/index.html or via local server)
- Configure model:
- In the "Ollama Model" field, enter the model name (e.g.: qwen2.5:14b)
- Click "Check Connection" to verify Ollama availability
- Ensure the model is downloaded:
ollama pull qwen2.5:14b
- Start working:
- Enter project name
- Describe discussion topic
- Configure parameters (temperature, tokens, etc.)
- Click "Start Collaboration"
- Project Setup: Enter project name, describe topic, upload files if needed (images, PDF, text)
- Network Selection: Select which neural networks participate (up to 8)
- Parameter Configuration: Temperature, Max Tokens, Top P, Iterations
- Start Discussion: Networks discuss in turns, summaries are created, and the process repeats
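The turn-taking described above can be pictured as a round-robin over the selected networks. This is a schematic sketch with illustrative names, not ColAI's actual internals:

```javascript
// Sketch: round-robin speaking order for a collaborative discussion.
// Each iteration lets every selected network speak once, then a
// summary step runs before the next iteration begins.
function discussionPlan(networks, iterations) {
  const plan = [];
  for (let i = 1; i <= iterations; i++) {
    for (const net of networks) {
      plan.push({ iteration: i, speaker: net });
    }
    plan.push({ iteration: i, speaker: "summary" });
  }
  return plan;
}

// Example: two networks, two iterations -> 6 steps total
// (2 speakers + 1 summary, twice).
const plan = discussionPlan(["analyst", "critic"], 2);
```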
- Navigate to Mafia mode through navigation menu
- Configure game: players (4–8), mafia count, discussion rounds, language
- Click "Start Game"
- Game proceeds through day and night phases
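The day/night cycle can be sketched as a simple alternating phase sequence. This is illustrative only; the real game logic lives in modules/mafiaMode.js:

```javascript
// Sketch: alternating day/night phases for a given number of rounds.
function phaseSequence(rounds) {
  const phases = [];
  for (let r = 1; r <= rounds; r++) {
    phases.push({ round: r, phase: "day" });   // discussion and voting
    phases.push({ round: r, phase: "night" }); // mafia acts secretly
  }
  return phases;
}

// phaseSequence(2) yields: day, night, day, night
```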
Examples: qwen2.5:14b, qwen2.5:7b, llama3.2:3b, deepseek-r1, mistral:7b
The model is saved in localStorage.
- System Prompt Template, Temperature (0.0–2.0), Max Tokens, Top P
- Presence Penalty, Frequency Penalty
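These UI settings map onto the `options` object of Ollama's generate endpoint. The option names (`temperature`, `num_predict`, `top_p`, `presence_penalty`, `frequency_penalty`) follow Ollama's API; the helper function itself and the field mapping from ColAI's UI are a sketch:

```javascript
// Sketch: build a request body for Ollama's POST /api/generate.
function buildGenerateRequest(model, prompt, settings) {
  return {
    model,
    prompt,
    stream: false,
    options: {
      temperature: settings.temperature,        // 0.0–2.0
      num_predict: settings.maxTokens,          // "Max Tokens" in the UI
      top_p: settings.topP,
      presence_penalty: settings.presencePenalty,
      frequency_penalty: settings.frequencyPenalty,
    },
  };
}

const body = buildGenerateRequest("qwen2.5:14b", "Summarize the discussion.", {
  temperature: 0.7, maxTokens: 512, topP: 0.9,
  presencePenalty: 0, frequencyPenalty: 0,
});
// body would be sent as JSON to http://localhost:11434/api/generate
```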
Ollama not connecting: Run ollama serve, check http://localhost:11434/api/tags
Model not found: ollama list, ollama pull <model_name>
Slow performance: Use smaller model, decrease max_tokens, use fewer networks
CORS errors: Serve the app via a local server instead of opening it with file://; if requests are still blocked, allow your origin with Ollama's OLLAMA_ORIGINS environment variable
ColAI-master/
├── app.js # Main application
├── index.html # HTML interface
├── styles.css # Styles
├── darkModeManager.js # Dark theme
└── modules/
├── framework.js # Main framework
├── networkManager.js
├── ollamaManager.js
├── mafiaMode.js
└── ...
See LICENSE file in project root.
- Check the Troubleshooting section
- Ensure Ollama is installed and running
- Check that the model is downloaded:
ollama list
- Check browser logs (F12 → Console)
Enjoy using ColAI! 🚀