
📦 Installation Guide

Comprehensive installation instructions for all platforms and scenarios.

Table of Contents

  1. System Requirements
  2. Platform-Specific Installation
  3. Dependency Installation
  4. Ollama Setup
  5. Environment Variables
  6. Docker Installation
  7. Verification

System Requirements

Minimum Requirements

Component   Specification
CPU         Dual-core 2.0 GHz
RAM         8 GB
Storage     10 GB free (models + indexes)
OS          Windows 10, macOS 10.15, Ubuntu 20.04
Internet    Only for initial setup

Recommended Requirements

Component   Specification
CPU         Quad-core 3.0 GHz+
RAM         16 GB+
GPU         NVIDIA GPU with 8GB+ VRAM (optional)
Storage     50 GB SSD
OS          Windows 11, macOS 13+, Ubuntu 22.04

GPU Acceleration (Optional)

For faster LLM inference, CodeScope supports GPU acceleration through Ollama:

  • NVIDIA GPUs: CUDA 11.8+ (automatic detection)
  • Apple Silicon: Metal (automatic detection)
  • AMD GPUs: ROCm support (experimental)
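
Once Ollama and a model are installed (see the sections below), you can confirm the GPU is actually in use; the exact output varies by Ollama version, so treat this as a quick heuristic:

# Load any installed model, then check which processor it is running on
ollama run llama3 "hello"
ollama ps          # recent Ollama releases report CPU vs GPU for loaded models
nvidia-smi         # NVIDIA only: the ollama process should appear with VRAM allocated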

Platform-Specific Installation

Windows 10/11

Step 1: Install Python 3.10+

Download from python.org and ensure "Add Python to PATH" is checked.

Verify installation:

python --version
# Output: Python 3.10.x or higher

Step 2: Install Node.js 18+

Download from nodejs.org (LTS version recommended).

Verify installation:

node --version  # Should be 18.x or higher
npm --version   # Should be 9.x or higher

Step 3: Install Git

Download from git-scm.com or use GitHub Desktop.

Step 4: Install Ollama

Download installer from ollama.com and run it.

Verify installation:

ollama --version

macOS (Intel & Apple Silicon)

Using Homebrew (Recommended)

# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install dependencies
brew install python node git   # the python formula provides the current 3.x release (3.10+ required); a pinned python@3.x formula also works

# Install Ollama
brew install ollama

Manual Installation

  1. Python: Download from python.org
  2. Node.js: Download from nodejs.org
  3. Ollama: Download installer from ollama.com

Verify installations:

python3 --version
node --version
ollama --version

Linux (Ubuntu/Debian)

# Update package list
sudo apt update

# Install Python 3.10+
sudo apt install python3.10 python3.10-venv python3-pip

# Install Node.js 18+ (via NodeSource)
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt install nodejs

# Install Git
sudo apt install git

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

Linux (Fedora/RHEL)

# Install Python
sudo dnf install python3 python3-pip   # the venv module ships with python3 on Fedora/RHEL

# Install Node.js
sudo dnf install nodejs npm

# Install Git
sudo dnf install git

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

Dependency Installation

Clone Repository

git clone https://github.com/Yigtwxx/CodeScope.git
cd CodeScope

Backend Dependencies

Windows

cd backend
python -m venv .venv
.venv\Scripts\activate
pip install --upgrade pip
pip install -r requirements.txt

macOS/Linux

cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

Dependencies Installed:

  • FastAPI 0.115.12 - Web framework
  • LangChain 0.3.20 - RAG orchestration
  • ChromaDB 0.5.26 - Vector database
  • Sentence-Transformers 3.4.1 - Embeddings
  • Uvicorn 0.37.0 - ASGI server
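
A quick sanity check from inside the activated virtual environment, confirming the key packages import:

python -c "import fastapi, langchain, chromadb, sentence_transformers, uvicorn; print('backend dependencies OK')"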

Frontend Dependencies

cd frontend
npm install

Dependencies Installed:

  • Next.js 16.0.10 - React framework
  • React 19.2.1 - UI library
  • Tailwind CSS 4.0 - Styling
  • Shadcn/UI - Component library
  • React Markdown - Markdown rendering
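
As a quick check that the key packages resolved, list them from the frontend directory:

npm ls next react --depth=0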

Ollama Setup

Pull LLM Models

CodeScope works with any Ollama model. Choose based on your needs:

General Purpose Models

# Llama 3 (Recommended, 4.7GB)
ollama pull llama3

# Llama 3.1 (Latest, 8.5GB, higher quality)
ollama pull llama3.1

# Mistral (Fast, 4.1GB)
ollama pull mistral

Code-Specialized Models

# CodeLlama (Python, JS, C++, 3.8GB)
ollama pull codellama

# DeepSeek Coder (Multi-language, 6.7GB)
ollama pull deepseek-coder

# StarCoder (Code generation, 15GB)
ollama pull starcoder

Small/Fast Models (for low-RAM systems)

# Phi-2 (2.7GB)
ollama pull phi

# TinyLlama (637MB)
ollama pull tinyllama

Verify Ollama is Running

# List installed models
ollama list

# Test a model
ollama run llama3 "Hello, test"

Configure Default Model

CodeScope talks to the model named by its OLLAMA_MODEL setting (llama3 by default). To change models:

# Option 1: Environment variable (temporary)
export OLLAMA_MODEL=codellama
uvicorn main:app --reload

# Option 2: Edit backend/app/core/config.py
# Change OLLAMA_MODEL setting
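
A third option, assuming the backend loads backend/.env at startup (covered in the next section), is to persist the choice there:

# Option 3 (sketch): add or edit this line in backend/.env
OLLAMA_MODEL=codellama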

Environment Variables

Create a .env file in the backend/ directory for custom configuration:

# backend/.env

# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3

# Server Configuration
API_V1_STR=/api/v1
PROJECT_NAME=CodeScope
VERSION=0.1.0

# ChromaDB Configuration
CHROMA_DB_DIR=./chroma_db

# Ingestion Settings
CHUNK_SIZE=1000
CHUNK_OVERLAP=200
MAX_FILES_TO_PROCESS=10000

# Logging
LOG_LEVEL=INFO

Configuration Options:

Variable          Description               Default
OLLAMA_BASE_URL   Ollama API endpoint       http://localhost:11434
OLLAMA_MODEL      LLM model to use          llama3
CHUNK_SIZE        Code chunk size (chars)   1000
CHUNK_OVERLAP     Overlap between chunks    200
LOG_LEVEL         Logging verbosity         INFO
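
Any of these can also be overridden for a single run without editing .env, assuming a POSIX shell and that the backend reads them at startup (as the export example above suggests):

# One-off override with example values
LOG_LEVEL=DEBUG CHUNK_SIZE=1500 uvicorn main:app --reload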

Docker Installation (Optional)

For containerized deployment:

Using Docker Compose

Create docker-compose.yml:

version: '3.8'

services:
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
      - chroma-data:/app/chroma_db
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
    environment:
      - NEXT_PUBLIC_API_URL=http://localhost:8000

volumes:
  chroma-data:

Run:

docker-compose up -d

Note: Ollama must run on the host machine (not containerized) for GPU access.
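
To confirm the stack came up before moving on to verification, the standard Compose commands are enough:

docker-compose ps                 # both services should show as running
docker-compose logs -f backend    # follow backend startup logs (Ctrl+C to stop)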


Verification

Backend Health Check

curl http://localhost:8000/health

Expected output:

{
  "status": "healthy",
  "service": "CodeScope",
  "version": "0.1.0"
}

Frontend Access

Visit http://localhost:3000 - you should see the CodeScope UI.

Ollama Connection Test

curl http://localhost:11434/api/tags

This should return a list of installed models.

Full System Test

  1. Start backend and frontend
  2. Open Settings in UI
  3. Ingest a small test repository
  4. Ask: "What files are in this project?"
  5. Verify you receive an AI response
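
The UI walkthrough above is the authoritative check. For a quick scripted pass over the moving parts first, assuming the backend was started with uvicorn from backend/ and the frontend with npm run dev from frontend/ on the default ports:

curl -s http://localhost:8000/health                                      # backend healthy
curl -s -o /dev/null -w "frontend: %{http_code}\n" http://localhost:3000  # expect 200
curl -s http://localhost:11434/api/tags                                   # Ollama reachable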

Troubleshooting Installation

Python Issues

Problem: python: command not found
Solution: Install Python and add to PATH

Problem: Permission denied when installing packages
Solution: Use virtual environment (recommended) or pip install --user

Node.js Issues

Problem: npm ERR! ERESOLVE
Solution: npm install --legacy-peer-deps

Problem: Port 3000 already in use
Solution: npm run dev -- -p 3001 (use different port)

Ollama Issues

Problem: Ollama command not found
Solution: Restart terminal after installation

Problem: Out of memory during model inference
Solution: Use smaller model (phi, tinyllama)

Problem: Slow responses
Solution: Enable GPU acceleration or reduce context size

For more issues, see Troubleshooting.


Next Steps

Installation complete! 🎉
