
🚀 Getting Started with CodeScope

This guide will help you get CodeScope up and running in under 10 minutes.

Prerequisites Check

Before you begin, ensure you have the following installed:

Required Software

| Software | Minimum Version | Check Command    | Download Link        |
|----------|-----------------|------------------|----------------------|
| Python   | 3.10+           | python --version | python.org           |
| Node.js  | 18+             | node --version   | nodejs.org           |
| npm      | 9+              | npm --version    | (comes with Node.js) |
| Ollama   | Latest          | ollama --version | ollama.com           |
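
To confirm everything at once, the check commands from the table above can be run back to back in any terminal:

# Quick prerequisite check
python --version    # expect 3.10 or newer (use python3 on macOS/Linux)
node --version      # expect v18 or newer
npm --version       # expect 9 or newer
ollama --version    # any recent version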

System Requirements

  • RAM: 8GB minimum (16GB recommended)
  • Storage: 10GB free space (for models and indexes)
  • OS: Windows 10/11, macOS 10.15+, or Linux
  • GPU: Optional (speeds up LLM inference)

Step 1: Install Ollama

Ollama is the LLM runtime that powers CodeScope's AI capabilities.

Windows/Mac

  1. Download from ollama.com
  2. Run the installer
  3. Verify installation:
ollama --version

Linux

curl -fsSL https://ollama.com/install.sh | sh
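
After the script finishes, verify the install from the same terminal. On most distributions the script also registers a systemd service, which you can inspect if needed:

ollama --version

# Optional, on systemd-based distros only
systemctl status ollama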

Pull an LLM Model

CodeScope works with any Ollama-compatible model. We recommend starting with Llama 3:

# Recommended: Llama 3 (4.7GB)
ollama pull llama3

# Alternative: CodeLlama for code-specific tasks (3.8GB)
ollama pull codellama

# Alternative: Mistral (4.1GB)
ollama pull mistral

Note: The first pull will download several GB. Subsequent models share layers and download faster.
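
To confirm a model downloaded correctly, list your local models and send a short test prompt (standard Ollama CLI commands):

# List locally available models
ollama list

# Send a quick test prompt; the model should reply
ollama run llama3 "Reply with the word OK"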

Step 2: Clone the Repository

git clone https://github.com/Yigtwxx/CodeScope.git
cd CodeScope

Alternative: Download ZIP from GitHub and extract.

Step 3: Backend Setup

The backend handles code ingestion and the RAG pipeline.

Windows

cd backend

# Create virtual environment
python -m venv .venv

# Activate virtual environment
.venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start backend server
uvicorn main:app --reload --host 0.0.0.0 --port 8000

macOS/Linux

cd backend

# Create virtual environment
python3 -m venv .venv

# Activate virtual environment
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start backend server
uvicorn main:app --reload --host 0.0.0.0 --port 8000
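
To confirm the backend is up, probe it from a second terminal. Any HTTP status code in the output means the server is listening; if FastAPI's default interactive docs are enabled for this app, you can also open http://localhost:8000/docs in a browser.

# In a second terminal: prints an HTTP status code if the server is listening
curl -s -o /dev/null -w "backend: %{http_code}\n" http://localhost:8000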

Step 4: Frontend Setup

Open a new terminal (keep the backend running):

cd frontend

# Install dependencies
npm install

# Start development server
npm run dev

Verify Frontend

Open http://localhost:3000 in your browser. You should see the CodeScope interface with:

  • Dark mode UI with circuit board background
  • Empty chat interface
  • Settings icon in the top right

Step 5: Index Your First Repository

  1. Click the ⚙️ Settings icon (top right)
  2. In the "Repository Path" field, enter the absolute path to a local code repository:
    • Windows example: C:\Users\YourName\Projects\my-project
    • Mac/Linux example: /Users/yourname/projects/my-project
  3. Click "Ingest Repository"
  4. Wait for the ingestion process to complete (you'll see a progress indicator)

Tip: Start with a small repository (< 100 files) for your first test.
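
If you are not sure how big a repository is, a rough file count helps (macOS/Linux shown; the count is only a ballpark, since CodeScope may filter some files during ingestion):

# Rough file count for a repository, excluding the .git folder
find /path/to/your/repo -type f -not -path "*/.git/*" | wc -l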

Step 6: Start Chatting!

Once ingestion completes, try asking questions like:

  • "What does this project do?"
  • "How does authentication work?"
  • "Show me the API endpoints"
  • "Explain the database schema"
  • "Find the login function"

✅ Success Checklist

After completing these steps, you should have:

  • Ollama installed and running
  • At least one LLM model pulled (llama3, codellama, etc.)
  • Backend server running on http://localhost:8000
  • Frontend server running on http://localhost:3000
  • A repository successfully indexed
  • Received AI responses to your questions

🐛 Something Not Working?

If you encounter issues:

  1. Backend won't start: Check Python version (python --version should be 3.10+)
  2. Frontend won't start: Check Node.js version (node --version should be 18+)
  3. Ollama not found: Restart your terminal after installation
  4. No AI responses: Ensure Ollama is running (ollama list to verify)
  5. Ingestion fails: Check repository path is correct and accessible

For detailed troubleshooting, see the Troubleshooting page.

🎓 Next Steps

Now that you're up and running, explore the rest of this wiki for deeper topics and try the quick tips below.

🎯 Quick Tips

  • Switch LLM models: Stop the backend, run export OLLAMA_MODEL=codellama, then restart the backend (see the example after this list)
  • Clear database: Delete backend/chroma_db/ folder to start fresh
  • Multiple repositories: Re-run ingestion with different paths (clears previous index)
  • Performance: Use smaller models (e.g., phi) on low-RAM machines
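
For example, switching to codellama and starting with a clean index might look like this. This is a sketch based on the tips above: export applies to macOS/Linux (the commented PowerShell line is the Windows equivalent), and paths assume you are in the repository root.

# Stop the running backend with Ctrl+C, then:

# Select a different Ollama model (macOS/Linux)
export OLLAMA_MODEL=codellama
# Windows PowerShell equivalent:
# $env:OLLAMA_MODEL = "codellama"

# Optional: clear the existing index to start fresh
rm -rf backend/chroma_db

# Restart the backend
cd backend
uvicorn main:app --reload --host 0.0.0.0 --port 8000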

Need Help? Visit the FAQ or open an issue.
