Getting Started
This guide will help you get CodeScope up and running in under 10 minutes.
Before you begin, ensure you have the following installed:
| Software | Minimum Version | Check Command | Download Link |
|---|---|---|---|
| Python | 3.10+ | `python --version` | python.org |
| Node.js | 18+ | `node --version` | nodejs.org |
| npm | 9+ | `npm --version` | (Comes with Node.js) |
| Ollama | Latest | `ollama --version` | ollama.com |
System requirements:

- RAM: 8GB minimum (16GB recommended)
- Storage: 10GB free space (for models and indexes)
- OS: Windows 10/11, macOS 10.15+, or Linux
- GPU: Optional (speeds up LLM inference)
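If you want to verify these prerequisites in one pass, a small script like the sketch below shells out to the check commands from the table above (it assumes each tool is already on your PATH):

```python
# Sketch: run the version-check commands from the prerequisites table.
# Assumes each tool is on your PATH; interpret the versions yourself.
import shutil
import subprocess

CHECKS = {
    "Python": ["python", "--version"],
    "Node.js": ["node", "--version"],
    "npm": ["npm", "--version"],
    "Ollama": ["ollama", "--version"],
}

for name, cmd in CHECKS.items():
    exe = shutil.which(cmd[0])  # resolves .cmd/.exe wrappers on Windows too
    if exe is None:
        print(f"{name}: NOT FOUND")
        continue
    result = subprocess.run([exe, *cmd[1:]], capture_output=True, text=True)
    print(f"{name}: {(result.stdout or result.stderr).strip()}")
```

Note that on some systems the Python 3 interpreter is exposed as `python3` rather than `python`, as reflected in the macOS/Linux setup commands later in this guide.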
Ollama is the LLM runtime that powers CodeScope's AI capabilities.
- Download from ollama.com
- Run the installer
- Verify installation:
```
ollama --version
```

On Linux, Ollama can be installed from the terminal with the official install script:

```
curl -fsSL https://ollama.com/install.sh | sh
```

CodeScope works with any Ollama-compatible model. We recommend starting with Llama 3:
```
# Recommended: Llama 3 (4.7GB)
ollama pull llama3

# Alternative: CodeLlama for code-specific tasks (3.8GB)
ollama pull codellama

# Alternative: Mistral (4.1GB)
ollama pull mistral
```

Note: The first pull will download several GB. Subsequent models share layers and download faster.
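Once a model has been pulled, you can sanity-check it without CodeScope by calling Ollama's local HTTP API, which listens on http://localhost:11434 by default. A minimal sketch, assuming the `llama3` model from above:

```python
# Sketch: ask the locally pulled model a trivial question through Ollama's
# HTTP API (default address http://localhost:11434). Requires Ollama to be
# running and the model below to have been pulled.
import json
import urllib.request

payload = {
    "model": "llama3",          # or codellama / mistral, whichever you pulled
    "prompt": "Reply with the single word: ready",
    "stream": False,            # return one JSON object instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.loads(resp.read())["response"])
```

If this prints a reply, Ollama and the model are working independently of CodeScope.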
Next, clone the CodeScope repository:

```
git clone https://github.com/Yigtwxx/CodeScope.git
cd CodeScope
```

Alternative: Download ZIP from GitHub and extract.
The backend handles code ingestion and the RAG pipeline.
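If you are curious what a RAG pipeline looks like in practice, the sketch below illustrates the general idea with Chroma (the vector store whose `backend/chroma_db/` folder appears later in this guide): index a few code snippets, retrieve the ones relevant to a question, and assemble a prompt for the LLM. This is a simplified illustration, not CodeScope's actual code, and it needs the `chromadb` package installed.

```python
# Illustration only (not CodeScope's implementation): a minimal
# retrieval-augmented generation step using Chroma as the vector store.
import chromadb

client = chromadb.PersistentClient(path="./demo_chroma_db")  # throwaway index
collection = client.get_or_create_collection("demo_code")

# "Ingestion": store a couple of code snippets under stable IDs.
collection.add(
    ids=["auth.py", "routes.py"],
    documents=[
        "def login(user, password): ...  # checks credentials against the DB",
        "app.add_route('/api/items', list_items)  # registers an endpoint",
    ],
)

# "Retrieval": find the snippet most relevant to the question.
question = "How does authentication work?"
hits = collection.query(query_texts=[question], n_results=1)
context = "\n".join(hits["documents"][0])

# "Generation": the context plus the question becomes the prompt for the LLM.
prompt = f"Answer using this code context:\n{context}\n\nQuestion: {question}"
print(prompt)
```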
On Windows:

```
cd backend

# Create virtual environment
python -m venv .venv

# Activate virtual environment
.venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start backend server
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

On macOS/Linux:

```
cd backend

# Create virtual environment
python3 -m venv .venv

# Activate virtual environment
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start backend server
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

Open a new terminal (keep the backend running):
```
cd frontend

# Install dependencies
npm install

# Start development server
npm run dev
```

Open http://localhost:3000 in your browser. You should see the CodeScope interface with:
- Dark mode UI with circuit board background
- Empty chat interface
- Settings icon in the top right
- Click the ⚙️ Settings icon (top right)
- In the "Repository Path" field, enter the absolute path to a local code repository:
  - Windows example: `C:\Users\YourName\Projects\my-project`
  - Mac/Linux example: `/Users/yourname/projects/my-project`
- Click "Ingest Repository"
- Wait for the ingestion process to complete (you'll see a progress indicator)
Tip: Start with a small repository (< 100 files) for your first test.
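If you're not sure how big a repository is, a quick count like this sketch helps; the path is a placeholder, and the skipped directories are just common ones you would normally not want indexed:

```python
# Sketch: count files in a repository before ingesting it, skipping a few
# directories that are usually not worth indexing.
from pathlib import Path

repo = Path("/Users/yourname/projects/my-project")  # replace with your path
skip = {".git", "node_modules", ".venv", "__pycache__"}

count = sum(
    1
    for p in repo.rglob("*")
    if p.is_file() and not any(part in skip for part in p.parts)
)
print(f"{count} files in {repo}")
```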
Once ingestion completes, try asking questions like:
- "What does this project do?"
- "How does authentication work?"
- "Show me the API endpoints"
- "Explain the database schema"
- "Find the login function"
After completing these steps, you should have:
- Ollama installed and running
- At least one LLM model pulled (llama3, codellama, etc.)
- Backend server running on http://localhost:8000
- Frontend server running on http://localhost:3000
- A repository successfully indexed
- Received AI responses to your questions
If you encounter issues:
- Backend won't start: Check Python version (`python --version` should be 3.10+)
- Frontend won't start: Check Node.js version (`node --version` should be 18+)
- Ollama not found: Restart your terminal after installation
- No AI responses: Ensure Ollama is running (`ollama list` to verify)
- Ingestion fails: Check repository path is correct and accessible
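To narrow down which piece is failing, you can also probe each service directly. The sketch below assumes the default ports used in this guide plus Ollama's default API port 11434; any HTTP response (even an error status) means the service is at least listening:

```python
# Sketch: check that Ollama, the backend, and the frontend are reachable on
# their default ports. Any HTTP response means the service is listening.
import urllib.error
import urllib.request

SERVICES = {
    "Ollama API": "http://localhost:11434",
    "Backend (uvicorn)": "http://localhost:8000",
    "Frontend (dev server)": "http://localhost:3000",
}

for name, url in SERVICES.items():
    try:
        urllib.request.urlopen(url, timeout=3)
        print(f"{name}: OK")
    except urllib.error.HTTPError as exc:      # server answered with an error code
        print(f"{name}: responding (HTTP {exc.code})")
    except (urllib.error.URLError, OSError):   # nothing listening or refused
        print(f"{name}: NOT reachable")
```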
For detailed troubleshooting, see the Troubleshooting page.
Now that you're up and running:
- Read the User Guide for advanced features
- Learn about Configuration options
- Explore the Architecture to understand how it works
- Check Best Practices for optimal usage
- Switch LLM models: Stop the backend, run `export OLLAMA_MODEL=codellama`, and restart it (see the sketch after this list for a scripted alternative)
- Clear database: Delete the `backend/chroma_db/` folder to start fresh
- Multiple repositories: Re-run ingestion with different paths (clears the previous index)
- Performance: Use smaller models (e.g., `phi`) on low-RAM machines
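For the model-switching tip, `export` only works in Unix-style shells. As an alternative that also works on Windows, you can set the variable when launching the backend from Python; this sketch assumes the backend reads the `OLLAMA_MODEL` environment variable as described above and is run from the `backend/` directory with the virtual environment active:

```python
# Sketch: start the backend with OLLAMA_MODEL overridden, instead of using
# `export`. Run from the backend/ directory with the virtualenv active.
import os
import subprocess

env = dict(os.environ, OLLAMA_MODEL="codellama")  # any model you have pulled
subprocess.run(
    ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"],
    env=env,
    check=True,
)
```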