
🚀 Deployment Guide

While CodeScope is primarily designed for local use, it can be deployed to a server for shared access within a private network.

Deployment Strategies

1. Local Network Server (Shared)

You can run CodeScope on a dedicated machine in your local network.

  • Backend: Bind to 0.0.0.0 instead of localhost (see the sketch after this list).
  • Frontend: Set NEXT_PUBLIC_API_URL to the server's IP before building; Next.js inlines NEXT_PUBLIC_ variables at build time, so a rebuild is required after changing it.
  • Access: Other team members can access it via http://server-ip:3000.
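
The exact commands depend on how CodeScope is served; as a minimal sketch, assuming a uvicorn-style backend entry point (main:app is a placeholder, not taken from this wiki) and a standard Next.js frontend, with 192.168.1.50 standing in for the server's IP:

# Backend: listen on all interfaces instead of localhost only
uvicorn main:app --host 0.0.0.0 --port 8000

# Frontend: Next.js inlines NEXT_PUBLIC_* variables at build time,
# so set the API URL before building, then start the production server
NEXT_PUBLIC_API_URL=http://192.168.1.50:8000 npm run build
npm start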

2. Docker Compose (Recommended for Servers)

Use Docker Compose for a consistent environment across machines.

# docker-compose.yml
services:
  backend:
    build: ./backend
    ports: ["8000:8000"]
  frontend:
    build: ./frontend
    ports: ["3000:3000"]

Note: Ensure the host has Ollama installed and reachable from inside the containers; one hedged way to wire this up is sketched below.
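
If the backend runs in a container while Ollama runs on the host, localhost inside the container will not reach it. A minimal sketch of one way to bridge this, assuming the backend honors the standard OLLAMA_HOST environment variable (an assumption about CodeScope's client code, not something this wiki documents):

# Hypothetical additions to the backend service in docker-compose.yml
  backend:
    build: ./backend
    ports: ["8000:8000"]
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434  # Ollama's default port
    extra_hosts:
      - "host.docker.internal:host-gateway"  # makes the host reachable on Linux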

Security Considerations

1. Access Control

CodeScope currently has no built-in authentication, so access must be restricted at the network or proxy layer:

  • VPN: Only allow access via a corporate VPN.
  • Reverse Proxy: Use Nginx with Basic Auth or an OAuth proxy (like Authelia) for an extra layer of security (a configuration sketch follows this list).
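
A minimal sketch of the Nginx option; the hostname, file paths, and the /api/ routing split are illustrative assumptions rather than CodeScope's documented layout:

# /etc/nginx/conf.d/codescope.conf (illustrative)
server {
    listen 80;
    server_name codescope.internal;

    auth_basic           "CodeScope";
    auth_basic_user_file /etc/nginx/.htpasswd;  # create with: htpasswd -c /etc/nginx/.htpasswd <user>

    # Frontend
    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    # Backend API (assumes the frontend calls the backend under /api/)
    location /api/ {
        proxy_pass http://127.0.0.1:8000/;
    }
}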

2. Model Hosting

  • Ollama: If deploying to a server without a GPU, model inference will be slow. A dedicated GPU (NVIDIA) is highly recommended for multi-user scenarios.
  • Network Isolation: Ensure the backend can only talk to Ollama and the vector database (a sketch follows this list).
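
With Docker Compose, one hedged way to approximate this is an internal network; the ollama service below is an assumption for illustration (the compose file above runs Ollama on the host instead):

# Hypothetical sketch: the backend reaches Ollama only over an internal network
networks:
  llm:
    internal: true   # containers on this network get no outbound internet access
services:
  backend:
    build: ./backend
    networks: [default, llm]   # default: reachable by the frontend; llm: reaches Ollama
  ollama:
    image: ollama/ollama
    networks: [llm]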

Resource Allocation

| Component | Minimum   | Recommended                             |
| --------- | --------- | --------------------------------------- |
| CPU cores | 4         | 8+                                       |
| RAM       | 16 GB     | 32 GB+                                   |
| GPU       | Optional  | NVIDIA RTX 3060 or better (12 GB VRAM)   |
| Storage   | 50 GB SSD | 200 GB SSD (for many models/repos)       |

Maintenance

  • Cleaning Database: Periodically clear the chroma_db directory if ingestion starts failing or slowing down; repositories must be re-ingested afterwards (see the sketch after this list).
  • Updating Models: Run ollama pull [model] to update to the latest versions.
  • Logs: Monitor system logs to track resource usage and potential errors.
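
A short maintenance sketch; the chroma_db path and the model name are placeholders to adjust for your installation:

# Stop the backend first, then clear the vector store
rm -rf ./backend/chroma_db   # repositories must be re-ingested afterwards

# Pull the latest version of a local model
ollama pull llama3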
