# Deployment
While CodeScope is primarily designed for local use, it can be deployed to a server for shared access within a private network.

## Option 1: Local Network Deployment

Run CodeScope on a dedicated machine in your local network:
- **Backend:** Bind to `0.0.0.0` instead of `localhost` (see the sketch below).
- **Frontend:** Update `NEXT_PUBLIC_API_URL` to the server's IP.
- **Access:** Other team members can reach the app at `http://<server-ip>:3000`.
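As a concrete sketch, assuming the backend is a FastAPI app served by uvicorn (the module path `app.main:app` is a placeholder), the frontend uses the standard Next.js scripts, and `192.168.1.50` stands in for your server's address:

```bash
# Backend: listen on all interfaces instead of loopback only.
uvicorn app.main:app --host 0.0.0.0 --port 8000

# Frontend: point browser-side API calls at the server's IP, then rebuild,
# since NEXT_PUBLIC_* variables are inlined into the bundle at build time.
echo "NEXT_PUBLIC_API_URL=http://192.168.1.50:8000" > frontend/.env.local
(cd frontend && npm run build && npm run start)
```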
## Option 2: Docker Deployment

Use Docker for a consistent environment:
```yaml
# docker-compose.yml
services:
  backend:
    build: ./backend
    ports: ["8000:8000"]
  frontend:
    build: ./frontend
    ports: ["3000:3000"]
```

**Note:** Ensure the host has Ollama installed and accessible.
## Security Considerations

CodeScope currently does not have built-in authentication, so restrict access at the network layer:
- **VPN:** Only allow access via a corporate VPN.
- **Reverse Proxy:** Use Nginx with Basic Auth or an OAuth proxy (such as Authelia) for an extra layer of security; see the sketch after this list.
- **Ollama:** If deploying to a server without a GPU, model inference will be slow. A dedicated NVIDIA GPU is highly recommended for multi-user scenarios.
- **Network Isolation:** Ensure the backend can only talk to Ollama and the vector database.
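For the reverse-proxy option, here is a minimal Nginx sketch that puts Basic Auth in front of the frontend; the server name and htpasswd path are placeholders, and the upstream port matches the compose file above:

```nginx
# /etc/nginx/conf.d/codescope.conf (sketch)
server {
    listen 80;
    server_name codescope.internal;   # placeholder hostname

    location / {
        # Create the credentials file with: htpasswd -c /etc/nginx/.htpasswd <user>
        auth_basic           "CodeScope";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass         http://127.0.0.1:3000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
    }
}
```

Add TLS with your usual certificate tooling before exposing this beyond the local network.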
## Hardware Requirements

| Component | Minimum | Recommended |
|---|---|---|
| Core Count | 4 Cores | 8 Cores+ |
| RAM | 16 GB | 32 GB+ |
| GPU | Optional | NVIDIA RTX 3060+ (12GB VRAM) |
| Storage | 50 GB SSD | 200 GB SSD (for many models/repos) |
## Maintenance

- **Cleaning Database:** Periodically clear the `chroma_db` directory if ingestion starts failing or slowing down; see the sketch after this list.
- **Updating Models:** Run `ollama pull [model]` to update to the latest versions.
- **Logs:** Monitor system logs to track resource usage and potential errors.
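A sketch of the cleanup routine, assuming the vector store lives at `backend/chroma_db` (adjust the path to your setup) and the stack runs under Docker Compose:

```bash
# Stop the backend first so ChromaDB's files are not in use.
docker compose stop backend

# Remove the vector store; it is rebuilt on the next ingestion run.
rm -rf ./backend/chroma_db

# Refresh model weights while the backend is down (model name is an example).
ollama pull llama3

docker compose start backend
```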
## Related Pages

- Configuration details: [[Configuration]]
- Troubleshooting: [[Troubleshooting]]
- Architecture: [[Architecture]]