Troubleshooting
This guide addresses common issues you might encounter while installing or using CodeScope.
Symptom: "Ollama is not connected" or "Model not found" errors in the UI. Solution:
- Ensure Ollama is running (
ollama serveor check system tray). - Verify you have pulled the model (
ollama pull llama3). - Check
OLLAMA_BASE_URLin your.envfile. - If running in Docker, use
host.docker.internalinstead oflocalhost.
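To confirm the connection manually, the commands below are a minimal sketch, assuming Ollama is listening on its default port 11434; adjust the URL if your `OLLAMA_BASE_URL` differs.

```bash
# List the models Ollama has pulled locally
ollama list

# Confirm the Ollama HTTP API is reachable (default port 11434)
curl http://localhost:11434/api/tags

# From inside a Docker container, target the host instead of localhost
curl http://host.docker.internal:11434/api/tags
```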
Symptom: Progress bar stops or an "Internal Server Error" occurs during ingestion.
Solution:
- Check that the path you provided is an absolute path and that it exists (see the checks below).
- Ensure the backend has read permissions for the directory.
- For very large repositories, try reducing `MAX_FILES_TO_PROCESS` in `backend/app/core/config.py`.
- Check the backend logs for specific Python errors.
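As a quick sanity check before re-running ingestion, the commands below are a sketch assuming a Unix-like shell; `/absolute/path/to/repo` is a placeholder for the path you passed to CodeScope.

```bash
# Resolve the path you entered to its absolute form
realpath /absolute/path/to/repo

# Confirm the directory exists and is readable by the user running the backend
ls -ld /absolute/path/to/repo
```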
Symptom: Blank page or "Connection Refused" at `localhost:3000`.
Solution:
- Ensure the frontend dev server is running (`npm run dev`).
- Check whether port 3000 is occupied by another process (see the commands below).
- Verify that `NEXT_PUBLIC_API_URL` points to the correct backend address.
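The commands below are a sketch for Linux/macOS; on Windows, `netstat -ano` serves the same purpose as `lsof`. The backend URL `http://localhost:8000` is only an assumed default; substitute whatever you set in `NEXT_PUBLIC_API_URL`.

```bash
# Show which process, if any, is listening on port 3000
lsof -i :3000

# Check that the backend address from NEXT_PUBLIC_API_URL responds
# (http://localhost:8000 is an assumed default, not a confirmed setting)
curl -I http://localhost:8000
```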
Symptom: The AI gives irrelevant or "I don't know" answers.
Solution:
- Re-ingest the repository to ensure the index is fresh.
- Try a larger model (e.g., `llama3.1`) if your hardware allows; see the sketch below.
- Be more specific in your query.
- Check that the files you are asking about have supported extensions.
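If you want to try a larger model, the commands below are a minimal sketch; `llama3.1` is the example named above, and you still need to select the model in CodeScope afterwards.

```bash
# Pull a larger model (requires more RAM/VRAM)
ollama pull llama3.1

# Confirm it is now available locally
ollama list
```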
Symptom: The computer slows down significantly during chat or ingestion.
Solution:
- Use a smaller model such as `phi` or `tinyllama` (see below).
- Reduce `CHUNK_SIZE` in the configuration.
- Ensure you are not indexing unnecessarily large folders such as `node_modules`.
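Switching to a lighter model is done through Ollama, as sketched below; `phi` and `tinyllama` are the models mentioned above. Lowering `CHUNK_SIZE` and excluding folders like `node_modules` are backend configuration changes, not shell commands.

```bash
# Pull a smaller, lighter model
ollama pull phi
# or
ollama pull tinyllama
```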
If your issue isn't listed here:
- Check Logs: Look at the terminal output for both frontend and backend.
- Open an Issue: Visit the Issues page and provide as much detail as possible.
- Check FAQ: See the FAQ for general questions.
- Configuration: see the Configuration page.
- Best Practices: see the Best Practices page.
- Architecture: see the Architecture page.