# Best Practices
Maximize the efficiency and accuracy of CodeScope with these recommended patterns.
## Clean Your Repository Before Ingesting

Before ingesting, ensure your repository is clean of large, irrelevant files.

- Delete or ignore `node_modules`, `dist`, `build`, and `.git` directories.
- Large binary files or massive datasets will slow down ingestion and decrease search relevance.
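As a quick sanity check before ingesting, you can scan for oversized files with a short script. This is a minimal sketch: the 10 MB threshold and the set of skipped directories are illustrative choices, not CodeScope requirements.

```python
import os

SKIP_DIRS = {"node_modules", "dist", "build", ".git"}   # noisy directories to exclude
SIZE_LIMIT = 10 * 1024 * 1024                           # flag files over 10 MB (arbitrary cutoff)

def find_large_files(root="."):
    """Yield paths of files larger than SIZE_LIMIT, skipping noisy directories."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune skipped directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and os.path.getsize(path) > SIZE_LIMIT:
                yield path
```

Run it from the repository root and delete (or add to your ignore rules) anything it reports before ingesting.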
## Use Descriptive File and Folder Names

CodeScope uses metadata during retrieval. Descriptive file and folder names help the AI locate relevant context more effectively.
## Ask Specific Questions

Instead of "How does it work?", try:

- "Explain the logic in `auth_service.py`."
- "Show me how the `User` model is related to the `Post` model."
## Use Technical Vocabulary

The vector search relies on semantic similarity, so use technical terms relevant to your query.
- "Show me the middleware for CORS configuration."
- "Find the decorator used for private routes."
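To see why specific wording helps, here is a toy illustration of similarity-based retrieval. This is a bag-of-words sketch for intuition only; CodeScope's actual vector search uses learned embeddings, but the principle, that queries sharing terms and concepts with a code chunk score higher, is the same.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

# A hypothetical indexed chunk and two candidate queries.
doc = "middleware that applies the CORS configuration to incoming requests"
vague = "how does it work"
specific = "show me the middleware for CORS configuration"

# The query that shares technical terms with the chunk scores higher.
assert cosine_similarity(specific, doc) > cosine_similarity(vague, doc)
```

A vague query overlaps with almost everything equally, so the retriever has little signal to rank one chunk above another.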
## Iterate on Your Queries

Don't expect the perfect answer on the first try. Use the AI's response to refine your next question.
- "That's helpful. Now can you show me where that variable is initialized?"
## Choose the Right Model for the Task

Different local models suit different tasks:

- Debugging & Logic: Llama 3 or Mistral.
- Pure Code Generation: CodeLlama or DeepSeek Coder.
- Quick Queries: Phi-2 or TinyLlama.
## Troubleshooting

- If responses are slow, switch to a smaller model.
- If the AI is "hallucinating", switch to a larger, more capable model.
## Privacy

Remember that CodeScope is 100% local. You can safely chat about sensitive proprietary logic or API keys (though you should still avoid committing keys to your repo!).
## Review Generated Code

Always review code generated by the AI before integrating it into your production codebase. Local LLMs, like all AI, can make mistakes.
## Related Pages

- Installation: Installation Guide
- User Guide: User Guide
- FAQ: FAQ