
🌟 Best Practices

Maximize the efficiency and accuracy of CodeScope with these recommended patterns.

Repository Preparation

1. Clean Your Codebase

Before ingesting, make sure your repository is free of large, irrelevant files (a filtering sketch follows the list below).

  • Delete or ignore node_modules, dist, build, and .git directories.
  • Large binary files or massive datasets will slow down ingestion and decrease search relevance.
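
If you prefer to filter programmatically rather than deleting files by hand, a walk that prunes unwanted directories before ingestion might look like the sketch below. This is a minimal, hypothetical example: the directory names and the 1 MB cutoff are assumptions, not CodeScope defaults.

```python
import os

# Hypothetical ignore list and size cutoff -- adjust for your project;
# these are not CodeScope's built-in defaults.
IGNORED_DIRS = {"node_modules", "dist", "build", ".git", "__pycache__"}
MAX_FILE_BYTES = 1_000_000  # skip anything larger than ~1 MB

def iter_source_files(root: str):
    """Yield file paths worth ingesting, skipping bulky or irrelevant entries."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune ignored directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in IGNORED_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) <= MAX_FILE_BYTES:
                yield path
```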

2. Meaningful File Names

CodeScope uses metadata during retrieval. Descriptive file and folder names help the AI locate relevant context more effectively.
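
As a rough illustration (a hypothetical chunk record, not CodeScope's actual schema), a descriptive path gives the retriever an extra signal to match against:

```python
# Two chunks with identical content but different path metadata.
good = {"path": "services/auth_service.py", "text": "def verify_token(token): ..."}
poor = {"path": "src/utils2.py",            "text": "def verify_token(token): ..."}

# A query like "where is token verification in the auth service?" can match
# `good` on both its content and its path, while `poor` matches on content alone.
```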

Effective Prompting

1. Be Specific

Instead of "How does it work?", try:

  • "Explain the logic in auth_service.py."
  • "Show me how the User model is related to the Post model."

2. Use Keywords

Vector search relies on semantic similarity, so use technical terms relevant to your query (the sketch after these examples shows why this helps).

  • "Show me the middleware for CORS configuration."
  • "Find the decorator used for private routes."

3. Iterative Refinement

Don't expect the perfect answer on the first try. Use the AI's response to refine your next question.

  • "That's helpful. Now can you show me where that variable is initialized?"

Model Selection

1. Match the Task to the Model

  • Debugging & Logic: Llama 3 or Mistral.
  • Pure Code Generation: CodeLlama or DeepSeek Coder.
  • Quick Queries: Phi-2 or TinyLlama.
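
If you script your queries, an explicit lookup keeps the choice deliberate. The task names and helper below are hypothetical, and the model tags are only examples of how the models listed above are commonly named locally:

```python
# Hypothetical task-to-model mapping based on the recommendations above.
MODEL_FOR_TASK = {
    "debugging":  "llama3",
    "generation": "codellama",
    "quick":      "phi",  # small model for fast, low-stakes queries
}

def pick_model(task: str) -> str:
    """Return a sensible default model for a task, with a general fallback."""
    return MODEL_FOR_TASK.get(task, "mistral")

print(pick_model("generation"))  # -> "codellama"
```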

2. Hardware Considerations

  • If responses are slow, switch to a smaller model.
  • If the AI is "hallucinating", switch to a larger, more capable model.

Privacy & Security

1. Local is Safe

Remember that CodeScope is 100% local. You can safely chat about sensitive proprietary logic or API keys (though you should still avoid committing keys to your repo!).

2. Review AI Output

Always review code generated by the AI before integrating it into your production codebase. Local LLMs, like all AI, can make mistakes.
