Adapt SWEAgentTranslator for local model use via Ollama #26
base: develop
Conversation
Pull Request Overview
This PR adapts the SWEAgentTranslator class to support local LLM models via Ollama, in addition to existing cloud-based models. The changes enable users to run translations using local models with appropriate configurations.
Key changes:
- Adds automatic detection and setup for Ollama-based models (prefixed with "ollama/")
- Implements automatic Ollama server launch with health checks
- Introduces new configuration parameters for parser type, max input tokens, and custom config files
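The key changes above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual implementation: the function names, the 30-second timeout, and the use of Ollama's default endpoint `http://localhost:11434` are assumptions for the sketch; only the `"ollama/"` prefix convention and the background `ollama serve` launch with a health check are taken from the PR description.

```python
import subprocess
import time
import urllib.error
import urllib.request

# Ollama's default local endpoint (assumed; configurable in real deployments).
OLLAMA_URL = "http://localhost:11434"


def is_ollama_model(model_name: str) -> bool:
    # Per the PR, local models are detected by an "ollama/" prefix.
    return model_name.startswith("ollama/")


def ollama_is_healthy(url: str = OLLAMA_URL) -> bool:
    # The Ollama server responds to a plain GET on its root path.
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def ensure_ollama_running(timeout_s: float = 30.0) -> None:
    # Launch `ollama serve` detached in the background, then poll
    # until the server answers or the timeout expires.
    if ollama_is_healthy():
        return
    subprocess.Popen(
        ["ollama", "serve"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.STDOUT,
        stdin=subprocess.DEVNULL,
        start_new_session=True,
    )
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if ollama_is_healthy():
            return
        time.sleep(0.5)
    raise RuntimeError("Ollama server did not become healthy in time")
```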
Reviewed Changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| targets/microXOR/cuda/repo/translation_task.md | Removes translation task documentation (entire file deleted) |
| src/translate/swe_agent/swe_agent_translator.py | Adds Ollama support with server management, new configuration options, and model detection logic |
```python
start_new_session=True,
env=env)
```
Copilot AI · Oct 28, 2025
Undefined variable `env`: the `Popen` call passes `env=env`, but `env` is not defined anywhere in this function. Either remove the `env` argument or define it before use.
Suggested change:

```diff
-start_new_session=True,
-env=env)
+start_new_session=True)
```
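Alternatively, if a custom environment is actually wanted for the Ollama subprocess, `env` could be defined before the call. A minimal sketch of that second option, assuming the surrounding function receives the command as `ollama_command`; the `OLLAMA_HOST` override is a hypothetical illustration, not something the PR sets:

```python
import os
import subprocess


def launch_ollama(ollama_command):
    # Define env before use, copying the parent environment so the
    # child process still sees PATH, HOME, etc.
    env = os.environ.copy()
    env["OLLAMA_HOST"] = "127.0.0.1:11434"  # hypothetical override, illustration only
    return subprocess.Popen(
        ollama_command,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.STDOUT,
        stdin=subprocess.DEVNULL,
        start_new_session=True,
        env=env,
    )
```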
```python
subprocess.Popen(ollama_command,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.STDOUT,
    stdin=subprocess.DEVNULL,
    start_new_session=True,
    env=env)
```
Copilot AI · Oct 28, 2025
Inconsistent indentation in Popen arguments. The arguments should be consistently indented, and trailing whitespace on lines 108-109 should be removed.
Suggested change:

```diff
-subprocess.Popen(ollama_command,
-    stdout=subprocess.DEVNULL,
-    stderr=subprocess.STDOUT,
-    stdin=subprocess.DEVNULL,
-    start_new_session=True,
-    env=env)
+subprocess.Popen(
+    ollama_command,
+    stdout=subprocess.DEVNULL,
+    stderr=subprocess.STDOUT,
+    stdin=subprocess.DEVNULL,
+    start_new_session=True,
+    env=env
+)
```
No description provided.