This repository contains benchmark evaluation infrastructure for OpenHands agents. It provides standardized evaluation pipelines for testing agent capabilities across various real-world tasks.
| Benchmark | Description | Status |
|---|---|---|
| SWE-Bench | Software engineering tasks from GitHub issues | ✅ Active |
| GAIA | General AI assistant tasks requiring multi-step reasoning | ✅ Active |
See the individual benchmark directories for detailed usage instructions.
Before running any benchmarks, you need to set up the environment and ensure the local Agent SDK submodule is initialized.
```bash
make build
```

**📦 Submodule & Environment Setup**
The Benchmarks project uses a local git submodule for the OpenHands Agent SDK.
This ensures your code runs against a specific, reproducible commit.
Run this once after cloning (`make build` already does it for you):
```bash
git submodule update --init --recursive
```

This command will:

- clone the SDK into `vendor/software-agent-sdk/`
- check out the exact commit pinned by this repo
- make it available for local development (`uv sync` will install from the local folder)
If you ever clone this repository again, remember to re-initialize the submodule with the same command.
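To confirm the submodule is initialized and sitting at the pinned commit, a quick sanity check (the exact SHA will vary with the pin):

```bash
# Show the submodule's recorded commit.
# A leading "-" means it is not initialized yet;
# a leading "+" means the checkout differs from the pinned commit.
git submodule status vendor/software-agent-sdk
```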
Once the submodule is set up, install dependencies via `uv`:

```bash
make build
```

This runs `uv sync` and ensures the `openhands-*` packages (SDK, tools, workspace, agent-server) are installed from the local workspace declared in `pyproject.toml`.
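To spot-check that those packages resolved after the sync, you can filter the installed package list (a minimal check; it assumes the package names follow the `openhands-*` pattern mentioned above):

```bash
# List installed packages and filter for the openhands family.
uv pip list | grep openhands
```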
If you want to update to a newer version of the SDK:
```bash
cd vendor/software-agent-sdk
git fetch
git checkout <new_commit_or_branch>
cd ../..
git add vendor/software-agent-sdk
git commit -m "Update software-agent-sdk submodule to <new_commit_sha>"
```

Then re-run:

```bash
make build
```

to rebuild your environment with the new SDK code.
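Before or after an update, you can inspect which SDK commit your working tree is actually using (a small convenience, not part of `make build`):

```bash
# Print the commit currently checked out inside the submodule.
git -C vendor/software-agent-sdk log -1 --oneline
```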
All benchmarks require an LLM configuration file. Define your LLM config as a JSON file whose fields follow the model fields of the `LLM` class.

Example (`.llm_config/example.json`):
```json
{
  "model": "litellm_proxy/anthropic/claude-sonnet-4-20250514",
  "base_url": "https://llm-proxy.eval.all-hands.dev",
  "api_key": "YOUR_API_KEY_HERE"
}
```
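If you would rather not store a real key in the file, one approach is to generate the config from an environment variable. A minimal sketch, assuming a hypothetical `LLM_API_KEY` variable and the example model above:

```bash
# Write the LLM config with the API key injected from the environment.
# LLM_API_KEY is an assumed variable name; export it before running this.
mkdir -p .llm_config
cat > .llm_config/example.json <<EOF
{
  "model": "litellm_proxy/anthropic/claude-sonnet-4-20250514",
  "base_url": "https://llm-proxy.eval.all-hands.dev",
  "api_key": "${LLM_API_KEY}"
}
EOF
```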
Validate your configuration:

```bash
uv run validate-cfg .llm_config/YOUR_CONFIG_PATH.json
```

After setting up the environment and configuring your LLM, see the individual benchmark directories for specific usage instructions.
- Original OpenHands: https://github.com/OpenHands/OpenHands/
- Agent SDK: https://github.com/OpenHands/software-agent-sdk
- SWE-Bench: https://www.swebench.com/