
Conversation


@jpshackelford jpshackelford commented Nov 6, 2025

Problem

The hello world example uses a hardcoded Anthropic model name instead of reading it from environment variables the way the other examples do. This creates an inconsistent experience:

  • Users must change LLM_API_KEY depending on which example they want to run
  • Demo workflows are disrupted when you run hello world first and then move on to the other examples
  • First-time users encounter confusing stack traces instead of clear error messages when API keys don't match, giving the impression that the SDK or the examples are buggy

Solution

This change makes hello world consistent with other examples by:

  • Using the LLM_MODEL environment variable, with openhands/claude-sonnet-4-5-20250929 as the default
  • Adding a comment with instructions on where to get API keys and how to switch to other models

Now users can set up their environment once and run all examples with the same configuration, making the SDK more approachable for first-time users and smoother for demos.
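
For reference, a minimal sketch of the resulting configuration pattern is shown below. The import path and the constructor parameters (model, api_key, base_url, usage_id) are assumptions based on the commit messages in this PR, not the exact code of the example:

import os

from openhands.sdk import LLM  # assumed import path

# Read the model, API key, and base URL from the environment, with a default
# model so the example still runs when LLM_MODEL is unset.
llm = LLM(
    model=os.getenv("LLM_MODEL", "openhands/claude-sonnet-4-5-20250929"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    usage_id="hello-world",  # assumed identifier used for usage/cost tracking
)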


Agent Server images for this PR

GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server

Variants & Base Images

Variant | Architectures | Base Image | Docs / Tags
java | amd64, arm64 | eclipse-temurin:17-jdk | Link
python | amd64, arm64 | nikolaik/python-nodejs:python3.12-nodejs22 | Link
golang | amd64, arm64 | golang:1.21-bookworm | Link

Pull (multi-arch manifest)

# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:76dbec2-python

Run

docker run -it --rm \
  -p 8000:8000 \
  --name agent-server-76dbec2-python \
  ghcr.io/openhands/agent-server:76dbec2-python

All tags pushed for this build

ghcr.io/openhands/agent-server:76dbec2-golang-amd64
ghcr.io/openhands/agent-server:76dbec2-golang_tag_1.21-bookworm-amd64
ghcr.io/openhands/agent-server:76dbec2-golang-arm64
ghcr.io/openhands/agent-server:76dbec2-golang_tag_1.21-bookworm-arm64
ghcr.io/openhands/agent-server:76dbec2-java-amd64
ghcr.io/openhands/agent-server:76dbec2-eclipse-temurin_tag_17-jdk-amd64
ghcr.io/openhands/agent-server:76dbec2-java-arm64
ghcr.io/openhands/agent-server:76dbec2-eclipse-temurin_tag_17-jdk-arm64
ghcr.io/openhands/agent-server:76dbec2-python-amd64
ghcr.io/openhands/agent-server:76dbec2-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-amd64
ghcr.io/openhands/agent-server:76dbec2-python-arm64
ghcr.io/openhands/agent-server:76dbec2-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-arm64
ghcr.io/openhands/agent-server:76dbec2-golang
ghcr.io/openhands/agent-server:76dbec2-java
ghcr.io/openhands/agent-server:76dbec2-python

About Multi-Architecture Support

  • Each variant tag (e.g., 76dbec2-python) is a multi-arch manifest supporting both amd64 and arm64
  • Docker automatically pulls the correct architecture for your platform
  • Individual architecture tags (e.g., 76dbec2-python-amd64) are also available if needed

Allow the hello world example to use the same LLM_API_KEY and LLM_BASE_URL
environment variables as other examples, enabling consistent configuration
across all examples.

Changes:
- Use os.getenv() for model with default 'openhands/claude-sonnet-4-5-20250929'
- Add base_url parameter from LLM_BASE_URL environment variable
- Add usage_id for better tracking
- Add comments about API key configuration
@jpshackelford jpshackelford force-pushed the jps/same-settings-for-all-examples branch from dfa6594 to 7f3ad3c on November 6, 2025 at 18:09
@jpshackelford jpshackelford changed the title from "fix(examples): use environment variables for LLM config in hello world" to "fix(examples): allow all examples to run with same LLM_API_KEY" on Nov 6, 2025

@rbren rbren left a comment


will give feedback in a min :)

- Update all example files to use anthropic/claude-sonnet-4-5-20250929 as the default model
- Remove comments about obtaining credits from hello world example
- Model and API key can be overridden via LLM_MODEL and LLM_API_KEY environment variables
@jpshackelford
Contributor Author

Based on feedback via Slack, I added a commit that:

  1. Removes comment from 01_hello_world.py
  2. Updates all examples from openhands/claude-sonnet-4-5-20250929 to anthropic/claude-sonnet-4-5-20250929
  3. Allows the LLM_MODEL and LLM_API_KEY environment variables to override the settings in 01_hello_world.py, as in the other examples

This commit DOES NOT address the few cases where a model other than openhands/claude-sonnet-4-5-20250929 is hardcoded.

enyst and others added 2 commits November 9, 2025 04:49
…LM routing example

Ensures both primary and secondary LLMs use the same provider/key by default.

Co-authored-by: openhands <[email protected]>
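
As an illustration of what sharing the provider/key could look like here, both LLMs in the routing example can be built from the same environment variables. This is a hedged sketch under the same assumptions as the earlier one (openhands.sdk.LLM with model/api_key/usage_id parameters); the usage_id values are illustrative, not the exact names in the example:

import os

from openhands.sdk import LLM  # assumed import path

model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
api_key = os.getenv("LLM_API_KEY")

# Primary and secondary LLMs share the same provider and key by default,
# so a single LLM_API_KEY covers the whole routing example.
primary_llm = LLM(model=model, api_key=api_key, usage_id="router-primary")
secondary_llm = LLM(model=model, api_key=api_key, usage_id="router-secondary")
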
…M_MODEL

Restores the inline env read for the API key; the model is still read from the env with a fallback.

Co-authored-by: openhands <[email protected]>
@enyst enyst added the test-examples Run all applicable "examples/" files. Expensive operation. label Nov 9, 2025

github-actions bot commented Nov 9, 2025

🔄 Running Examples with openhands/claude-haiku-4-5-20251001

Last updated: 2025-11-09 05:21:47 UTC

Example Status Duration Cost
01_standalone_sdk/02_custom_tools.py ✅ PASS 26s $0.03
01_standalone_sdk/03_activate_skill.py ✅ PASS 11s $0.01
01_standalone_sdk/05_use_llm_registry.py ✅ PASS 8s $0.01
01_standalone_sdk/07_mcp_integration.py ✅ PASS 43s $0.02
01_standalone_sdk/09_pause_example.py ✅ PASS 13s $0.01
01_standalone_sdk/10_persistence.py ✅ PASS 37s $0.03
01_standalone_sdk/11_async.py ✅ PASS 32s $0.03
01_standalone_sdk/12_custom_secrets.py ✅ PASS 14s $0.01
01_standalone_sdk/13_get_llm_metrics.py ✅ PASS 31s $0.02
01_standalone_sdk/14_context_condenser.py ✅ PASS 198s $0.40
01_standalone_sdk/17_image_input.py ✅ PASS 19s $0.02
01_standalone_sdk/18_send_message_while_processing.py ✅ PASS 20s $0.01
01_standalone_sdk/19_llm_routing.py ✅ PASS 16s $0.01
01_standalone_sdk/20_stuck_detector.py ✅ PASS 16s $0.01
01_standalone_sdk/21_generate_extraneous_conversation_costs.py ❌ FAIL (exit: 1) 10s $0.00
01_standalone_sdk/22_anthropic_thinking.py ✅ PASS 14s $0.01
01_standalone_sdk/23_responses_reasoning.py ✅ PASS 36s $0.01
01_standalone_sdk/24_planning_agent_workflow.py ✅ PASS 312s $0.42
01_standalone_sdk/25_agent_delegation.py ❌ FAIL (exit: 1) 45s $0.00
01_standalone_sdk/26_custom_visualizer.py ✅ PASS 19s N/A
02_remote_agent_server/01_convo_with_local_agent_server.py ✅ PASS 55s $0.04
02_remote_agent_server/02_convo_with_docker_sandboxed_server.py ✅ PASS 116s $0.04
02_remote_agent_server/03_browser_use_with_docker_sandboxed_server.py ✅ PASS 63s $0.04
02_remote_agent_server/04_convo_with_api_sandboxed_server.py ❌ FAIL (exit: 1) 3s $0.00

❌ Some tests failed

Total: 24 | Passed: 21 | Failed: 3

View full workflow run

…osts example

Ensures CI uses the same provider/key by reading the model from LLM_MODEL for both LLMs.

Co-authored-by: openhands <[email protected]>
@OpenHands OpenHands deleted a comment from openhands-ai bot Nov 9, 2025