
Conversation

@enyst (Collaborator) commented Nov 23, 2025:

This PR adds a new system_prompt_gpt_5.j2 template tailored for GPT-5 usage within the OpenHands Software Agent SDK.

Key points:

  • Reuses the existing system_prompt.j2 base to keep shared behavior and policies centralized.
  • Adapts relevant concepts from OpenAI's Codex CLI prompt, focusing on planning, responsiveness, validation, and presentation of results.
  • Integrates the existing task_tracker tool as the primary mechanism for long-horizon planning and progress tracking (replacing the update_plan concept from the original prompt).

This template is intended as a starting point for GPT-5-specific behavior and can be iterated on as we gain experience running GPT-5 in this environment.
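For context, the reuse the first bullet describes is a plain Jinja2 include relationship. Below is a minimal, self-contained sketch of that pattern only; the loader, template contents, and the task_tracker wording are illustrative stand-ins, not the actual SDK templates or paths.

```python
from jinja2 import DictLoader, Environment

# Stand-in templates kept in memory so the sketch runs on its own.
templates = {
    # Placeholder for the shared base template that carries common behavior.
    "system_prompt.j2": "You are OpenHands, a software engineering agent.\n",
    # Placeholder for the new GPT-5 template: pull in the base, then layer on
    # GPT-5-specific guidance (planning, validation, task_tracker usage).
    "system_prompt_gpt_5.j2": (
        "{% include 'system_prompt.j2' %}"
        "Use the task_tracker tool to plan and track long-horizon work.\n"
    ),
}

env = Environment(loader=DictLoader(templates))
print(env.get_template("system_prompt_gpt_5.j2").render())
```

Rendering the GPT-5 template then yields the base text followed by the GPT-5-specific additions, which is what keeps shared behavior and policies centralized in system_prompt.j2.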

Co-authored-by: openhands [email protected]



Agent Server images for this PR

GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server

Variants & Base Images

| Variant | Architectures | Base Image |
|---------|---------------|------------|
| java | amd64, arm64 | eclipse-temurin:17-jdk |
| python | amd64, arm64 | nikolaik/python-nodejs:python3.12-nodejs22 |
| golang | amd64, arm64 | golang:1.21-bookworm |

Pull (multi-arch manifest)

# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:6e1e2c2-python

Run

docker run -it --rm \
  -p 8000:8000 \
  --name agent-server-6e1e2c2-python \
  ghcr.io/openhands/agent-server:6e1e2c2-python

All tags pushed for this build

ghcr.io/openhands/agent-server:6e1e2c2-golang-amd64
ghcr.io/openhands/agent-server:6e1e2c2-golang_tag_1.21-bookworm-amd64
ghcr.io/openhands/agent-server:6e1e2c2-golang-arm64
ghcr.io/openhands/agent-server:6e1e2c2-golang_tag_1.21-bookworm-arm64
ghcr.io/openhands/agent-server:6e1e2c2-java-amd64
ghcr.io/openhands/agent-server:6e1e2c2-eclipse-temurin_tag_17-jdk-amd64
ghcr.io/openhands/agent-server:6e1e2c2-java-arm64
ghcr.io/openhands/agent-server:6e1e2c2-eclipse-temurin_tag_17-jdk-arm64
ghcr.io/openhands/agent-server:6e1e2c2-python-amd64
ghcr.io/openhands/agent-server:6e1e2c2-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-amd64
ghcr.io/openhands/agent-server:6e1e2c2-python-arm64
ghcr.io/openhands/agent-server:6e1e2c2-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-arm64
ghcr.io/openhands/agent-server:6e1e2c2-golang
ghcr.io/openhands/agent-server:6e1e2c2-java
ghcr.io/openhands/agent-server:6e1e2c2-python

About Multi-Architecture Support

  • Each variant tag (e.g., 6e1e2c2-python) is a multi-arch manifest supporting both amd64 and arm64
  • Docker automatically pulls the correct architecture for your platform
  • Individual architecture tags (e.g., 6e1e2c2-python-amd64) are also available if needed


- Fix the problem at the root cause rather than applying surface-level patches, when possible.
- Avoid unneeded complexity in your solution.
- Do not attempt to fix unrelated bugs or broken tests. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
@enyst (Collaborator Author) commented:

What if we delete this one 😅

I'm not sure it's necessary to forbid this. GPT-5 does do some of these, but it's rare and, personally, I found it welcome 🤷


## Sharing progress updates

For especially longer tasks that you work on (i.e. requiring many tool calls, or a plan with multiple steps), you should provide progress updates back to the user at reasonable intervals. These updates should be structured as a concise sentence or two (no more than 8–10 words long) recapping progress so far in plain language: this update demonstrates your understanding of what needs to be done, progress so far (i.e. files explored, subtasks complete), and where you're going next.
@enyst (Collaborator Author) commented:

This makes GPT-5 talk


Your final message should read naturally, like an update from a concise teammate. For casual conversation, brainstorming tasks, or quick questions from the user, respond in a friendly, conversational tone. You should ask questions, suggest ideas, and adapt to the user’s style. If you've finished a large amount of work, when describing what you've done to the user, you should follow the final answer formatting guidelines to communicate substantive changes. You don't need to add structured formatting for one-word answers, greetings, or purely conversational exchanges.

You can skip heavy formatting for single, simple actions or confirmations. In these cases, respond in plain sentences with any relevant next step or quick option. Reserve multi-section structured responses for results that need grouping or explanation.
@enyst (Collaborator Author) commented:

These two paragraphs are a bit disputable. They were drafted by OpenAI for codex-cli to shape the LLM's behavior towards the user. Personally, I like GPT-5's current style in final answers; and this guidance is a tad annoying when answers are displayed on GitHub, because they will be unformatted. 🤔


You can skip heavy formatting for single, simple actions or confirmations. In these cases, respond in plain sentences with any relevant next step or quick option. Reserve multi-section structured responses for results that need grouping or explanation.

The user is working on the same computer as you, and has access to your work. As such there's no need to show the full contents of large files you have already written unless the user explicitly asks for them. Similarly, if you've created or modified files using `apply_patch`, there's no need to tell users to "save the file" or "copy the code into a file"—just reference the file path.
@enyst (Collaborator Author) commented:

Suggested change
The user is working on the same computer as you, and has access to your work. As such there's no need to show the full contents of large files you have already written unless the user explicitly asks for them. Similarly, if you've created or modified files using `apply_patch`, there's no need to tell users to "save the file" or "copy the code into a file"—just reference the file path.
The user has access to your work. As such there's no need to show the full contents of large files you have already written unless the user explicitly asks for them. Similarly, if you've created or modified files using `apply_patch`, there's no need to tell users to "save the file" or "copy the code into a file"—just reference the file path.

😅 Ditto, drafted specifically for use in the CLI


The user is working on the same computer as you, and has access to your work. As such there's no need to show the full contents of large files you have already written unless the user explicitly asks for them. Similarly, if you've created or modified files using `apply_patch`, there's no need to tell users to "save the file" or "copy the code into a file"—just reference the file path.

If there's something that you think you could help with as a logical next step, concisely ask the user if they want you to do so. Good examples of this are running tests, committing changes, or building out the next logical component. If there’s something that you couldn't do but that the user might want to do (such as verifying changes by running the app), include those instructions succinctly.
@enyst (Collaborator Author) commented:

This needs dogfooding (well, most of this does)

  • GPT-5 seems good with this to me (though it sometimes skips tests)
  • GPT-5.1, however, is much more inclined to imagine it's in a restricted environment and/or that it cannot use the network, and it generally gives up early. I feel like declaring tests as optional may lead to it skipping tests too often.

Comment on lines +195 to +204
When referencing files in your response, follow these rules:

- Use inline code to make file paths clickable.
- Each reference should have a stand-alone path, even if it's the same file.
- Accepted: absolute, workspace-relative, `a/` or `b/` diff prefixes, or bare filename/suffix.
- Line/column (1-based, optional): `:line[:column]` or `#Lline[Ccolumn]` (column defaults to 1).
- Do not use URIs like `file://`, `vscode://`, or `https://`.
- Do not provide ranges of lines.
- Examples: `src/app.ts`, `src/app.ts:42`, `b/server/index.js#L10`, `C:\repo\project\main.rs:12:5`.

@enyst (Collaborator Author) commented:

Suggested change
When referencing files in your response, follow these rules:
- Use inline code to make file paths clickable.
- Each reference should have a stand-alone path, even if it's the same file.
- Accepted: absolute, workspace-relative, `a/` or `b/` diff prefixes, or bare filename/suffix.
- Line/column (1-based, optional): `:line[:column]` or `#Lline[Ccolumn]` (column defaults to 1).
- Do not use URIs like `file://`, `vscode://`, or `https://`.
- Do not provide ranges of lines.
- Examples: `src/app.ts`, `src/app.ts:42`, `b/server/index.js#L10`, `C:\repo\project\main.rs:12:5`.


### Final answer structure and style guidelines

You are producing plain text that will later be styled by the surrounding tools. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
@enyst (Collaborator Author) commented:

It's very curious how much text goes into the codex-cli system prompt about styling and formatting. Maybe that's not a bad idea... Clearly OpenAI doesn't use the same system prompt for codex web and codex-cli, at least not all of it.


Similarly, once you're confident in correctness, you can suggest or use formatting commands to ensure that your code is well formatted. If there are issues you can iterate up to 3 times to get formatting right, but if you still can't manage it's better to save the user time and present them a correct solution where you call out the formatting in your final message. If the codebase does not have a formatter configured, do not add one.

For all of testing, running, building, and formatting, do not attempt to fix unrelated bugs. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
@enyst (Collaborator Author) commented:

😅 IIRC our default prompt has some restraining wording too. And in practice, with it, GPT-5 seems to me to show mostly the behavior described here: it does not usually fix unrelated bugs, it does check "around" the issue, and it notifies if it sees something suspicious. I can think of exceptions where it fixes stuff.

@enyst (Collaborator Author) commented:

This wording appears twice in this prompt 🤔
