
Conversation


@titu1994 titu1994 commented Jan 9, 2026

What does this PR do?

Fixes #7535

This pull request introduces a new trace logging feature for LLM request-response pairs, allowing users to save detailed traces of model interactions for debugging or auditing purposes. It adds a --trace-dir CLI option (also configurable via the OPENCODE_TRACE_DIR environment variable), implements a TraceLogger utility, and integrates trace logging into the LLM streaming pipeline.

Additionally, it includes some workarounds for project-directory handling in the run command due to bun dev issues; I can revert those if needed.

The most important changes are:

Trace Logging Infrastructure:

  • Added a new TraceLogger utility (trace-logger.ts) to create, update, and persist detailed trace logs of LLM requests and responses, including errors and system info. Traces are saved as JSON files in a configurable directory.
  • Introduced a --trace-dir CLI option and OPENCODE_TRACE_DIR env variable to enable trace logging, and initialized the logger during CLI startup.
  • Integrated trace logging into the LLM streaming pipeline: traces are created for each request, updated with streamed response data or errors, and written to disk upon completion.
  • Added an env flag to disable automatic installation of plugin node modules whenever a .opencode directory exists.
    • This is needed for large-scale data generation: each execution of opencode run happens in a separate directory with its own .opencode configuration and permissions. Without the flag, every run repeats the network IO to download and install the 6.4 MB opencode package into node_modules, which blows up both disk usage and network bandwidth.
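To make the shape of the feature concrete, here is a minimal sketch of what a TraceLogger like the one described above could look like. The class name matches the PR, but the method names, the Trace fields, and the filename scheme are assumptions for illustration, not the PR's actual implementation:

```typescript
// Minimal sketch of a TraceLogger: create, update, and persist traces as JSON.
// Field and method names are illustrative assumptions, not the PR's real API.
import * as fs from "node:fs"
import * as path from "node:path"

interface Trace {
  sessionID: string
  request: { modelID: string; messages: { role: string; content: string }[] }
  response?: { content: string; duration?: number }
  error?: string
}

class TraceLogger {
  private traces = new Map<string, Trace>()

  constructor(private dir: string) {
    // Ensure the configured trace directory exists up front
    fs.mkdirSync(dir, { recursive: true })
  }

  create(id: string, trace: Trace): void {
    this.traces.set(id, trace)
  }

  update(id: string, patch: Partial<Trace>): void {
    const trace = this.traces.get(id)
    if (trace) Object.assign(trace, patch)
  }

  // Write the trace to disk, embedding a timestamp and the session ID in the
  // filename (mirroring the filenames listed later in this PR description).
  persist(id: string): string {
    const trace = this.traces.get(id)
    if (!trace) throw new Error(`unknown trace ${id}`)
    const stamp = new Date().toISOString().replace(/[:.]/g, "-")
    const file = path.join(this.dir, `${stamp}_${trace.sessionID}_trace_${id}.json`)
    fs.writeFileSync(file, JSON.stringify(trace, null, 2))
    return file
  }
}
```

In the streaming pipeline, `create` would run when the request is built, `update` as streamed chunks or errors arrive, and `persist` once the stream completes.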

CLI and Project Directory Handling:

  • Added a --project-dir option to the run command, allowing users to specify the working directory for project execution. All relevant paths and server initialization now respect this directory.
    • I needed this because running opencode run from the directory I wanted wasn't working. This may not be strictly necessary; I may just not have used it properly with the bun dev setup.
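The directory-resolution logic described above can be sketched as follows; the function name is hypothetical, but the fallback behavior (explicit option wins, otherwise use the current working directory) matches the description:

```typescript
// Sketch of resolving the working directory for `opencode run`.
// `resolveProjectDir` is a hypothetical name for illustration.
import * as path from "node:path"

function resolveProjectDir(option?: string): string {
  // --project-dir takes precedence; otherwise fall back to the cwd
  if (!option) return process.cwd()
  return path.resolve(option)
}
```

All downstream path resolution and server initialization would then be based on the value this returns.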

Model Data Loading Fix:

  • Replaced a macro import for model metadata with a runtime function that fetches model data from disk or the network, ensuring compatibility with browser conditions and bun run.
    • Again, this seems to be an issue with the bun dev / bun run setup. It may not be necessary, in which case I can revert it.
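A runtime loader of this shape could look like the sketch below. The environment variable name and the fallback URL are assumptions made for illustration, not the PR's actual values:

```typescript
// Sketch of a runtime model-data loader replacing a compile-time macro import.
// OPENCODE_MODELS_PATH and the fallback URL are illustrative assumptions.
import * as fs from "node:fs"

async function loadModels(): Promise<Record<string, unknown>> {
  // Prefer a local models file if the (assumed) env override points at one
  const local = process.env["OPENCODE_MODELS_PATH"]
  if (local && fs.existsSync(local)) {
    return JSON.parse(fs.readFileSync(local, "utf8"))
  }
  // Otherwise fetch the model metadata from the network at runtime
  const res = await fetch("https://example.com/models.json")
  if (!res.ok) throw new Error(`failed to fetch models: ${res.status}`)
  return res.json()
}
```

Because the data is fetched at runtime rather than inlined by a bundler macro, the same code path works under `bun run --conditions=browser` and plain `bun run`.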

How did you verify your code works?

I ran the following command:

`opencode run {problem_text} --project-dir {task_dir} --trace-dir {trace_dir}`

Here are some resulting trace files:
2026-01-09T22-43-08-873Z_ses_45b128da9ffevGAq3ugKoAeozK_trace_mk7gphvn_1smwbk5h.json
2026-01-09T22-43-40-672Z_ses_45b128da9ffevGAq3ugKoAeozK_trace_mk7gpm2h_8cwrj23j.json
2026-01-09T22-43-45-843Z_ses_45b128da9ffevGAq3ugKoAeozK_trace_mk7gqam5_fubms5zt.json

…rray

Updated the TraceLogger to combine system prompts into the messages array with the role "system". Adjusted related tests to verify the new structure and ensure proper functionality.
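The merge described above might look like the sketch below: each system prompt becomes its own `role: "system"` entry at the head of the OpenAI-style messages array. The function name is hypothetical:

```typescript
// Sketch of folding system prompts into the messages array with role "system".
// `withSystemPrompts` is an illustrative name, not the PR's actual function.
interface Message {
  role: "system" | "user" | "assistant"
  content: string
}

function withSystemPrompts(system: string[], messages: Message[]): Message[] {
  // System prompts lead the array, one entry per prompt, in their given order
  const systemMessages = system.map(
    (content): Message => ({ role: "system", content }),
  )
  return [...systemMessages, ...messages]
}
```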
…onse structure

Introduced a new CLI option `--trace-dir` to specify the directory for saving request-response trace logs. Updated the TraceLogger to initialize from this option or the corresponding environment variable. Enhanced the response structure to include additional fields and improved error handling, ensuring comprehensive logging of requests and responses.
…on fields

Added modelID and duration fields to the TraceLogger for improved request tracking. Updated tools handling to support both array and object formats. Adjusted tests to reflect changes in response structure and ensure accurate logging of errors and tool calls.
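Supporting tools in both array and object form, as mentioned above, amounts to a small normalization step; this sketch (names are assumptions) shows one way to flatten a name-keyed object into the array shape:

```typescript
// Sketch of normalizing tools given as either an array or a name-keyed object.
// The Tool shape and `normalizeTools` name are illustrative assumptions.
type Tool = { name: string; description?: string }

function normalizeTools(
  tools: Tool[] | Record<string, Omit<Tool, "name">>,
): Tool[] {
  if (Array.isArray(tools)) return tools
  // Object form: the key is the tool name, the value holds the remaining fields
  return Object.entries(tools).map(([name, rest]) => ({ name, ...rest }))
}
```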
Introduced a new CLI option `--project-dir` to allow users to define the project directory for running commands. Updated the command handler to use the specified directory or default to the current working directory. Adjusted file resolution and SDK initialization to reflect the new directory option.
Implemented a new function to fetch models data at runtime, replacing the previous macro import due to compatibility issues with `bun run --conditions=browser`. The function checks for a local environment variable and falls back to fetching from a remote API if not found.
- Resolved conflicts in run.ts: kept both project-dir and variant options
- Resolved conflicts in llm.ts: kept both TraceLogger and PermissionNext imports

github-actions bot commented Jan 9, 2026

Hey! Your PR title Support for logging session request-response pairs as OpenAI messages doesn't follow conventional commit format.

Please update it to start with one of:

  • feat: or feat(scope): new feature
  • fix: or fix(scope): bug fix
  • docs: or docs(scope): documentation changes
  • chore: or chore(scope): maintenance tasks
  • refactor: or refactor(scope): code refactoring
  • test: or test(scope): adding or updating tests

Where scope is the package name (e.g., app, desktop, opencode).

See CONTRIBUTING.md for details.


github-actions bot commented Jan 9, 2026

The following comment was made by an LLM, it may be inaccurate:

No duplicate PRs found

@titu1994 titu1994 changed the title Support for logging session request-response pairs as OpenAI messages feat: Support for logging session request-response pairs as OpenAI messages Jan 9, 2026

Successfully merging this pull request may close these issues.

[FEATURE]: Session logging as OpenAI messages