eric642 (Contributor) commented Nov 19, 2025

  • Simplified generateStream by using openai-go's ChatCompletionAccumulator
  • Removed manual tool call accumulation logic (currentToolCall, toolCallCollects)
  • Created convertChatCompletionToModelResponse helper for unified response conversion
  • Added support for detailed token usage fields:
    • ThoughtsTokens (reasoning tokens)
    • CachedContentTokens (cached tokens)
    • Audio, prediction tokens in custom field
  • Added support for refusal messages and system fingerprint metadata
  • Refactored generateComplete to reuse convertChatCompletionToModelResponse


This commit adds support for configuring whether tools can be executed
concurrently or must be executed sequentially.

Changes:
- Add Concurrent() method to Tool interface to query concurrency support
- Add WithConcurrent() option for configuring tool concurrency settings
- Update all tool definition functions (DefineTool, NewTool, etc.) to
  accept optional ToolOption parameters using the functional options pattern
- Store concurrency flag in tool metadata and tool struct
- Modify handleToolRequests() to execute sequential tools first, then
  concurrent tools in parallel
- Extract toolExecution struct and executeToolRequest() helper function
  to improve code organization and eliminate duplication

By default, all tools support concurrent execution to maintain backward
compatibility. Tools can opt out by using WithConcurrent(false).

Example usage:
  tool := ai.DefineTool(registry, "myTool", "description",
    func(ctx *ai.ToolContext, input string) (string, error) {
      return "result", nil
    },
    ai.WithConcurrent(false)) // Sequential execution
eric642 commented Nov 20, 2025

@hugoaguirre @apascal07 Please review it.

apascal07 (Collaborator)
apascal07 commented Nov 25, 2025

Hi Eric,

Thank you for this PR. I have given it a quick look and will do a more thorough review once I investigate how, if at all, the changes to the tools API interact with our in-progress implementation of multi-part tool responses, which you are eager to see supported. We'd like to roll multiple changes to the API into one release. Thanks for your patience!

eric642 commented Nov 26, 2025

@apascal07 OK. Thank you very much for your patience in handling this.
