
Conversation

@quanru (Collaborator) commented on Oct 29, 2025

Summary

This PR integrates GPT-5 model support from PR #1193 (by @kidandcat) into the 1.0 branch. The implementation automatically detects GPT-5 models and routes their requests through the OpenAI Responses API with GPT-5-compatible parameters (max_completion_tokens, default temperature).

Changes

Core Implementation

  • Automatic GPT-5 Detection: Detects any model name containing "gpt-5" (case-insensitive); see the sketch after this list
  • Parameter Adaptation:
    • Uses max_completion_tokens instead of max_tokens for GPT-5 models
    • Respects GPT-5's temperature restriction (only the default value is supported)
  • Response Parsing: Handles the GPT-5 Responses API response format for both streaming and non-streaming calls
  • Backward Compatibility: Maintains full compatibility with GPT-4 and earlier models
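
A minimal TypeScript sketch of the detection and parameter rules above. The helper names (isGpt5Model, buildTokenParams) and the non-GPT-5 defaults are assumptions for illustration, not the actual code in packages/core/src/ai-model/service-caller/index.ts:

```ts
// Hypothetical helpers illustrating the rules above; names and defaults are
// assumptions and do not mirror the real service-caller implementation.

/** Detect GPT-5 family models by name, case-insensitively. */
export function isGpt5Model(modelName: string): boolean {
  return modelName.toLowerCase().includes('gpt-5');
}

export interface TokenParams {
  max_tokens?: number;
  max_completion_tokens?: number;
  temperature?: number;
}

/** Build the token and temperature parameters for a model request. */
export function buildTokenParams(
  modelName: string,
  maxTokens: number,
  temperature = 0.1, // assumed default for non-GPT-5 models
): TokenParams {
  if (isGpt5Model(modelName)) {
    // GPT-5: max_completion_tokens replaces max_tokens, and temperature is
    // omitted because GPT-5 only supports its default value.
    return { max_completion_tokens: maxTokens };
  }
  return { max_tokens: maxTokens, temperature };
}
```

For example, buildTokenParams('GPT-5-turbo', 4096) yields { max_completion_tokens: 4096 }, while buildTokenParams('gpt-4o', 4096) yields { max_tokens: 4096, temperature: 0.1 }.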

Files Modified

  • packages/core/src/ai-model/service-caller/index.ts: Integrated GPT-5 logic into 1.0's codebase structure
  • apps/site/docs/en/model-provider.mdx: Added GPT-5 documentation while preserving 1.0's environment variable naming conventions
  • packages/core/tests/unit-test/service-caller.test.ts: Merged GPT-5 tests with existing 1.0 test suite

Conflict Resolution

The original PR was based on the main branch. This PR adapts all changes to work with the 1.0 branch's structure, which includes:

  • Azure OpenAI support
  • Anthropic SDK integration
  • Updated environment variable naming (MODEL_API_KEY)

Credit

Original implementation by @kidandcat in PR #1193. The commit history preserves the original author's credit.

Testing

  • ✅ All 21 unit tests pass
  • ✅ Project builds successfully
  • ✅ GPT-5 model detection works with various naming conventions (gpt-5, gpt-5-turbo, GPT-5, etc.); a test sketch follows this list
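
A sketch of how the naming-convention tests could look, assuming a vitest setup like the one in packages/core and the hypothetical isGpt5Model helper from the sketch above (the real tests live in packages/core/tests/unit-test/service-caller.test.ts):

```ts
import { describe, expect, it } from 'vitest';
// Hypothetical module from the earlier sketch, not the actual service caller.
import { isGpt5Model } from './service-caller-sketch';

describe('GPT-5 model detection', () => {
  it('matches gpt-5 names regardless of casing or suffix', () => {
    for (const name of ['gpt-5', 'gpt-5-turbo', 'GPT-5', 'Gpt-5-Mini']) {
      expect(isGpt5Model(name)).toBe(true);
    }
  });

  it('does not match GPT-4 and earlier models', () => {
    for (const name of ['gpt-4o', 'gpt-4-turbo', 'claude-3-5-sonnet']) {
      expect(isGpt5Model(name)).toBe(false);
    }
  });
});
```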

Usage Example

```bash
export MIDSCENE_MODEL_NAME="gpt-5-turbo"
export MODEL_API_KEY="your-api-key"
export OPENAI_MAX_TOKENS="4096"
```

The system automatically detects GPT-5 models and uses the Responses API with max_completion_tokens.
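
A rough sketch of how those variables could feed the request, reusing the hypothetical helpers from the earlier sketch; the fallback values are made up for illustration and are not Midscene's actual defaults:

```ts
// Hypothetical wiring; reuses the sketch helpers, not the real service caller.
import { buildTokenParams, isGpt5Model } from './service-caller-sketch';

const modelName = process.env.MIDSCENE_MODEL_NAME ?? 'gpt-4o'; // assumed fallback
const maxTokens = Number(process.env.OPENAI_MAX_TOKENS ?? '2048'); // assumed fallback

// With MIDSCENE_MODEL_NAME="gpt-5-turbo" and OPENAI_MAX_TOKENS="4096" this
// produces { model: 'gpt-5-turbo', max_completion_tokens: 4096 } on the
// Responses API path; non-GPT-5 models keep max_tokens and temperature on the
// existing Chat Completions path.
const request = { model: modelName, ...buildTokenParams(modelName, maxTokens) };
const apiPath = isGpt5Model(modelName) ? 'Responses API' : 'Chat Completions API';

console.log(apiPath, request);
```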

Closes #1060 (for 1.0 branch)
Related to #1193

kidandcat and others added 2 commits October 29, 2025 15:25
- Automatically detect GPT-5 models by name
- Use max_completion_tokens instead of max_tokens for GPT-5
- Handle GPT-5's temperature restrictions (only supports default)
- Parse GPT-5 Responses API response format
- Add tests for GPT-5 functionality
- Update documentation with GPT-5 configuration

Fixes #1060

🤖 Generated with Claude Code

Co-Authored-By: Claude <[email protected]>
