
release: 0.15.0 #171


Merged 4 commits on Jul 22, 2025
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.14.0"
".": "0.15.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-670ea0d2cc44f52a87dd3cadea45632953283e0636ba30788fdbdb22a232ccac.yml
openapi_spec_hash: d8b7d38911fead545adf3e4297956410
config_hash: 5525bda35e48ea6387c6175c4d1651fa
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-b2a451656ca64d30d174391ebfd94806b4de3ab76dc55b92843cfb7f1a54ecb6.yml
openapi_spec_hash: 27d9691b400f28c17ef063a1374048b0
config_hash: e822d0c9082c8b312264403949243179
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,23 @@
# Changelog

## 0.15.0 (2025-07-21)

Full Changelog: [v0.14.0...v0.15.0](https://github.com/openai/openai-ruby/compare/v0.14.0...v0.15.0)

### Features

* **api:** manual updates ([fb53071](https://github.com/openai/openai-ruby/commit/fb530713d08a4ba49e8bdaecd9848674bb35c333))


### Bug Fixes

* **internal:** tests should use normalized property names ([801e9c2](https://github.com/openai/openai-ruby/commit/801e9c29f65e572a3b49f5cf7891d3053e1d087f))


### Chores

* **api:** event shapes more accurate ([29f32ce](https://github.com/openai/openai-ruby/commit/29f32cedf6112d38fe8de454658a5afd7ad0d2cb))

## 0.14.0 (2025-07-16)

Full Changelog: [v0.13.1...v0.14.0](https://github.com/openai/openai-ruby/compare/v0.13.1...v0.14.0)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.14.0)
openai (0.15.0)
connection_pool

GEM
6 changes: 3 additions & 3 deletions README.md
Expand Up @@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.14.0"
gem "openai", "~> 0.15.0"
```

<!-- x-release-please-end -->
@@ -443,7 +443,7 @@ You can provide typesafe request parameters like so:

```ruby
openai.chat.completions.create(
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(role: "user", content: "Say this is a test")],
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
model: :"gpt-4.1"
)
```
@@ -459,7 +459,7 @@ openai.chat.completions.create(

# You can also splat a full Params class:
params = OpenAI::Chat::CompletionCreateParams.new(
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(role: "user", content: "Say this is a test")],
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
model: :"gpt-4.1"
)
openai.chat.completions.create(**params)
2 changes: 0 additions & 2 deletions lib/openai.rb
@@ -425,8 +425,6 @@
require_relative "openai/models/responses/response_output_text_annotation_added_event"
require_relative "openai/models/responses/response_prompt"
require_relative "openai/models/responses/response_queued_event"
require_relative "openai/models/responses/response_reasoning_delta_event"
require_relative "openai/models/responses/response_reasoning_done_event"
require_relative "openai/models/responses/response_reasoning_item"
require_relative "openai/models/responses/response_reasoning_summary_delta_event"
require_relative "openai/models/responses/response_reasoning_summary_done_event"
9 changes: 0 additions & 9 deletions lib/openai/models/audio/speech_create_params.rb
@@ -111,12 +111,6 @@ module Voice

variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::ECHO }

variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::FABLE }

variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::ONYX }

variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::NOVA }

variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::SAGE }

variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::SHIMMER }
@@ -137,9 +131,6 @@ module Voice
BALLAD = :ballad
CORAL = :coral
ECHO = :echo
FABLE = :fable
ONYX = :onyx
NOVA = :nova
SAGE = :sage
SHIMMER = :shimmer
VERSE = :verse
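This release drops the `fable`, `onyx`, and `nova` constants from the TTS voice union. A minimal sketch of guarding input against the removed values — note the voice list below reflects only the constants visible in this hunk, and the full union may contain more:

```ruby
# Voice symbols still present in the visible portion of the enum after this change.
REMAINING_VOICES = %i[ballad coral echo sage shimmer verse].freeze
# Voice symbols removed by this release.
REMOVED_VOICES = %i[fable onyx nova].freeze

# Returns true when a voice symbol survived this release's removal.
def voice_supported?(voice)
  REMAINING_VOICES.include?(voice) && !REMOVED_VOICES.include?(voice)
end
```

Callers pinned to a removed voice can use a check like this to fall back to a supported one before upgrading.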
4 changes: 2 additions & 2 deletions lib/openai/models/chat/chat_completion.rb
@@ -44,7 +44,7 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
@@ -193,7 +193,7 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
9 changes: 0 additions & 9 deletions lib/openai/models/chat/chat_completion_audio_param.rb
@@ -67,12 +67,6 @@ module Voice

variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::ECHO }

variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::FABLE }

variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::ONYX }

variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::NOVA }

variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::SAGE }

variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::SHIMMER }
@@ -93,9 +87,6 @@ module Voice
BALLAD = :ballad
CORAL = :coral
ECHO = :echo
FABLE = :fable
ONYX = :onyx
NOVA = :nova
SAGE = :sage
SHIMMER = :shimmer
VERSE = :verse
4 changes: 2 additions & 2 deletions lib/openai/models/chat/chat_completion_chunk.rb
@@ -43,7 +43,7 @@ class ChatCompletionChunk < OpenAI::Internal::Type::BaseModel
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
@@ -376,7 +376,7 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
4 changes: 2 additions & 2 deletions lib/openai/models/chat/completion_create_params.rb
@@ -224,7 +224,7 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
@@ -553,7 +553,7 @@ module ResponseFormat
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
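The corrected comment (`requset` → `request`) describes how `service_tier` resolves. A hypothetical helper sketching those documented semantics — the method name and fallback wiring are illustrative, not part of the SDK:

```ruby
# Resolves the effective service tier per the documented rules:
# 'auto' falls back to the Project-configured tier (which, unless otherwise
# configured, is 'default'); 'default', 'flex', and 'priority' are honored
# as requested.
def resolve_service_tier(requested, project_tier: "default")
  case requested
  when "auto" then project_tier
  when "default", "flex", "priority" then requested
  else
    raise ArgumentError, "unknown service_tier: #{requested.inspect}"
  end
end
```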
2 changes: 1 addition & 1 deletion lib/openai/models/function_definition.rb
@@ -34,7 +34,7 @@ class FunctionDefinition < OpenAI::Internal::Type::BaseModel
# set to true, the model will follow the exact schema defined in the `parameters`
# field. Only a subset of JSON Schema is supported when `strict` is `true`. Learn
# more about Structured Outputs in the
# [function calling guide](docs/guides/function-calling).
# [function calling guide](https://platform.openai.com/docs/guides/function-calling).
#
# @return [Boolean, nil]
optional :strict, OpenAI::Internal::Type::Boolean, nil?: true
5 changes: 4 additions & 1 deletion lib/openai/models/image_edit_params.rb
@@ -4,7 +4,7 @@ module OpenAI
module Models
# @see OpenAI::Resources::Images#edit
#
# @see OpenAI::Resources::Images#stream_raw
# @see OpenAI::Resources::Images#edit_stream_raw
class ImageEditParams < OpenAI::Internal::Type::BaseModel
extend OpenAI::Internal::Type::RequestParameters::Converter
include OpenAI::Internal::Type::RequestParameters
@@ -92,6 +92,9 @@ class ImageEditParams < OpenAI::Internal::Type::BaseModel
# responses that return partial images. Value must be between 0 and 3. When set to
# 0, the response will be a single image sent in one streaming event.
#
# Note that the final image may be sent before the full number of partial images
# are generated if the full image is generated more quickly.
#
# @return [Integer, nil]
optional :partial_images, Integer, nil?: true

5 changes: 4 additions & 1 deletion lib/openai/models/image_generate_params.rb
@@ -4,7 +4,7 @@ module OpenAI
module Models
# @see OpenAI::Resources::Images#generate
#
# @see OpenAI::Resources::Images#stream_raw
# @see OpenAI::Resources::Images#generate_stream_raw
class ImageGenerateParams < OpenAI::Internal::Type::BaseModel
extend OpenAI::Internal::Type::RequestParameters::Converter
include OpenAI::Internal::Type::RequestParameters
@@ -71,6 +71,9 @@ class ImageGenerateParams < OpenAI::Internal::Type::BaseModel
# responses that return partial images. Value must be between 0 and 3. When set to
# 0, the response will be a single image sent in one streaming event.
#
# Note that the final image may be sent before the full number of partial images
# are generated if the full image is generated more quickly.
#
# @return [Integer, nil]
optional :partial_images, Integer, nil?: true

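Both image param classes now document that `partial_images` must be between 0 and 3 (and that the final image can arrive before all partials). A small hypothetical validator for that documented range — not an SDK API, just a sketch of the constraint:

```ruby
# Validates the documented 0..3 range for the partial_images parameter.
# nil passes through because the attribute is optional and nilable.
def validate_partial_images(count)
  return count if count.nil?
  unless (0..3).cover?(count)
    raise ArgumentError, "partial_images must be between 0 and 3, got #{count}"
  end
  count
end
```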
7 changes: 2 additions & 5 deletions lib/openai/models/images_response.rb
@@ -140,7 +140,7 @@ class Usage < OpenAI::Internal::Type::BaseModel
required :input_tokens_details, -> { OpenAI::ImagesResponse::Usage::InputTokensDetails }

# @!attribute output_tokens
# The number of image tokens in the output image.
# The number of output tokens generated by the model.
#
# @return [Integer]
required :output_tokens, Integer
@@ -152,16 +152,13 @@ class Usage < OpenAI::Internal::Type::BaseModel
required :total_tokens, Integer

# @!method initialize(input_tokens:, input_tokens_details:, output_tokens:, total_tokens:)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::ImagesResponse::Usage} for more details.
#
# For `gpt-image-1` only, the token usage information for the image generation.
#
# @param input_tokens [Integer] The number of tokens (images and text) in the input prompt.
#
# @param input_tokens_details [OpenAI::Models::ImagesResponse::Usage::InputTokensDetails] The input tokens detailed information for the image generation.
#
# @param output_tokens [Integer] The number of image tokens in the output image.
# @param output_tokens [Integer] The number of output tokens generated by the model.
#
# @param total_tokens [Integer] The total number of tokens (images and text) used for the image generation.

4 changes: 2 additions & 2 deletions lib/openai/models/responses/response.rb
@@ -186,7 +186,7 @@ class Response < OpenAI::Internal::Type::BaseModel
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
@@ -401,7 +401,7 @@ module ToolChoice
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
@@ -34,7 +34,8 @@ class ResponseCodeInterpreterToolCall < OpenAI::Internal::Type::BaseModel
nil?: true

# @!attribute status
# The status of the code interpreter tool call.
# The status of the code interpreter tool call. Valid values are `in_progress`,
# `completed`, `incomplete`, `interpreting`, and `failed`.
#
# @return [Symbol, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Status]
required :status, enum: -> { OpenAI::Responses::ResponseCodeInterpreterToolCall::Status }
@@ -59,7 +60,7 @@ class ResponseCodeInterpreterToolCall < OpenAI::Internal::Type::BaseModel
#
# @param outputs [Array<OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Logs, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Image>, nil] The outputs generated by the code interpreter, such as logs or images.
#
# @param status [Symbol, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Status] The status of the code interpreter tool call.
# @param status [Symbol, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Status] The status of the code interpreter tool call. Valid values are `in_progress`, `c
#
# @param type [Symbol, :code_interpreter_call] The type of the code interpreter tool call. Always `code_interpreter_call`.

@@ -121,7 +122,8 @@ class Image < OpenAI::Internal::Type::BaseModel
# @return [Array(OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Logs, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Image)]
end

# The status of the code interpreter tool call.
# The status of the code interpreter tool call. Valid values are `in_progress`,
# `completed`, `incomplete`, `interpreting`, and `failed`.
#
# @see OpenAI::Models::Responses::ResponseCodeInterpreterToolCall#status
module Status
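The expanded comment enumerates the valid code interpreter tool call statuses. A sketch of distinguishing in-flight from terminal states — the terminal/in-flight split below is an assumption for illustration, not something the SDK states:

```ruby
# All statuses listed in the updated doc comment.
STATUSES = %i[in_progress completed incomplete interpreting failed].freeze
# Assumed split: in_progress and interpreting are in-flight; the rest terminal.
TERMINAL_STATUSES = %i[completed incomplete failed].freeze

# Returns true when the tool call has (by this assumption) finished.
def terminal_status?(status)
  raise ArgumentError, "unknown status: #{status.inspect}" unless STATUSES.include?(status)
  TERMINAL_STATUSES.include?(status)
end
```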
4 changes: 2 additions &amp; 2 deletions lib/openai/models/responses/response_create_params.rb
@@ -138,7 +138,7 @@ class ResponseCreateParams < OpenAI::Internal::Type::BaseModel
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
@@ -328,7 +328,7 @@ module Input
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the requset will be processed with the standard
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
@@ -5,10 +5,11 @@ module Models
module Responses
class ResponseMcpCallArgumentsDeltaEvent < OpenAI::Internal::Type::BaseModel
# @!attribute delta
# The partial update to the arguments for the MCP tool call.
# A JSON string containing the partial update to the arguments for the MCP tool
# call.
#
# @return [Object]
required :delta, OpenAI::Internal::Type::Unknown
# @return [String]
required :delta, String

# @!attribute item_id
# The unique identifier of the MCP tool call item being processed.
@@ -35,10 +36,14 @@ class ResponseMcpCallArgumentsDeltaEvent < OpenAI::Internal::Type::BaseModel
required :type, const: :"response.mcp_call_arguments.delta"

# @!method initialize(delta:, item_id:, output_index:, sequence_number:, type: :"response.mcp_call_arguments.delta")
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent} for more
# details.
#
# Emitted when there is a delta (partial update) to the arguments of an MCP tool
# call.
#
# @param delta [Object] The partial update to the arguments for the MCP tool call.
# @param delta [String] A JSON string containing the partial update to the arguments for the MCP tool ca
#
# @param item_id [String] The unique identifier of the MCP tool call item being processed.
#
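With `delta` now typed as a `String` of partial JSON rather than `Object`, a consumer can concatenate the fragments and parse once the arguments are complete. A hedged sketch — the fragment values are illustrative, and real code would also key the buffer by `item_id`:

```ruby
require "json"

# Joins partial JSON fragments from delta events and parses the result.
def assemble_mcp_arguments(deltas)
  JSON.parse(deltas.join)
end

# Fragments as they might arrive across delta events (illustrative values).
fragments = ['{"city": "Par', 'is", "units": "metric"}']
args = assemble_mcp_arguments(fragments)
```

Under the old `Object` type this accumulation pattern was ambiguous; the `String` type makes the concatenate-then-parse contract explicit.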