release: 0.7.0 #154

Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.6.0"
".": "0.7.0"
}
4 changes: 2 additions & 2 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-0205acb1015d29b2312a48526734c0399f93026d4fe2dff5c7768f566e333fd2.yml
openapi_spec_hash: 1772cc9056c2f6dfb2a4e9cb77ee6343
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-4865dda2b62927bd141cbc85f81be3d88602f103e2c581e15eb1caded3e3aaa2.yml
openapi_spec_hash: 7d14a9b23ef4ac93ea46d629601b6f6b
config_hash: ed1e6b3c5f93d12b80d31167f55c557c
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# Changelog

## 0.7.0 (2025-06-09)

Full Changelog: [v0.6.0...v0.7.0](https://github.com/openai/openai-ruby/compare/v0.6.0...v0.7.0)

### Features

* **api:** Add tools and structured outputs to evals ([6ee3392](https://github.com/openai/openai-ruby/commit/6ee33924e9146e2450e9c43d052886ed3214cbde))


### Bug Fixes

* default content-type for text in multi-part formdata uploads should be text/plain ([105cf47](https://github.com/openai/openai-ruby/commit/105cf4717993c744ee6c453d2a99ae03f51035d4))
* tool parameter mapping for chat completions ([#156](https://github.com/openai/openai-ruby/issues/156)) ([5999b9f](https://github.com/openai/openai-ruby/commit/5999b9f6ad6dc73a290a8ef7b1b52bd89897039c))
* tool parameter mapping for responses ([#704](https://github.com/openai/openai-ruby/issues/704)) ([ac8bf11](https://github.com/openai/openai-ruby/commit/ac8bf11cf59fcc778f1658429a1fc06eaca79bba))

## 0.6.0 (2025-06-03)

Full Changelog: [v0.5.1...v0.6.0](https://github.com/openai/openai-ruby/compare/v0.5.1...v0.6.0)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.6.0)
openai (0.7.0)
connection_pool

GEM
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.6.0"
gem "openai", "~> 0.7.0"
```

<!-- x-release-please-end -->
17 changes: 8 additions & 9 deletions lib/openai/internal/util.rb
@@ -497,7 +497,7 @@ class << self
# @param closing [Array<Proc>]
# @param content_type [String, nil]
private def write_multipart_content(y, val:, closing:, content_type: nil)
content_type ||= "application/octet-stream"
content_line = "Content-Type: %s\r\n\r\n"

case val
in OpenAI::FilePart
@@ -508,24 +508,21 @@ class << self
content_type: val.content_type
)
in Pathname
y << "Content-Type: #{content_type}\r\n\r\n"
y << format(content_line, content_type || "application/octet-stream")
io = val.open(binmode: true)
closing << io.method(:close)
IO.copy_stream(io, y)
in IO
y << "Content-Type: #{content_type}\r\n\r\n"
y << format(content_line, content_type || "application/octet-stream")
IO.copy_stream(val, y)
in StringIO
y << "Content-Type: #{content_type}\r\n\r\n"
y << format(content_line, content_type || "application/octet-stream")
y << val.string
in String
y << "Content-Type: #{content_type}\r\n\r\n"
y << val.to_s
in -> { primitive?(_1) }
y << "Content-Type: text/plain\r\n\r\n"
y << format(content_line, content_type || "text/plain")
y << val.to_s
else
y << "Content-Type: application/json\r\n\r\n"
y << format(content_line, content_type || "application/json")
y << JSON.generate(val)
end
y << "\r\n"
@@ -563,6 +560,8 @@ class << self

# @api private
#
# https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.1.md#special-considerations-for-multipart-content
#
# @param body [Object]
#
# @return [Array(String, Enumerable<String>)]
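The rewritten `write_multipart_content` above centralizes the header into a `content_line` format string and moves the defaulting per value type: file-like values fall back to `application/octet-stream`, primitives (including plain strings) now fall back to `text/plain` — the content-type bug fix from the changelog — and anything else is JSON-encoded. A standalone sketch of just that selection logic, not the gem's actual helper:

```ruby
require "pathname"
require "stringio"

# Illustrative sketch of the content-type defaulting shown in the diff above.
# This is not the gem's real API -- only the branch logic, for reading along.
def default_content_type(val, content_type: nil)
  case val
  when Pathname, IO, StringIO
    # File-like values fall back to a binary content type.
    content_type || "application/octet-stream"
  when String, Numeric, TrueClass, FalseClass, NilClass
    # Primitives, including plain strings, now default to text/plain.
    content_type || "text/plain"
  else
    # Everything else is serialized as JSON.
    content_type || "application/json"
  end
end
```

An explicit `content_type` (for example from an `OpenAI::FilePart`) still wins over every default, which is why each branch only fills in a fallback.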
lib/openai/models/evals/create_eval_completions_run_data_source.rb
@@ -432,6 +432,24 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# @return [Integer, nil]
optional :max_completion_tokens, Integer

# @!attribute response_format
# An object specifying the format that the model must output.
#
# Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
# Outputs which ensures the model will match your supplied JSON schema. Learn more
# in the
# [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
#
# Setting to `{ "type": "json_object" }` enables the older JSON mode, which
# ensures the message the model generates is valid JSON. Using `json_schema` is
# preferred for models that support it.
#
# @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject, nil]
optional :response_format,
union: -> {
OpenAI::Evals::CreateEvalCompletionsRunDataSource::SamplingParams::ResponseFormat
}

# @!attribute seed
# A seed value to initialize the randomness during sampling.
#
@@ -444,20 +462,68 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# @return [Float, nil]
optional :temperature, Float

# @!attribute tools
# A list of tools the model may call. Currently, only functions are supported as a
# tool. Use this to provide a list of functions the model may generate JSON inputs
# for. A max of 128 functions are supported.
#
# @return [Array<OpenAI::Models::Chat::ChatCompletionTool>, nil]
optional :tools, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionTool] }

# @!attribute top_p
# An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
#
# @return [Float, nil]
optional :top_p, Float

# @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, top_p: nil)
# @!method initialize(max_completion_tokens: nil, response_format: nil, seed: nil, temperature: nil, tools: nil, top_p: nil)
# Some parameter documentation has been truncated; see
# {OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams} for
# more details.
#
# @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
#
# @param response_format [OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
#
# @param seed [Integer] A seed value to initialize the randomness during sampling.
#
# @param temperature [Float] A higher temperature increases randomness in the outputs.
#
# @param tools [Array<OpenAI::Models::Chat::ChatCompletionTool>] A list of tools the model may call. Currently, only functions are supported as a
#
# @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

# An object specifying the format that the model must output.
#
# Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
# Outputs which ensures the model will match your supplied JSON schema. Learn more
# in the
# [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
#
# Setting to `{ "type": "json_object" }` enables the older JSON mode, which
# ensures the message the model generates is valid JSON. Using `json_schema` is
# preferred for models that support it.
#
# @see OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams#response_format
module ResponseFormat
extend OpenAI::Internal::Type::Union

# Default response format. Used to generate text responses.
variant -> { OpenAI::ResponseFormatText }

# JSON Schema response format. Used to generate structured JSON responses.
# Learn more about [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs).
variant -> { OpenAI::ResponseFormatJSONSchema }

# JSON object response format. An older method of generating JSON responses.
# Using `json_schema` is recommended for models that support it. Note that the
# model will not generate JSON without a system or user message instructing it
# to do so.
variant -> { OpenAI::ResponseFormatJSONObject }

# @!method self.variants
# @return [Array(OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject)]
end
end
end
end
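Taken together, the new `response_format` and `tools` attributes let a completions-based eval data source request structured output and offer function tools in its sampling params. A hypothetical request-body sketch as plain data — the schema name, tool name, and schema contents are illustrative, not taken from this PR:

```ruby
# Illustrative sampling_params payload for a completions eval run.
# "grade_schema" and "lookup_reference" are hypothetical names.
sampling_params = {
  max_completion_tokens: 256,
  temperature: 0.2,
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "grade_schema",
      schema: {
        type: "object",
        properties: { grade: { type: "string" } },
        required: ["grade"],
        additionalProperties: false
      }
    }
  },
  tools: [
    {
      type: "function",
      function: {
        name: "lookup_reference",
        description: "Hypothetical helper the model may call during grading.",
        parameters: {
          type: "object",
          properties: { query: { type: "string" } }
        }
      }
    }
  ]
}
```

The union defined above means `response_format` may equally be `{ type: "text" }` or `{ type: "json_object" }`; `json_schema` is shown because the surrounding docs recommend it for models that support it.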
78 changes: 77 additions & 1 deletion lib/openai/models/evals/run_cancel_response.rb
@@ -616,20 +616,96 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# @return [Float, nil]
optional :temperature, Float

# @!attribute text
# Configuration options for a text response from the model. Can be plain text or
# structured JSON data. Learn more:
#
# - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
# - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
#
# @return [OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text, nil]
optional :text,
-> { OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text }

# @!attribute tools
# An array of tools the model may call while generating a response. You can
# specify which tool to use by setting the `tool_choice` parameter.
#
# The two categories of tools you can provide the model are:
#
# - **Built-in tools**: Tools that are provided by OpenAI that extend the model's
# capabilities, like
# [web search](https://platform.openai.com/docs/guides/tools-web-search) or
# [file search](https://platform.openai.com/docs/guides/tools-file-search).
# Learn more about
# [built-in tools](https://platform.openai.com/docs/guides/tools).
# - **Function calls (custom tools)**: Functions that are defined by you, enabling
# the model to call your own code. Learn more about
# [function calling](https://platform.openai.com/docs/guides/function-calling).
#
# @return [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>, nil]
optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }

# @!attribute top_p
# An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
#
# @return [Float, nil]
optional :top_p, Float

# @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, top_p: nil)
# @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil)
# Some parameter documentation has been truncated; see
# {OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams}
# for more details.
#
# @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
#
# @param seed [Integer] A seed value to initialize the randomness during sampling.
#
# @param temperature [Float] A higher temperature increases randomness in the outputs.
#
# @param text [OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
#
# @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
#
# @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

# @see OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams#text
class Text < OpenAI::Internal::Type::BaseModel
# @!attribute format_
# An object specifying the format that the model must output.
#
# Configuring `{ "type": "json_schema" }` enables Structured Outputs, which
# ensures the model will match your supplied JSON schema. Learn more in the
# [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
#
# The default format is `{ "type": "text" }` with no additional options.
#
# **Not recommended for gpt-4o and newer models:**
#
# Setting to `{ "type": "json_object" }` enables the older JSON mode, which
# ensures the message the model generates is valid JSON. Using `json_schema` is
# preferred for models that support it.
#
# @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject, nil]
optional :format_,
union: -> {
OpenAI::Responses::ResponseFormatTextConfig
},
api_name: :format

# @!method initialize(format_: nil)
# Some parameter documentation has been truncated; see
# {OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text}
# for more details.
#
# Configuration options for a text response from the model. Can be plain text or
# structured JSON data. Learn more:
#
# - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
# - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
#
# @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
end
end
end
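In the Responses-based data source, the same capability arrives through the nested `Text` model (`text.format`, serialized under the wire name `format` per the `api_name: :format` mapping) plus the Responses tool union. A hedged sketch of such sampling params as plain data — the schema name and tool are illustrative, and the flattened `format` shape (schema fields beside `type` rather than nested under `json_schema`) follows the Responses-style text format config:

```ruby
# Illustrative sampling_params payload for a Responses eval run.
# "summary" and "lookup_reference" are hypothetical names.
sampling_params = {
  temperature: 0.0,
  text: {
    format: {
      type: "json_schema",
      name: "summary",
      schema: {
        type: "object",
        properties: { title: { type: "string" } },
        required: ["title"],
        additionalProperties: false
      }
    }
  },
  # The tools union also admits built-in tools (web search, file search,
  # code interpreter, ...); a function tool is shown for symmetry with the
  # completions example.
  tools: [
    {
      type: "function",
      name: "lookup_reference",
      parameters: {
        type: "object",
        properties: { query: { type: "string" } }
      }
    }
  ]
}
```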
