# Some parameter documentation has been truncated; see
# {OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams} for
# more details.
#
# @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
#
# @param response_format [OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
#
# @param seed [Integer] A seed value to initialize the randomness, during sampling.
#
# @param temperature [Float] A higher temperature increases randomness in the outputs.
#
# @param tools [Array<OpenAI::Models::Chat::ChatCompletionTool>] A list of tools the model may call. Currently, only functions are supported as a
#
# @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
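A minimal sketch of the completions-style sampling parameters documented above, expressed as the plain hash they would serialize to. The keys mirror the `@param` list; the specific values and the `response_format` shape shown here are illustrative assumptions, not library defaults.

```ruby
# Illustrative sampling-params payload for a completions run data source.
# Values are examples, not defaults; response_format is one of the
# ResponseFormat* variants (JSON object mode shown here as an assumption).
sampling_params = {
  max_completion_tokens: 256,               # cap on generated output tokens
  seed: 42,                                 # fixes sampling randomness for reproducibility
  temperature: 0.7,                         # higher values increase output randomness
  top_p: 1.0,                               # nucleus sampling; 1.0 includes all tokens
  response_format: { type: "json_object" }  # format the model must output
}
```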
# An object specifying the format that the model must output.
# @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
#
# @param seed [Integer] A seed value to initialize the randomness, during sampling.
#
# @param temperature [Float] A higher temperature increases randomness in the outputs.
#
# @param text [OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
#
# @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
#
# @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
# @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
# @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
#
# @param seed [Integer] A seed value to initialize the randomness, during sampling.
#
# @param temperature [Float] A higher temperature increases randomness in the outputs.
#
# @param text [OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
#
# @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
#
# @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
# @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
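The responses-style sampling parameters above can likewise be sketched as the plain hash they would serialize to. This is a hypothetical payload under stated assumptions: the `text.format` shape and the function-tool entry (including the `lookup_order` name) are illustrative, not values confirmed by the source.

```ruby
# Illustrative sampling-params payload for a responses run data source.
# The text/tools shapes below are assumptions for demonstration only.
sampling_params = {
  max_completion_tokens: 512,  # cap on generated output tokens
  seed: 7,                     # fixes sampling randomness for reproducibility
  temperature: 0.2,            # higher values increase output randomness
  top_p: 1.0,                  # nucleus sampling; 1.0 includes all tokens
  # Configuration for a text response; format_ maps to one of the
  # ResponseFormat* variants (plain text shown here).
  text: { format: { type: "text" } },
  # One entry per tool the model may call; a function tool is one variant.
  tools: [
    {
      type: "function",
      name: "lookup_order",  # hypothetical function name
      parameters: { type: "object", properties: {} }
    }
  ]
}
```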