* updates to doc comments and types
* deprecated
* update ChatCompletionFunctions to FunctionObject
* More type updates
* add logprobs field
* update from spec
* updated spec
* fixes suggested by cargo clippy
- /// Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit.
+ /// Upload a file that can be used across various endpoints. The size of all the files uploaded by one organization can be up to 100 GB.
+ ///
+ /// The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants. See the [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to learn more about the types of files supported. The Fine-tuning API only supports `.jsonl` files.
+ ///
+ /// Please [contact us](https://help.openai.com/) if you need to increase these storage limits.
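The limits in the new doc comment can be sketched as a small pre-flight check. The constants and helper below are illustrative only, derived from the numbers in the comment above; they are not exported by the crate:

```rust
// Illustrative constants taken from the doc comment (not part of the crate API):
const MAX_FILE_BYTES: u64 = 512 * 1024 * 1024;        // 512 MB per individual file
const MAX_ORG_BYTES: u64 = 100 * 1024 * 1024 * 1024;  // 100 GB per organization

// Hypothetical helper: would an upload of `file_bytes` fit, given the bytes
// the organization has already used?
fn fits_limits(file_bytes: u64, org_bytes_used: u64) -> bool {
    file_bytes <= MAX_FILE_BYTES && org_bytes_used + file_bytes <= MAX_ORG_BYTES
}

fn main() {
    assert!(fits_limits(10 * 1024 * 1024, 0));    // a 10 MB file is fine
    assert!(!fits_limits(600 * 1024 * 1024, 0));  // 600 MB exceeds the per-file cap
    println!("limit checks ok");
}
```

Note the per-file token limit (2 million tokens for Assistants) cannot be checked from byte size alone.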
/// A list of [File](https://platform.openai.com/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached to the list but does not show up in the list, it will be deleted from the assistant.
  /// One of the available [TTS models](https://platform.openai.com/docs/models/tts): `tts-1` or `tts-1-hd`
  pub model: SpeechModel,

- /// The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
+ /// The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are available in the [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).
  pub voice: Voice,

  /// The format to return audio in. Supported formats are mp3, opus, aac, and flac.
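Taken together, these fields describe a create-speech request. A minimal request-body sketch follows; the `input` field and the exact body shape come from the public API docs rather than this diff, so treat them as assumptions:

```json
{
  "model": "tts-1",
  "voice": "alloy",
  "input": "The quick brown fox jumped over the lazy dog.",
  "response_format": "mp3"
}
```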
async-openai/src/types/chat.rs: 74 additions & 18 deletions
@@ -103,7 +103,7 @@ pub struct CompletionUsage {
  #[builder(build_fn(error = "OpenAIError"))]
  pub struct ChatCompletionRequestSystemMessage {
      /// The contents of the system message.
-     pub content: Option<String>,
+     pub content: String,
      /// The role of the messages author, in this case `system`.
      #[builder(default = "Role::System")]
      pub role: Role,
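The practical effect of this hunk is that `content` is now required rather than optional. The sketch below uses a local, self-contained mirror of the struct to show what changes for callers; it is not the crate's actual type or builder API:

```rust
// A minimal local mirror of the changed type (illustrative, not the crate itself).
// Before this commit, `content` was Option<String>; callers wrote Some("...".into()).
struct ChatCompletionRequestSystemMessage {
    content: String,       // now required: pass the String directly
    role: &'static str,    // the real type uses a `Role` enum defaulting to Role::System
}

fn main() {
    let msg = ChatCompletionRequestSystemMessage {
        content: "You are a helpful assistant.".to_string(),
        role: "system",
    };
    assert_eq!(msg.content, "You are a helpful assistant.");
    println!("system message: {}", msg.content);
}
```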
@@ -142,7 +142,7 @@ pub enum ImageUrlDetail {
  pub struct ImageUrl {
      /// Either a URL of the image or the base64 encoded image data.
      pub url: String,
-     /// Specifies the detail level of the image.
+     /// Specifies the detail level of the image. Learn more in the [Vision guide](https://platform.openai.com/docs/guides/vision/low-or-high-fidelity-image-understanding).
  /// The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
  pub name: String,
  /// A description of what the function does, used by the model to choose when and how to call the function.
  #[serde(skip_serializing_if = "Option::is_none")]
  pub description: Option<String>,
- /// The parameters the functions accepts, described as a JSON Schema object.
- /// See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling) for examples,
- /// and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for
- /// documentation about the format.
+ /// The parameters the functions accepts, described as a JSON Schema object. See the [guide](https://platform.openai.com/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
  ///
- /// To describe a function that accepts no parameters, provide the
- /// value `{\"type\": \"object\", \"properties\": {}}`.
+ /// Omitting `parameters` defines a function with an empty parameter list.
+ /// The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
+ pub name: String,
+ /// A description of what the function does, used by the model to choose when and how to call the function.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub description: Option<String>,
+ /// The parameters the functions accepts, described as a JSON Schema object. See the [guide](https://platform.openai.com/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
+ ///
+ /// Omitting `parameters` defines a function with an empty parameter list.
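As a concrete illustration of the `parameters` field, here is a small JSON Schema object for a hypothetical `get_current_weather` function; the function name and the `location` property are invented for illustration, not taken from this diff:

```json
{
  "type": "object",
  "properties": {
    "location": {
      "type": "string",
      "description": "The city and state, e.g. San Francisco, CA"
    }
  },
  "required": ["location"]
}
```

Per the doc comment above, leaving `parameters` out entirely now declares a function with an empty parameter list, replacing the old `{"type": "object", "properties": {}}` idiom.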
- /// The maximum number of [tokens](https://platform.openai.com/tokenizer) to generate in the chat completion.
+ /// Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. This option is currently not available on the `gpt-4-vision-preview` model.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub logprobs: Option<bool>,
+
+ /// An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used.
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub top_logprobs: Option<u8>,
+
+ /// The maximum number of [tokens](https://platform.openai.com/tokenizer) that can be generated in the chat completion.
  ///
- /// The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
+ /// The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
  #[serde(skip_serializing_if = "Option::is_none")]
  pub max_tokens: Option<u16>,

- /// How many chat completion choices to generate for each input message.
+ /// How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs.
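The new fields come with two documented constraints: `top_logprobs` must be between 0 and 5, and it requires `logprobs` to be `true`. A small validation sketch mirroring those constraints follows; the helper is illustrative and not part of the crate:

```rust
// Hypothetical helper mirroring the documented constraints on the new fields.
// Note the u8 type already rules out negative values; only the upper bound
// and the logprobs dependency need checking.
fn validate_logprob_options(logprobs: Option<bool>, top_logprobs: Option<u8>) -> Result<(), String> {
    if let Some(n) = top_logprobs {
        if n > 5 {
            return Err(format!("top_logprobs must be between 0 and 5, got {}", n));
        }
        if logprobs != Some(true) {
            return Err("`logprobs` must be set to `true` when `top_logprobs` is used".to_string());
        }
    }
    Ok(())
}

fn main() {
    assert!(validate_logprob_options(Some(true), Some(3)).is_ok());
    assert!(validate_logprob_options(None, Some(3)).is_err());       // logprobs not enabled
    assert!(validate_logprob_options(Some(true), Some(6)).is_err()); // out of range
    println!("all checks passed");
}
```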
- /// An object specifying the format that the model must output.
+ /// An object specifying the format that the model must output. Compatible with `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`.
  ///
  /// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
  ///
- /// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in increased latency and appearance of a "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
+ /// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
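Per the doc comment, JSON mode needs both the `response_format` setting and an explicit instruction in a message. A request-body sketch putting the two together; the message wording is illustrative:

```json
{
  "model": "gpt-3.5-turbo-1106",
  "response_format": { "type": "json_object" },
  "messages": [
    { "role": "system", "content": "You are a helpful assistant. Always respond with a JSON object." },
    { "role": "user", "content": "List three primary colors." }
  ]
}
```

Setting `response_format` without the instruction in a message is the failure mode the doc comment warns about.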
+ /// A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
+ pub bytes: Option<Vec<u8>>,
+ /// List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned.
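The reason `bytes` is a per-token byte list is that a single character can span multiple tokens, so only the concatenation of their bytes is valid UTF-8. A self-contained sketch of the recombination (the helper name is ours, not the crate's):

```rust
// Hypothetical helper: concatenate per-token byte lists and decode as UTF-8.
// Returns None if the combined bytes are not valid UTF-8.
fn combine_token_bytes(token_bytes: &[Vec<u8>]) -> Option<String> {
    let combined: Vec<u8> = token_bytes.iter().flatten().copied().collect();
    String::from_utf8(combined).ok()
}

fn main() {
    // Neither [0xC3] nor [0xA9] decodes on its own, but together they are "é".
    let tokens = vec![vec![0xC3], vec![0xA9]];
    assert_eq!(combine_token_bytes(&tokens).as_deref(), Some("é"));
    println!("{:?}", combine_token_bytes(&tokens));
}
```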