This is a huge patch which does some major refactoring:

- renaming "Image Generation" to "Image Creation" in most places,
  to better match its new command (`!bai image create`)
- relocating the image creation command (`!bai image` -> `!bai image create`),
  so that it doesn't conflict with the new image editing command (`!bai image edit`)
- introducing a new image editing command (`!bai image edit`), which
  is meant to work only with the OpenAI provider, but doesn't fully work yet
  due to 64bit/async-openai#364; a follow-up patch will fix it
- adding support for reading images from Matrix conversations and forwarding them to
  text conversations. This works for OpenAI, but not for Anthropic yet
  (requires custom patches) and not for OpenAI-Compat (no support for
  images there)
- relocating some utils (base64, mime)
Related to:
- CreateImageEditRequest forces an application/octet-stream content type
for images (64bit/async-openai#364)
- CreateImageEditRequest only deals with a single image
(64bit/async-openai#363)
This is a continuation of 8f86289
and fixes the Image Editing feature for OpenAI.
I'm trying to use the Create image edit API with the `gpt-image-1` model via `client.images().create_edit()` with a `CreateImageEditRequest`. #363 is a blocker for multi-image use-cases, but even single images fail.

My code is something like this:
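The original snippet is not preserved in this extract. A minimal sketch of such a call, with a placeholder file name and prompt, and with the model selection left as a comment because the exact `ImageModel` value to pass depends on the async-openai version in use:

```rust
use async_openai::{types::CreateImageEditRequestArgs, Client};

// Hypothetical reconstruction, not the issue's original code.
async fn edit_image() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(); // reads OPENAI_API_KEY from the environment

    let request = CreateImageEditRequestArgs::default()
        // A PNG on disk; in-memory bytes end up in the same multipart
        // code path, via create_file_part(), when the form is built.
        .image("./input.png")
        .prompt("Turn the sky into a sunset")
        // .model(...) -- targeting "gpt-image-1"; the exact ImageModel value
        // to pass here depends on the crate version in use.
        .build()?;

    let response = client.images().create_edit(request).await?;
    println!("received {} image(s)", response.data.len());
    Ok(())
}
```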
This fails with:

The payload that's sent to `/v1/images/edits` looks like this:

It seems like the relevant code is:
- `async-openai/src/types/impls.rs`, lines 892 to 900 in `aeb6d1f`
- `async-openai/src/util.rs`, lines 28 to 61 in `aeb6d1f`
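The tail of `create_file_part()` in that `util.rs` range is where the content type gets set:

```rust
    let file_part = reqwest::multipart::Part::stream(stream)
        .file_name(file_name)
        .mime_str("application/octet-stream")
        .unwrap();

    Ok(file_part)
}
```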
It appears that `create_file_part()` hardcodes a MIME type of `application/octet-stream`. While this is probably OK for older models, where some auto-detection may have been happening, the `gpt-image-1` model appears to be picky.

I'm not sure what the best way to solve this is, but one possibility may be to add an additional field to each `InputSource` variant; maybe something like the sketch below, which could then be used in `create_file_part()`.
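A purely illustrative sketch of that idea (not the author's original proposal, which is not preserved in this extract; the `mime_type` field name is made up):

```rust
use std::path::PathBuf;
use bytes::Bytes;

// Each variant carries an optional MIME type that callers can provide when
// they already know it (e.g. "image/png").
pub enum InputSource {
    Path { path: PathBuf, mime_type: Option<String> },
    Bytes { filename: String, bytes: Bytes, mime_type: Option<String> },
    VecU8 { filename: String, vec: Vec<u8>, mime_type: Option<String> },
}
```

`create_file_part()` could then fall back to `application/octet-stream` only when no MIME type was given:

```rust
// Inside create_file_part(), after destructuring `mime_type` out of the
// InputSource along with the stream and file name:
let file_part = reqwest::multipart::Part::stream(stream)
    .file_name(file_name)
    .mime_str(mime_type.as_deref().unwrap_or("application/octet-stream"))
    .unwrap();
```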