[modular] add Modular flux for text-to-image #11995
Conversation
```diff
@@ -11,12 +11,14 @@
 @dataclass
 class FluxPipelineOutput(BaseOutput):
     """
-    Output class for Stable Diffusion pipelines.
+    Output class for Flux image generation pipelines.
```
Hope this change is okay.
```python
    return mu


def _pack_latents(latents, batch_size, num_channels_latents, height, width):
```
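For context, `_pack_latents` folds each 2×2 spatial patch of the latent grid into the channel dimension. Here is a minimal shape check, restated in NumPy so it runs without torch or model weights (the helper name `pack_latents_np` and the toy sizes are mine, not from the PR; `view`/`permute` become `reshape`/`transpose`):

```python
import numpy as np

# Hypothetical NumPy restatement of the torch helper, for illustration only.
# The logic is pure index shuffling, so the shapes carry over one-to-one.
def pack_latents_np(latents, batch_size, num_channels_latents, height, width):
    # Split H and W into (H//2, 2) and (W//2, 2) patch grids.
    latents = latents.reshape(batch_size, num_channels_latents, height // 2, 2, width // 2, 2)
    # Bring the patch-grid axes forward, keep the 2x2 cells with the channels.
    latents = latents.transpose(0, 2, 4, 1, 3, 5)
    # Flatten: one token per 2x2 patch, channels expanded 4x.
    return latents.reshape(batch_size, (height // 2) * (width // 2), num_channels_latents * 4)

# (B, C, H, W) = (1, 16, 4, 4) -> (B, (H//2) * (W//2), C * 4) = (1, 4, 64)
latents = np.arange(1 * 16 * 4 * 4, dtype=np.float32).reshape(1, 16, 4, 4)
packed = pack_latents_np(latents, 1, 16, 4, 4)
```

The packed tensor has one "token" per 2×2 latent patch, which is the sequence layout Flux's transformer consumes.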
I didn't use `# Copied from ...` here because `make fix-copies` enforces a weird indentation for this, which then trips the repo consistency check.
So, say you have the following as a standalone function in a module:
```python
# Copied from diffusers.pipelines.flux.pipeline_flux.FluxPipeline._pack_latents
def _pack_latents(latents, batch_size, num_channels_latents, height, width):
    latents = latents.view(batch_size, num_channels_latents, height // 2, 2, width // 2, 2)
    latents = latents.permute(0, 2, 4, 1, 3, 5)
    latents = latents.reshape(batch_size, (height // 2) * (width // 2), num_channels_latents * 4)
    return latents
```
The moment you run `make fix-copies` after this, you will have the following diff:
```diff
+# Copied from diffusers.pipelines.flux.pipeline_flux.FluxPipeline._pack_latents
 def _pack_latents(latents, batch_size, num_channels_latents, height, width):
-    latents = latents.view(batch_size, num_channels_latents, height // 2, 2, width // 2, 2)
+        latents = latents.view(batch_size, num_channels_latents, height // 2, 2, width // 2, 2)
+        latents = latents.permute(0, 2, 4, 1, 3, 5)
+        latents = latents.reshape(batch_size, (height // 2) * (width // 2), num_channels_latents * 4)
+
+        return latents
     latents = latents.permute(0, 2, 4, 1, 3, 5)
     latents = latents.reshape(batch_size, (height // 2) * (width // 2), num_channels_latents * 4)
```
One can notice the messed-up indentation. We should fix this in a separate PR. Cc: @DN6
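To make the mechanism concrete: a "Copied from" consistency check only needs to compare the marked function against its reference up to indentation, which is exactly where the tool trips here. Below is a toy sketch of that comparison (not the real `utils/check_copies.py`; names and snippets are illustrative):

```python
import textwrap

def bodies_match(reference: str, candidate: str) -> bool:
    """Toy 'Copied from' check: compare two snippets after stripping their
    common leading indentation, so only real code differences (not nesting
    depth, e.g. module-level vs. class-level) count as drift."""
    def norm(snippet: str) -> str:
        return textwrap.dedent(snippet).strip()
    return norm(reference) == norm(candidate)

reference = """
def _pack_latents(latents):
    return latents
"""

# Same code, but indented as if it lived inside a class body.
candidate = """
    def _pack_latents(latents):
        return latents
"""
```

With that normalization, the re-indented copy above still counts as identical, while any edit to the body would flag drift.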
Nice, actually. I think we should move a lot more methods out of the pipelines and into standalone functions. `# Copied from` does not work well for people who aren't maintainers; with the modular system, all the methods are refactored to not depend on state anyway.
Indeed. Could be cool to consider in the set of refactors @DN6 is doing 👀
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks @sayakpaul! Can you manually create a modular repo for flux too? (see #11913 (comment))
```python
            raise ValueError(f"`prompt` or `prompt_2` has to be of type `str` or `list` but is {type(prompt)}")

    @staticmethod
    def _get_t5_prompt_embeds(
```
I think we can turn these two methods into functions and use them across different models: flux/ltx/sd3 .... I will put up a prototype in one of my PRs, just FYI here.
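The refactor being floated could look roughly like this: the staticmethod becomes a module-level function whose dependencies (tokenizer, text encoder) are passed in explicitly, so flux/ltx/sd3 pipelines can all call it. Everything below is a hypothetical sketch with toy stand-ins, not the actual diffusers API:

```python
# Hypothetical refactor: a state-free prompt-embedding helper. All
# dependencies are arguments, so no `self` and no pipeline attributes.
def get_t5_prompt_embeds(tokenizer, text_encoder, prompt, max_sequence_length=512):
    if isinstance(prompt, str):
        prompt = [prompt]
    token_ids = tokenizer(prompt, max_length=max_sequence_length)
    return text_encoder(token_ids)

# Toy stand-ins so the sketch runs without model weights.
class ToyTokenizer:
    def __call__(self, prompts, max_length):
        # "Tokenize" each character to its code point, capped at max_length.
        return [[min(ord(c), max_length) for c in p] for p in prompts]

class ToyEncoder:
    def __call__(self, token_ids):
        # "Embed" each token id as a float.
        return [[float(t) for t in ids] for ids in token_ids]

embeds = get_t5_prompt_embeds(ToyTokenizer(), ToyEncoder(), "hi")
```

Because the helper holds no state, sharing it between pipelines is just an import, which is the point of the modular system.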
Indeed. Would be very curious to learn more.
@yiyixuxu here is the repo: https://huggingface.co/diffusers-internal-dev/modular-flux.1-dev/. Do we have to manually populate it? I will merge this PR once the above point is clarified.
Yes, manually.
Alright. I manually populated it.
Failing tests are unrelated.
What does this PR do?
Plan to add the other tasks in a follow-up! I hope that's okay. Code to test this PR:
Also, I have decided to not implement any guidance in this PR as the original Flux pipeline doesn't have any guidance. LMK if that is okay.