Support Multi/InfiniteTalk #10179
Open
kijai wants to merge 17 commits into comfyanonymous:master from kijai:multitalk
+678 −4
Commits (17)
efe83f5  re-init (kijai)
460ce7f  Update model_multitalk.py (kijai)
6f6db12  whitespace... (kijai)
00c069d  Update model_multitalk.py (kijai)
57567bd  remove print (kijai)
9c5022e  this is redundant (kijai)
d0dce6b  Merge remote-tracking branch 'upstream/master' into multitalk (kijai)
7842a5c  remove import (kijai)
99dc959  Merge remote-tracking branch 'upstream/master' into multitalk (kijai)
4cbc1a6  Merge remote-tracking branch 'upstream/master' into multitalk (kijai)
f5d53f2  Restore preview functionality (kijai)
897ffeb  Merge remote-tracking branch 'upstream/master' into multitalk (kijai)
25063f2  Merge remote-tracking branch 'upstream/master' into multitalk (kijai)
6bfce54  Move block_idx to transformer_options (kijai)
8d62661  Remove LoopingSamplerCustomAdvanced (kijai)
d53e629  Remove looping functionality, keep extension functionality (kijai)
3ae78a4  Update model_multitalk.py (kijai)
Conversations
There is some uncertainty about whether returning this will in general increase the memory peak of WAN within native ComfyUI. Instead, comfy suggests adding a patch that replaces the `x = optimized_attention(...)` call on line 81 by reusing the `ModelPatcher.set_model_attn1_replace` functionality (in unet, attn1 is self-attention, attn2 is cross-attention), which can then do the `optimized_attention` call plus the partial attention that happens inside the `cross_attn` patch. To get the q and k for the `cross_attn` patch, you can store the q and k values in `transformer_options` instead and then pop them out after usage. The `transformer_index` can stay None (not given) since that was something unique to unet models.

It would probably be more optimal to not call `optimized_attention` at all and just reuse the logic of the slower partial attention here, but comfy said he would be fine if you didn't go that far and just kept both within that attention replacement function.
This one I'm a bit unsure of; is there an example of such a patch?
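For illustration only, here is a minimal sketch of what such a patch could look like. It assumes the WAN self-attention gets wired to look up an attn1 replace patch and call it as `patch(q, k, v, extra_options)` with post-projection tensors, the way the unet blocks do, and that `extra_options` persists to the `cross_attn` patch; the key `multitalk_qk`, the block name `double_block`, and both helper names are placeholders for this sketch, not the PR's actual code.

```python
# Minimal sketch under the assumptions stated above; not the PR's implementation.
from comfy.ldm.modules.attention import optimized_attention


def multitalk_attn1_replace(q, k, v, extra_options):
    heads = extra_options["n_heads"]

    # Same result as the original x = optimized_attention(...) call on line 81.
    out = optimized_attention(q, k, v, heads)

    # Instead of returning q/k from the block (which could raise WAN's memory
    # peak), stash them for the cross_attn patch to pop out after usage.
    # "multitalk_qk" is a hypothetical key used only in this sketch.
    extra_options["multitalk_qk"] = (q, k)
    return out


def multitalk_cross_attn_patch(out, extra_options):
    # Hypothetical cross_attn-side hook; the real patch's signature may differ.
    # Pop the stored values so they are not kept alive between blocks.
    q, k = extra_options.pop("multitalk_qk")
    # ... the partial-attention step over q/k would run here ...
    return out


def register_multitalk_attn(model_patcher, num_blocks):
    # transformer_index stays None (the default); it was specific to unet models.
    for block_idx in range(num_blocks):
        model_patcher.set_model_attn1_replace(
            multitalk_attn1_replace, "double_block", block_idx
        )
```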