Before chunking, the sequence length was 57826 (226 text + 57600 image tokens).
After xFuserLongContextAttention() and get_sp_group().all_gather(), the sequence length becomes 57832 (226 text + 57606 image tokens).
yinian-lw changed the title (Mar 6, 2025) to: chunk the sequence into multi distributed gpus, after get_sp_group().all_gather() return different length sequence
USP is used in my experiment: ulysses_size=8, CogVideoX model.
Printing the query and its shape shows the extra tokens.
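A likely explanation for the mismatch (an assumption, not confirmed in this thread) is padding: 57826 is not divisible by ulysses_size=8, so the sequence would have to be padded so that each rank gets an equal-sized chunk, and all_gather then returns the padded length. The next multiple of 8 above 57826 is exactly 57832, which matches the 6 extra image tokens observed. A minimal sketch of that arithmetic (the function name `pad_to_multiple` is hypothetical, not part of xFuser's API):

```python
import math

def pad_to_multiple(seq_len: int, world_size: int) -> int:
    # Round the sequence length up to the next multiple of world_size,
    # so the sequence can be split into equal chunks across ranks.
    return math.ceil(seq_len / world_size) * world_size

orig_len = 226 + 57600  # 57826 tokens (text + image), as in the report
ulysses_size = 8

padded = pad_to_multiple(orig_len, ulysses_size)
print(padded, padded - orig_len)  # padded length and number of padding tokens
```

If this is the cause, the fix on the user side is typically to slice the gathered tensor back to the original length (e.g. `out[:, :orig_len]`) after all_gather, rather than treating the padded tokens as real image tokens.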