Hi all,
As the documentation describes, TurboDiffusion primarily uses SageAttention and SLA for attention acceleration, and rCM for timestep distillation.
Is it possible to apply only rCM and obtain a model without changing its structure?
If so, is there any pretrained TurboWan2.1 model that applies only rCM? (A rough sketch of what I have in mind is below.)
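To make the question concrete: since rCM is a timestep-distillation method (weights only, as far as I understand), I would expect an rCM-only checkpoint to load into the stock Wan2.1 architecture with plain state-dict loading, without the SLA/SageAttention kernel replacements. The import path, class, and checkpoint names below are just my assumptions, not something I found in the repo:

```python
# Hypothetical sketch of what I mean by "without changing its structure":
# load rCM-only distilled weights into the unmodified Wan2.1 model.
import torch
from wan.modules.model import WanModel  # assumed: stock Wan2.1 DiT, no attention patches

# assumed names for illustration only
model = WanModel.from_pretrained("Wan2.1-T2V-1.3B")
rcm_state = torch.load("turbowan2.1_rcm_only.pth", map_location="cpu")

# if the architecture really is unchanged, both lists should be empty
missing, unexpected = model.load_state_dict(rcm_state, strict=False)
print(missing, unexpected)
```

Is something like this supported, or does rCM distillation in TurboDiffusion assume the SLA/SageAttention-modified model?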
Thanks