diff --git a/README.md b/README.md
index a695c67e..8f564b95 100644
--- a/README.md
+++ b/README.md
@@ -93,6 +93,7 @@ Furthermore, xDiT incorporates optimization techniques from [DiTFastAttn](https:

📢 Updates

+* 🎉**December 7, 2024**: xDiT is the official parallel inference engine for [HunyuanVideo](https://github.com/Tencent-Hunyuan/HunyuanVideo), reducing the latency of 5-second video generation from 31 minutes to 5 minutes!
* 🎉**November 28, 2024**: xDiT achieves 1.6 sec end-to-end latency for 28-step [Flux.1-Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) inference on 4xH100!
* 🎉**November 20, 2024**: xDiT supports [CogVideoX-1.5](https://huggingface.co/THUDM/CogVideoX1.5-5B) and achieved 6.12x speedup compare to the implementation in diffusers!
* 🎉**November 11, 2024**: xDiT has been applied to [mochi-1](https://github.com/xdit-project/mochi-xdit) and achieved 3.54x speedup compare to the official open source implementation!
@@ -158,31 +159,35 @@ Currently, if you need the parallel version of ComfyUI, please fill in this [app

Mochi1

-1. [mochi1-xdit: Reducing the Inference Latency by 3.54x Compare to the Official Open Souce Implementation!](https://github.com/xdit-project/mochi-xdit)
+1. [HunyuanVideo Performance Report](./docs/performance/hunyuanvideo.md)

CogVideo

-2. [CogVideo Performance Report](./docs/performance/cogvideo.md)
+2. [mochi1-xdit: Reducing the Inference Latency by 3.54x Compared to the Official Open Source Implementation!](https://github.com/xdit-project/mochi-xdit)
+
+CogVideo
+
+3. [CogVideo Performance Report](./docs/performance/cogvideo.md)

Flux.1

-3. [Flux Performance Report](./docs/performance/flux.md)
+4. [Flux Performance Report](./docs/performance/flux.md)

Latte

-4. [Latte Performance Report](./docs/performance/latte.md)
+5. [Latte Performance Report](./docs/performance/latte.md)

HunyuanDiT

-5. [HunyuanDiT Performance Report](./docs/performance/hunyuandit.md)
+6. [HunyuanDiT Performance Report](./docs/performance/hunyuandit.md)

SD3

-6. [Stable Diffusion 3 Performance Report](./docs/performance/sd3.md)
+7. [Stable Diffusion 3 Performance Report](./docs/performance/sd3.md)

Pixart

-7. [Pixart-Alpha Performance Report (legacy)](./docs/performance/pixart_alpha_legacy.md)
+8. [Pixart-Alpha Performance Report (legacy)](./docs/performance/pixart_alpha_legacy.md)

🚀 QuickStart

diff --git a/docs/performance/hunyuanvideo.md b/docs/performance/hunyuanvideo.md
new file mode 100644
index 00000000..413b73f6
--- /dev/null
+++ b/docs/performance/hunyuanvideo.md
@@ -0,0 +1,25 @@
+## HunyuanVideo Performance Report
+
+xDiT is [HunyuanVideo](https://github.com/Tencent/HunyuanVideo)'s official parallel inference engine. On H100 GPUs, xDiT reduces the generation time of a 1280x720 video from 31 minutes on a single GPU to about 5 minutes on 8 GPUs, and of a 960x960 video from 28 minutes to about 6 minutes on 6 GPUs. H20 results are reported below.
+
+### 1280x720 Resolution (129 frames, 50 steps) - Ulysses Latency (seconds)
+
+<div align="center">
+
+| GPU Type | 1 GPU | 2 GPUs | 4 GPUs | 8 GPUs |
+|----------|---------|---------|---------|--------|
+| H100 | 1904.08 | 925.04 | 514.08 | 337.58 |
+| H20 | 6639.17 | 3400.55 | 1762.86 | 940.97 |
+
+</div>
+
+### 960x960 Resolution (129 frames, 50 steps) - Ulysses Latency (seconds)
+
+<div align="center">
+
+| GPU Type | 1 GPU | 2 GPUs | 3 GPUs | 6 GPUs |
+|----------|---------|---------|---------|--------|
+| H100 | 1735.01 | 934.09 | 645.45 | 367.02 |
+| H20 | 6621.46 | 3400.55 | 2310.48 | 1214.67 |
+
+</div>
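+
+As a quick sanity check on the headline claim, the following is a minimal sketch (plain Python, not part of the xDiT codebase) that derives the speedup and parallel efficiency from the H100 latencies in the 1280x720 table above:
+
+```python
+# H100 Ulysses latencies (seconds) for 1280x720, 129 frames, 50 steps, copied from the table above.
+latency_h100 = {1: 1904.08, 2: 925.04, 4: 514.08, 8: 337.58}
+
+baseline = latency_h100[1]  # single-GPU latency
+for n_gpus, latency in sorted(latency_h100.items()):
+    speedup = baseline / latency   # e.g. 1904.08 / 337.58 ≈ 5.64x on 8 GPUs
+    efficiency = speedup / n_gpus  # fraction of ideal linear scaling
+    print(f"{n_gpus} GPU(s): {latency:8.2f} s, speedup {speedup:.2f}x, efficiency {efficiency:.0%}")
+```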
\ No newline at end of file