[doc] ray launch parallel inference #442

Merged 2 commits on Jan 23, 2025
8 changes: 7 additions & 1 deletion README.md
@@ -269,7 +269,13 @@ The warmup step impacts the efficiency of PipeFusion as it cannot be executed in
We observed that a warmup of 0 had no effect on the PixArt model.
Users can tune this value according to their specific tasks.

### 5. Launch an HTTP Service
### 5. Launch a parallel inference example with Ray

We also provide a Ray-based example for launching parallel inference. With Ray, we can disaggregate the VAE module from the DiT backbone and allocate a different degree of GPU parallelism to each.

[Launch a parallel inference example with Ray](./examples/ray/README.md)

### 6. Launch an HTTP Service

You can also launch an HTTP service to generate images with xDiT.

22 changes: 22 additions & 0 deletions examples/ray/README.md
@@ -0,0 +1,22 @@
## Running DiT Backbone and VAE Module Separately

The DiT model typically consists of a DiT backbone (encoder + transformers) and a VAE module.
The DiT backbone has high computational requirements but stable memory usage.
For high-resolution images, the VAE module has high memory consumption despite its low computational requirements, because convolution operators cause temporary memory spikes. This often leads to OOM (Out of Memory) errors caused by the VAE module.

Therefore, separating the encoder + DiT backbone from the VAE module can effectively alleviate these OOM issues.
We use Ray to separate the backbone and the VAE, and to allocate a different degree of GPU parallelism to each.
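To illustrate the idea, here is a minimal conceptual sketch (not the actual code in this example) of pinning the DiT backbone and the VAE to separate GPUs as Ray actors; the class and method names are hypothetical placeholders.

```python
# Conceptual sketch only: each actor reserves its own GPU, so the VAE's
# temporary memory spikes cannot crash the DiT backbone workers.
import ray

ray.init()

@ray.remote(num_gpus=1)
class DiTWorker:
    """Runs the encoder + DiT backbone on its own GPU (hypothetical)."""
    def denoise(self, prompt):
        # ... load the backbone and produce latents on this worker's GPU ...
        return f"latents for: {prompt}"

@ray.remote(num_gpus=1)
class VAEWorker:
    """Runs the VAE decoder on a separate GPU, isolating its memory spikes (hypothetical)."""
    def decode(self, latents):
        # ... decode the latents into an image on this worker's GPU ...
        return f"image from {latents}"

# Two GPUs for the backbone, one for the VAE (three GPUs in total).
dit_workers = [DiTWorker.remote() for _ in range(2)]
vae_worker = VAEWorker.remote()

latents = ray.get(dit_workers[0].denoise.remote("a photo of a cat"))
image = ray.get(vae_worker.decode.remote(latents))
```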

In `ray_run.sh`, we define different model configurations.
For example, to use 3 GPUs and allocate 1 GPU to the VAE and 2 GPUs to the DiT backbone, the settings in `ray_run.sh` would be:

```
N_GPUS=3 # world size
PARALLEL_ARGS="--pipefusion_parallel_degree 2 --ulysses_degree 1 --ring_degree 1"
VAE_PARALLEL_SIZE=1
DIT_PARALLEL_SIZE=2
```

Here, `VAE_PARALLEL_SIZE` specifies the GPU parallelism for the VAE, `DIT_PARALLEL_SIZE` specifies the parallelism for the DiT backbone, and `PARALLEL_ARGS` holds the parallel configuration of the DiT backbone, which in this case uses PipeFusion with a degree of 2 to run across 2 GPUs.
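The values are expected to be consistent with each other: the DiT parallel degrees multiply up to the number of GPUs given to the backbone, and the VAE and DiT parallel sizes together account for every GPU. A small sanity check, assuming this interpretation of the variables:

```python
# Sanity check for the example configuration above (assumed relationships,
# mirroring the shell variables in ray_run.sh).
n_gpus = 3              # N_GPUS: world size
pipefusion_degree = 2   # --pipefusion_parallel_degree
ulysses_degree = 1      # --ulysses_degree
ring_degree = 1         # --ring_degree
vae_parallel_size = 1   # VAE_PARALLEL_SIZE
dit_parallel_size = 2   # DIT_PARALLEL_SIZE

# The DiT parallel degrees multiply up to the backbone's GPU count: 2 * 1 * 1 = 2.
assert pipefusion_degree * ulysses_degree * ring_degree == dit_parallel_size
# VAE and DiT together use all GPUs in the Ray cluster: 1 + 2 = 3.
assert vae_parallel_size + dit_parallel_size == n_gpus
```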

