
Commit 80e8f3f

committed
draft
1 parent 00b179f commit 80e8f3f

File tree

3 files changed: +154 −4 lines changed


docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -180,6 +180,8 @@
     title: Caching
   - local: optimization/memory
     title: Reduce memory usage
+  - local: optimization/speed-memory-optims
+    title: Compile and offloading
   - local: optimization/xformers
     title: xFormers
   - local: optimization/tome
```

docs/source/en/optimization/memory.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -17,7 +17,7 @@ Modern diffusion models like [Flux](../api/pipelines/flux) and [Wan](../api/pipe
 This guide will show you how to reduce your memory usage.
 
 > [!TIP]
-> Keep in mind these techniques may need to be adjusted depending on the model! For example, a transformer-based diffusion model may not benefit equally from these inference speed optimizations as a UNet-based model.
+> Keep in mind these techniques may need to be adjusted depending on the model. For example, a transformer-based diffusion model may not benefit as much from these memory optimizations as a UNet-based model.
 
 ## Multiple GPUs
@@ -145,7 +145,7 @@ print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} G
 ```
 
 > [!WARNING]
-> [`AutoencoderKLWan`] and [`AsymmetricAutoencoderKL`] don't support slicing.
+> The [`AutoencoderKLWan`] and [`AsymmetricAutoencoderKL`] classes don't support slicing.
 
 ## VAE tiling
````
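
For context, "slicing" here is the VAE feature toggled with `enable_slicing`, which decodes a batch one image at a time to lower peak memory. A minimal sketch, assuming a model whose VAE is a plain [`AutoencoderKL`] (unlike the two classes named in the warning):

```py
import torch
from diffusers import AutoencoderKL

# AutoencoderKL supports slicing; AutoencoderKLWan and AsymmetricAutoencoderKL do not
vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", subfolder="vae", torch_dtype=torch.bfloat16
)
vae.enable_slicing()  # decode one image at a time instead of the whole batch
```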

```diff
@@ -219,7 +219,7 @@ from diffusers import DiffusionPipeline
 pipeline = DiffusionPipeline.from_pretrained(
     "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
 )
-pipline.enable_model_cpu_offload()
+pipeline.enable_model_cpu_offload()
 
 pipeline(
     prompt="An astronaut riding a horse on Mars",
@@ -493,7 +493,7 @@ with torch.inference_mode():
 ## Memory-efficient attention
 
 > [!TIP]
-> Memory-efficient attention optimizes for memory usage *and* [inference speed](./fp16#scaled-dot-product-attention!
+> Memory-efficient attention optimizes for memory usage *and* [inference speed](./fp16#scaled-dot-product-attention)!
 
 The Transformers attention mechanism is memory-intensive, especially for long sequences, so you can try using different and more memory-efficient attention types.
```
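
The memory-efficient attention types the paragraph above refers to are enabled on the pipeline itself. A minimal sketch, assuming xFormers is installed and a UNet-based pipeline:

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# swap the default attention processor for xFormers' memory-efficient kernels
pipeline.enable_xformers_memory_efficient_attention()

pipeline("An astronaut riding a horse on Mars").images[0]
```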

docs/source/en/optimization/speed-memory-optims.md (new file)

Lines changed: 148 additions & 0 deletions

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Compile and offloading

There are trade-offs associated with optimizing solely for [inference speed](./fp16) or [memory usage](./memory). For example, [caching](./cache) increases inference speed but requires more memory to store the intermediate outputs from the attention layers.

If your hardware is sufficiently powerful, you can choose to focus on one or the other. For a more balanced approach that doesn't sacrifice too much inference speed or memory, try compiling and offloading a model.

Refer to the table below for the latency and memory usage of each combination.

| combination | latency | memory usage |
|---|---|---|
| quantization, torch.compile | | |
| quantization, torch.compile, model CPU offloading | | |
| quantization, torch.compile, group offloading | | |

This guide will show you how to compile and offload a model to improve both inference speed and memory usage.

## Quantization and torch.compile

> [!TIP]
> The quantization backend, such as [bitsandbytes](../quantization/bitsandbytes#torchcompile), must be compatible with torch.compile. Refer to the quantization [overview](https://huggingface.co/docs/transformers/quantization/overview#overview) table to see which backends support torch.compile.

Start by [quantizing](../quantization/overview) a model to reduce the memory required to store it and [compiling](./fp16#torchcompile) it to accelerate inference.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# quantize the transformer and second text encoder to 4-bit
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# compile the transformer to accelerate inference
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)
pipeline("""
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
).images[0]
```
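
If you want to measure the latency and memory usage columns yourself, a minimal sketch that reuses the `pipeline` built above (the timing approach here is an illustrative assumption, not part of the guide):

```py
import time
import torch

prompt = "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California"

# warm-up run so torch.compile's one-time compilation cost isn't measured
pipeline(prompt).images[0]

torch.cuda.reset_peak_memory_stats()
torch.cuda.synchronize()
start = time.perf_counter()
pipeline(prompt).images[0]
torch.cuda.synchronize()

print(f"latency: {time.perf_counter() - start:.2f} s")
print(f"peak memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```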

## Quantization, torch.compile, and offloading

In addition to quantization and torch.compile, try offloading if you need to reduce memory usage further. Offloading moves various layers or model components to the CPU when they aren't needed for computation, and moves them back to the GPU when they are.

<hfoptions id="offloading">
<hfoption id="model CPU offloading">

[Model CPU offloading](./memory#model-offloading) moves an individual pipeline component, like the transformer model, to the GPU when it is needed for computation. Otherwise, it is offloaded to the CPU.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)

# model CPU offloading handles device placement, so don't call .to("cuda") here
pipeline.enable_model_cpu_offload()

# compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)
pipeline(
    "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
).images[0]
```

</hfoption>
<hfoption id="group offloading">

[Group offloading](./memory#group-offloading) moves the internal layers of an individual pipeline component, like the transformer model, to the GPU for computation and offloads them when they're not required. At the same time, it uses the [CUDA stream](./memory#cuda-stream) feature to prefetch the next layer for execution.

By overlapping computation and data transfer, it is faster than model CPU offloading while also saving memory.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.quantizers import PipelineQuantizationConfig

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# group offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

pipeline.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True)
pipeline.vae.enable_group_offload(onload_device=onload_device, offload_type="leaf_level", use_stream=True)
apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=1, use_stream=True)

# compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)
pipeline(
    "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
).images[0]
```

</hfoption>
</hfoptions>
