diff --git a/docs/source/en/using-diffusers/other-formats.md b/docs/source/en/using-diffusers/other-formats.md
index df3df92f0693..11afbf29d3f2 100644
--- a/docs/source/en/using-diffusers/other-formats.md
+++ b/docs/source/en/using-diffusers/other-formats.md
@@ -70,41 +70,32 @@ pipeline = StableDiffusionPipeline.from_single_file(
 
-#### LoRA files
+#### LoRAs
 
-[LoRA](https://hf.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) is a lightweight adapter that is fast and easy to train, making them especially popular for generating images in a certain way or style. These adapters are commonly stored in a safetensors file, and are widely popular on model sharing platforms like [civitai](https://civitai.com/).
+[LoRAs](../tutorials/using_peft_for_inference) are lightweight checkpoints fine-tuned to generate images or video in a specific style. If you are using a checkpoint trained with a Diffusers training script, the LoRA configuration is automatically saved as metadata in a safetensors file. When the safetensors file is loaded, the metadata is parsed to correctly configure the LoRA, which avoids missing or incorrect LoRA configurations.
 
-LoRAs are loaded into a base model with the [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] method.
+The easiest way to inspect the metadata, if available, is by clicking on the Safetensors logo next to the weights.
+
+<div class="flex justify-center">
+    <img src="..."/>
+</div>
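+
+If you'd rather check programmatically, a minimal sketch with `huggingface_hub` and `safetensors` can read the metadata straight from the file header; the repository id and the `pytorch_lora_weights.safetensors` filename below are only examples, so substitute the checkpoint you actually want to inspect.
+
+```py
+from huggingface_hub import hf_hub_download
+from safetensors import safe_open
+
+# download one LoRA file; the repo id and filename here are illustrative
+lora_file = hf_hub_download("linoyts/yarn_art_Flux_LoRA", "pytorch_lora_weights.safetensors")
+
+# safetensors stores metadata as a dict of strings in the file header
+with safe_open(lora_file, framework="pt") as f:
+    print(f.metadata())  # None if the file was saved without metadata
+```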
+
+For LoRAs that aren't trained with Diffusers, you can still save metadata with the `transformer_lora_adapter_metadata` and `text_encoder_lora_adapter_metadata` arguments in [`~loaders.FluxLoraLoaderMixin.save_lora_weights`] as long as the weights are saved as a safetensors file.
 
 ```py
-from diffusers import StableDiffusionXLPipeline
 import torch
+from diffusers import FluxPipeline
 
-# base model
-pipeline = StableDiffusionXLPipeline.from_pretrained(
-    "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16"
+pipeline = FluxPipeline.from_pretrained(
+    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
 ).to("cuda")
-
-# download LoRA weights
-!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors
-
-# load LoRA weights
-pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors")
-prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop"
-negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture"
-
-image = pipeline(
-    prompt=prompt,
-    negative_prompt=negative_prompt,
-    generator=torch.manual_seed(0),
-).images[0]
-image
+pipeline.load_lora_weights("linoyts/yarn_art_Flux_LoRA")
+pipeline.save_lora_weights(
+    transformer_lora_adapter_metadata={"r": 16, "lora_alpha": 16},
+    text_encoder_lora_adapter_metadata={"r": 8, "lora_alpha": 8}
+)
 ```
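+
+When the saved file is loaded again, [`~loaders.FluxLoraLoaderMixin.load_lora_weights`] parses the embedded metadata and configures the LoRA from it. A minimal sketch, assuming the LoRA was saved to a local `my_flux_lora` directory under the default `pytorch_lora_weights.safetensors` filename:
+
+```py
+# drop the currently loaded adapter, then reload from the local save directory;
+# the adapter configuration is read back from the file metadata
+pipeline.unload_lora_weights()
+pipeline.load_lora_weights("my_flux_lora", weight_name="pytorch_lora_weights.safetensors")
+```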
 
-<div class="flex justify-center">
-    <img src="..."/>
-</div>
-
 ### ckpt
 
 > [!WARNING]