Closed
`README.md` — 20 changes: 5 additions & 15 deletions

````diff
@@ -37,23 +37,13 @@ For information on installation, configuration, and usage, please visit our [doc
 
 Please see [this guide](https://docs.framepackstudio.com/docs/get_started/) on our documentation site to get FP-Studio installed.
 
-## LoRAs
+## Contributing
 
-Add LoRAs to the /loras/ folder at the root of the installation. Select the LoRAs you wish to load and set the weights for each generation. Most Hunyuan LoRAs were originally trained for T2V, it's often helpful to run a T2V generation to ensure they're working before using input images.
+We would love your help building FramePack Studio! To make collaboration effective, please adhere to the following:
+- Keep Pull Requests Focused: Each Pull Request should address a single issue or add one specific feature. Please do not mix bug fixes, new features, and code refactoring in the same PR.
+- Target the develop Branch: All Pull Requests must be opened against the develop branch. PRs opened against the main branch will be closed.
+- Discuss Big Changes First: If you plan to work on a large feature or a significant refactor, please announce it first in the #contributors channel on our [Discord server](https://discord.com/invite/MtuM7gFJ3V). This helps us coordinate efforts and prevent duplicate work.
 
-NOTE: Slow lora loading is a known issue
-
-## Working with Timestamped Prompts
-
-You can create videos with changing prompts over time using the following syntax:
-
-```
-[0s: A serene forest with sunlight filtering through the trees ]
-[5s: A deer appears in the clearing ]
-[10s: The deer drinks from a small stream ]
-```
-
-Each timestamp defines when that prompt should start influencing the generation. The system will (hopefully) smoothly transition between prompts for a cohesive video.
-
 ## Credits
 
````
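
The timestamped-prompt syntax documented in the removed README section can be illustrated with a small parser. This is a hypothetical sketch of the bracketed `[Ns: prompt]` format, not FramePack Studio's actual implementation:

```python
import re

# Hypothetical parser for the "[Ns: prompt]" syntax shown in the removed
# README section; the project's real prompt handling may differ.
PROMPT_RE = re.compile(r"\[(\d+(?:\.\d+)?)s\s*:\s*(.*?)\s*\]")

def parse_timestamped_prompts(text):
    """Return (start_seconds, prompt) pairs sorted by start time."""
    return sorted((float(t), p) for t, p in PROMPT_RE.findall(text))

def prompt_at(sections, t):
    """Return the prompt active at time t: the last section started by t."""
    active = None
    for start, prompt in sections:
        if start <= t:
            active = prompt
    return active

example = """
[0s: A serene forest with sunlight filtering through the trees ]
[5s: A deer appears in the clearing ]
[10s: The deer drinks from a small stream ]
"""
sections = parse_timestamped_prompts(example)
```

Here `prompt_at(sections, 7.0)` would select the `5s` section, since each timestamp marks when its prompt starts influencing the generation.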
`modules/pipelines/worker.py` — 16 changes: 9 additions & 7 deletions

```diff
@@ -1302,13 +1302,15 @@ def fmt_eta(sec):
             ),
         )
     )
-    move_model_to_device_with_memory_preservation(
-        studio_module.current_generator.transformer,
-        target_device=gpu,
-        preserved_memory_gb=settings.get("gpu_memory_preservation"),
-    )
-    if selected_loras:
-        studio_module.current_generator.move_lora_adapters_to_device(gpu)
+
+    move_model_to_device_with_memory_preservation(
+        studio_module.current_generator.transformer,
+        target_device=gpu,
+        preserved_memory_gb=settings.get("gpu_memory_preservation"),
+    )
+
+    if selected_loras:
+        studio_module.current_generator.move_lora_adapters_to_device(gpu)
 
     from diffusers_helper.pipelines.k_diffusion_hunyuan import sample_hunyuan
 
```
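
The hunk above moves the transformer to the GPU (keeping a configured amount of VRAM free) before relocating any selected LoRA adapters, so the adapters land on the same device as their base weights. A toy sketch of that call ordering — the helper and method names come from the diff, but their bodies and the `Generator` class here are hypothetical stand-ins, not the real implementations:

```python
# Record the order of device moves so the ordering is observable.
calls = []

def move_model_to_device_with_memory_preservation(model, target_device, preserved_memory_gb):
    # The real helper offloads layers as needed to keep `preserved_memory_gb`
    # of VRAM free; this stand-in only records that it was called first.
    calls.append(("model", target_device, preserved_memory_gb))

class Generator:
    """Hypothetical stand-in for studio_module.current_generator."""
    transformer = object()

    def move_lora_adapters_to_device(self, device):
        calls.append(("loras", device))

generator = Generator()
selected_loras = ["my_style_lora"]  # hypothetical user selection
gpu = "cuda:0"

# Same ordering as the diff: base model first, then LoRA adapters.
move_model_to_device_with_memory_preservation(
    generator.transformer, target_device=gpu, preserved_memory_gb=6
)
if selected_loras:
    generator.move_lora_adapters_to_device(gpu)
```

The point of the ordering is simply that adapter tensors are only moved once the base model is already resident on the target device.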
`requirements.txt` — 65 changes: 33 additions & 32 deletions

```diff
@@ -1,36 +1,37 @@
-accelerate==1.6.0
-av==12.1.0
-decord
-diffusers==0.33.1
-einops
-ffmpeg-python==0.2.0
-gradio==5.25.2
-imageio-ffmpeg==0.4.8
-imageio==2.31.1
-jinja2>=3.1.2
-numpy==1.26.2
-opencv-contrib-python
-peft
-pillow==11.1.0
-requests==2.31.0
-safetensors
-scipy==1.12.0
-sentencepiece==0.2.0
-torchsde==0.2.6
-tqdm
-timm
-transformers==4.46.2
-
+accelerate==1.6.0
+av==12.1.0
+decord
+diffusers==0.33.1
+einops
+ffmpeg-python==0.2.0
+gradio==5.25.2
+huggingface_hub<0.35.1
+imageio-ffmpeg==0.4.8
+imageio==2.31.1
+jinja2>=3.1.2
+numpy==1.26.2
+opencv-contrib-python
+peft<0.18.0
+pillow==11.1.0
+requests==2.31.0
+safetensors
+scipy==1.12.0
+sentencepiece==0.2.0
+torchsde==0.2.6
+tqdm
+timm
+transformers==4.46.2
+
 # quantization
 bitsandbytes>=0.41.1
 
-# for toolbox
-basicsr
-# basicsr-fixed
-devicetorch
-facexlib>=0.2.5
-gfpgan>=1.3.5
-psutil
-realesrgan
-colorlog
+# for toolbox
+basicsr
+# basicsr-fixed
+devicetorch
+facexlib>=0.2.5
+gfpgan>=1.3.5
+psutil
+realesrgan
+colorlog
```
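
The substantive change in `requirements.txt` is two new upper-bound pins: `huggingface_hub<0.35.1` is added and `peft` is constrained to `peft<0.18.0`. As a rough illustration of how a `<` pin is evaluated — real resolvers such as pip implement the full PEP 440 rules; this sketch only handles plain `X.Y.Z` versions:

```python
# Minimal sketch of evaluating an upper-bound pin like "peft<0.18.0".
# Only plain dotted-integer versions are handled; pre-releases, epochs,
# and local versions need a real PEP 440 implementation.
def version_tuple(v):
    """Parse "0.17.1" into (0, 17, 1) for lexicographic comparison."""
    return tuple(int(part) for part in v.split("."))

def satisfies_upper_bound(installed, bound):
    """True if `installed` is strictly below the "<" bound."""
    return version_tuple(installed) < version_tuple(bound)

ok = satisfies_upper_bound("0.17.1", "0.18.0")      # peft<0.18.0
hub_ok = satisfies_upper_bound("0.34.0", "0.35.1")  # huggingface_hub<0.35.1
```

Upper-bound pins like these are typically used to shield a project from breaking changes in a dependency's next minor release; the boundary version itself is excluded by the strict comparison.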