1 change: 1 addition & 0 deletions .gitignore
@@ -7,6 +7,7 @@ queue_images/
 modules/toolbox/model_esrgan/
 modules/toolbox/model_rife/
 .framepack/
+.claude/
 modules/toolbox/data/
 modules/toolbox/bin
 queue.json
139 changes: 139 additions & 0 deletions INSTALLATION_NOTES.md
@@ -0,0 +1,139 @@
# FramePack Studio Installation Notes

## Installation Summary

- **Installation Method:** Fresh venv created and packages installed via an automated process
- **Date:** December 29, 2025
- **Python Version:** 3.13

### Installed Packages

**Core AI/ML Packages:**
- PyTorch: 2.8.0+cu128
- TorchVision: 0.23.0+cu128
- TorchAudio: 2.8.0+cu128
- Diffusers: 0.36.0
- Transformers: 4.57.3
- Accelerate: 1.12.0
- PEFT: 0.18.0

**UI & Web:**
- Gradio: 6.2.0

**Media Processing:**
- OpenCV Contrib Python: 4.12.0.88
- av: 16.0.1
- imageio: 2.37.2
- imageio-ffmpeg: 0.6.0
- decord: 0.6.0

**Upscaling/Enhancement:**
- facexlib: 0.3.0
- gfpgan: 1.3.8
- realesrgan: 0.3.0

**Utilities:**
- sentencepiece: 0.2.1
- torchsde: 0.2.6
- scipy: 1.16.3
- numpy: 2.2.6

### CUDA Configuration
- CUDA Version: 12.8
- Optimized for RTX 40xx and 50xx GPUs

### Known Issues

#### BasicSR Not Installed
BasicSR has compatibility issues with Python 3.13 and could not be installed.

**Impact:**
- Some advanced toolbox features (ESRGAN, GFPGAN, RealESRGAN) may have limited functionality
- Main FramePack Studio features should work normally

**Workaround:**
1. Use Python 3.10, 3.11, or 3.12 if BasicSR features are critical
2. Wait for BasicSR to release Python 3.13 compatible wheels

**Alternative:** You can try installing BasicSR manually later if needed:
```bash
venv\Scripts\activate
pip install basicsr --no-build-isolation
```

#### NumPy Version
- The requirements file pins numpy==1.26.2, but numpy 2.2.6 was installed instead
- This is a newer version with better Python 3.13 support
- Most packages are compatible with NumPy 2.x
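Whether such a mismatch matters for a given package can be checked by comparing versions as tuples rather than strings. A minimal sketch (`parse_version` is an illustrative helper, not a FramePack function; the two version strings are the ones quoted above):

```python
# Compare the installed NumPy version against the requirements.txt pin.
# Tuple comparison is numeric, so "1.26.2" correctly sorts above "1.9.0",
# which naive string comparison would get wrong.
def parse_version(version):
    # "2.2.6" -> (2, 2, 6)
    return tuple(int(part) for part in version.split("."))

pinned = parse_version("1.26.2")    # pin from requirements.txt
installed = parse_version("2.2.6")  # version actually in the venv

print(installed > pinned)  # True: the installed version is newer than the pin
```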

### Optional Acceleration Packages

The install script offers optional acceleration packages:

**Sage Attention:**
- Requires: triton-windows<3.4
- Significant speed improvements for RTX 40xx/50xx
- Pre-built wheels available for Python 3.10-3.12

**Flash Attention:**
- Alternative acceleration method
- Pre-built wheels available

**Note:** These may not be available for Python 3.13 yet.
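Whether one of these backends actually made it into the venv can be checked by probing for the modules much as the startup code selects a backend. A sketch, assuming the conventional import names `sageattention`, `flash_attn`, and `xformers`:

```python
import importlib.util

def detect_attention_backend():
    # Probe optional attention libraries in priority order; fall back to
    # native PyTorch scaled dot product attention when none is installed.
    candidates = [
        ("SAGE Attention", "sageattention"),
        ("Flash Attention", "flash_attn"),
        ("xFormers", "xformers"),
    ]
    for label, module_name in candidates:
        if importlib.util.find_spec(module_name) is not None:
            return label
    return "PyTorch SDPA (fallback)"

print(detect_attention_backend())
```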

### Scripts Created

1. **install_40xx_50xx.bat** - Full installation script for RTX 40xx/50xx GPUs
2. **start.bat** - Launches FramePack Studio with venv activation
3. **activate_venv.bat** - Activates venv for manual commands

### Running the Application

```bash
# Option 1: Use the start script
start.bat

# Option 2: Use the existing run script
run.bat

# Option 3: Manual
venv\Scripts\activate
python studio.py
```

### Verifying Installation

```bash
venv\Scripts\activate
python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA: {torch.cuda.is_available()}')"
```

Expected output:
```
PyTorch: 2.8.0+cu128
CUDA: True
```

## Troubleshooting

### If PyTorch CUDA is not detected:
1. Ensure NVIDIA drivers are up to date
2. Verify nvidia-smi shows your GPU
3. Reinstall PyTorch with a CUDA 12.8 (`cu128`) build
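A quick way to tell which of these steps applies is to check what the venv's Python can actually see. A diagnostic sketch (`cuda_report` is an illustrative helper, not part of FramePack Studio):

```python
def cuda_report():
    # Report PyTorch/CUDA visibility without crashing if torch is absent.
    try:
        import torch
    except ImportError:
        return {"installed": False}
    return {
        "installed": True,
        "torch_version": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        "cuda_build": torch.version.cuda,  # None for CPU-only wheels
    }

print(cuda_report())
```

If `cuda_build` is `None`, the wheel itself is CPU-only and step 3 (reinstalling a CUDA 12.8 build) is the likely fix; if it shows a CUDA version but `cuda_available` is `False`, the driver side (steps 1-2) is the more likely culprit.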

### If packages are missing:
```bash
venv\Scripts\activate
pip install -r requirements.txt
```

### Clean Reinstall:
1. Delete the `venv` folder
2. Run `install_40xx_50xx.bat` again

## Python Version Recommendation

**Recommended:** Python 3.10, 3.11, or 3.12
**Current:** Python 3.13 (newer, some packages may have limited compatibility)

If you experience issues, consider using Python 3.12 for maximum compatibility.
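The recommendation above can be turned into a quick self-check. A sketch; the `(3, 10)`-`(3, 12)` bounds are the recommended range stated above:

```python
import sys

LOW, HIGH = (3, 10), (3, 12)  # recommended (major, minor) range, inclusive

def python_version_ok(version_info=None):
    # Compare only (major, minor); a full (3, 12, 0) tuple would sort
    # above the (3, 12) upper bound and falsely fail the check.
    major_minor = tuple(version_info or sys.version_info)[:2]
    return LOW <= major_minor <= HIGH

print(python_version_ok((3, 12, 0)))  # True: inside the recommended range
print(python_version_ok((3, 13, 0)))  # False: newer than recommended
```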
24 changes: 24 additions & 0 deletions activate_venv.bat
@@ -0,0 +1,24 @@
@echo off
echo ============================================
echo Activating FramePack-Studio Virtual Environment
echo ============================================
echo.

REM Check if venv exists
if not exist "%~dp0venv\Scripts\activate.bat" (
echo Error: Virtual environment not found!
echo Please run install_40xx_50xx.bat first to set up the environment.
echo.
pause
exit /b 1
)

echo Virtual environment activated.
echo You can now run Python commands within this environment.
echo.
echo To deactivate, type: deactivate
echo To run FramePack-Studio, type: python studio.py
echo.

REM Activate and keep command prompt open
cmd /k "%~dp0venv\Scripts\activate.bat"
8 changes: 4 additions & 4 deletions diffusers_helper/models/hunyuan_video_packed.py
@@ -63,7 +63,7 @@
 has_xformers = xformers_attn_func is not None

 if has_sage:
-    print(" Using SAGE Attention (highest performance).")
+    print("[OK] Using SAGE Attention (highest performance).")
     ignored = []
     if has_flash:
         ignored.append("Flash Attention")
@@ -72,16 +72,16 @@
     if ignored:
         print(f" - Ignoring other installed attention libraries: {', '.join(ignored)}")
 elif has_flash:
-    print(" Using Flash Attention (high performance).")
+    print("[OK] Using Flash Attention (high performance).")
     if has_xformers:
         print(" - Consider installing SAGE Attention for highest performance.")
         print(" - Ignoring other installed attention library: xFormers")
 elif has_xformers:
-    print(" Using xFormers.")
+    print("[OK] Using xFormers.")
     print(" - Consider installing SAGE Attention for highest performance.")
     print(" - or Consider installing Flash Attention for high performance.")
 else:
-    print("⚠️ No attention library found. Using native PyTorch Scaled Dot Product Attention.")
+    print("[WARNING] No attention library found. Using native PyTorch Scaled Dot Product Attention.")
     print(" - For better performance, consider installing one of:")
     print("   SAGE Attention (highest performance), Flash Attention (high performance), or xFormers.")
 print("-------------------------------\n")