Title: Chandra 0.1.8 completely unusable on Windows - CPU offload makes inference incredibly slow
Hardware:
- RTX 4080 FE (16 GB VRAM)
- Intel i9 CPU
- 64 GB RAM
- CUDA 13.0
- Windows 11
Software:
- Python 3.11.0
- PyTorch 2.9.0.dev (nightly CU130)
- Chandra 0.1.8
Problem:
Chandra is completely unusable on Windows. Both the GUI app and the CLI hang on loading forever and never produce any output.
What happens:
- Model loads successfully (16.3 GB Qwen2-VL)
- Checkpoint shards load to 100%
- Then the message: "Some parameters are on the meta device because they were offloaded to the CPU"
- Perpetual loading
- GPU shows 79% utilization, but inference never completes (see the VRAM check below)
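That warning is what accelerate prints when it spills weights to the CPU because they do not fit in VRAM, and a ~16.3 GB checkpoint against the 4080's 16 GB is exactly that situation. A minimal sketch (plain PyTorch, nothing Chandra-specific; the 16.3 GB figure is taken from the loading log above) to check this before loading:

```python
import torch

CHECKPOINT_GB = 16.3  # size reported while the shards load (from the log above)

assert torch.cuda.is_available(), "CUDA is not visible to PyTorch"

# torch.cuda.mem_get_info returns (free, total) device memory in bytes.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
free_gb = free_bytes / 1024**3

print(f"GPU: {torch.cuda.get_device_name(0)}")
print(f"Free VRAM: {free_gb:.1f} GiB of {total_bytes / 1024**3:.1f} GiB")

if CHECKPOINT_GB > free_gb:
    # This is the case where accelerate places layers on the CPU
    # (the "meta device" warning) and inference slows to a crawl.
    print("Model will not fit entirely on the GPU -> expect CPU offload.")
```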
Commands tried:
```powershell
# GUI version
$env:CHANDRA_DEVICE="cuda"
chandra_app

# CLI version
chandra input.png ./output --method hf --device cuda
```

Both have the same problem: CPU offload makes it unusable.
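For context, this is roughly how the offload behavior arises when a checkpoint is loaded through Hugging Face transformers/accelerate. This is a sketch, not Chandra's actual loading code; the Qwen2-VL model class, the placeholder `MODEL_PATH`, the dtype, and the memory budget are all assumptions:

```python
import torch
from transformers import Qwen2VLForConditionalGeneration

MODEL_PATH = "path/or/hub-id-of-the-chandra-checkpoint"  # placeholder

# device_map="auto" lets accelerate split the model across GPU and CPU.
# If VRAM is too small, layers land on the CPU (hence the "meta device"
# warning) and generation becomes orders of magnitude slower.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={0: "15GiB", "cpu": "48GiB"},  # explicit budget; values are assumptions
)

# By contrast, pinning everything to the GPU fails fast with an OOM error
# when the model truly does not fit, which is much easier to diagnose than
# a silent hang:
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     MODEL_PATH, torch_dtype=torch.bfloat16, device_map={"": 0}
# )
```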
Installation Issues:
- The documentation doesn't mention the PyTorch >=2.8 requirement
- PyPI versions 0.1.7 and 0.1.8 require torch >=2.8.0
- I couldn't find stable PyTorch 2.8+ CUDA builds for Windows
- I had to resort to nightly builds, which is undocumented
- It took me 9 hours to figure out how to install it correctly (see the verification snippet below)
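For anyone hitting the same installation wall, this one-off check (plain PyTorch API, nothing Chandra-specific) confirms whether the installed wheel is actually a CUDA build before anything else gets debugged:

```python
import torch

# Sanity check that the (nightly) PyTorch wheel is CUDA-enabled
# and can see the RTX 4080 before launching chandra_app or the CLI.
print("torch version  :", torch.__version__)   # e.g. 2.9.0.dev...+cu130
print("built for CUDA :", torch.version.cuda)  # None => CPU-only wheel
print("cuda available :", torch.cuda.is_available())

if torch.cuda.is_available():
    print("device         :", torch.cuda.get_device_name(0))
    print("capability     :", torch.cuda.get_device_capability(0))
```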
Expected:
With an RTX 4080, inference should take 1-3 seconds
Actual:
Infinite loading
This makes Chandra completely unusable on Windows.
It's billed as the #1 tool in this space right now, but for me it has been a nightmare, and I've wasted 9 hours of my life on it.