A curated list of models, text encoders, and tools for the LTX-2 video generation suite.
- ComfyUI official blogpost
LTX2.3-Multifunctional is a desktop-optimized version of LTX that lowers GPU requirements and simplifies usage. It integrates all features including image-to-video, text-to-video, start/end frames, lip-sync, video enhancement, and image generation into a single application.
Key Features:
- Lower GPU Requirements: Only needs 24GB VRAM (vs 32GB for standard desktop version)
- All-in-One Interface: No complex ComfyUI workflows or error-prone nodes
- Features: T2V, I2V, start/end frames, lip-sync, video enhancement, image generation, LoRA support
- Multi-Frame Insertion: Two modes for generating long videos
- Easy Setup: No third-party software required, just install LTX desktop
Downloads & Resources:
LTX-2 models are available in various formats including full weights, transformers-only, and GGUF quantizations for efficient inference.
- Lightricks/LTX-2 - Official repository.
- Lightricks/LTX-2.3 - Official repository (latest version).
- Drbaph - Quantization
Quantized to fp8_e5m2 to support older Triton and PyTorch versions on 30-series GPUs. For WangGP in Pinokio.
| Ver | Name | Precision | Size | Download |
|---|---|---|---|---|
| 2 | ltx-2-19b dev | fp8_e5m2 | 27.1 GB | |
Note: The mxfp8mixed quantization requires a custom fork of ComfyUI-Kitchen with mxfp8 support. Standard ComfyUI installations may not support this quantization format.
| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b-dev | | 29.2 GB | |
| ltx-2.3-22b-distilled | | 29.1 GB | |
| ltx-2.3-22b-distilled | | 29.2 GB | |
| ltx-2.3-22b-distilled | | 29.7 GB | |
| Ver | Rank | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | 384 | | 7.61 GB | |
| 2.3 | 208 | | 4.97 GB | |
| 2.3 | 159 | | 3.83 GB | |
| 2.3 | 111 | | 2.74 GB | |
| 2.3 | 105 | | 2.59 GB | |
| 2 | 384 | | 7.67 GB | |
| 2 | 242 | | 4.88 GB | |
| 2 | 175 | | 3.58 GB | |
| 2 | 175 | | 1.79 GB | |
Experimental distilled LoRAs optimized for finetunes and I2V workflows. These LoRAs avoid the issues of the massive rank-384 official LoRA, which can be counterproductive with conditioned inputs and finetunes.
| LoRA | Rank | Size | Description |
|---|---|---|---|
| ltx-2.3-22b-distilled-lora-1.1_fro90_ceil36 | 36 | 739 MB | Compact LoRA with dynamic ceiling at 36 |
| ltx-2.3-22b-distilled-lora-1.1_fro90_ceil72_condsafe | 72 | 662 MB | Cond-safe version with cross-attention bridges, adaln/scale-shift tables, gate logits, and prompt scale-shift zeroed. Much better suited for I2V and input-conditioned workflows. Can be used at 1.0 strength safely on a first-pass I2V. |
| ltx-2.3-22b-distilled-lora-fro90_ceil72 | 72 | 1.4 GB | Standard version with higher dynamic ceiling |
Notes:
- Lower rank LoRAs (72 and below) can be used at 1.0 strength safely for I2V first pass, with upscale pass at 0.4-0.5 strength
- The "_ceil" suffix indicates the dynamic ceiling used during reranking
- The "_condsafe" suffix indicates that cross-attention and other conditioning layers have been zeroed for better I2V compatibility
- The official rank 384 LoRA can actively dampen conditioning signals in I2V workflows; cond_safe versions work much better
Required for current two-stage pipeline implementations in this repository. Download to COMFYUI_ROOT_FOLDER/models/latent_upscale_models folder.
| Ver | Name | Size | Download |
|---|---|---|---|
| 2.3 | spatial-upscaler x2 1.0 | 996 MB | |
| 2.3 | spatial-upscaler x1.5 1.0 | 1.09 GB | |
| 2 | spatial-upscaler x2 1.0 | 1.05 GB | |
Required for current two-stage pipeline implementations in this repository. Download to COMFYUI_ROOT_FOLDER/models/latent_upscale_models folder.
| Ver | Name | Size | Download |
|---|---|---|---|
| 2.3 | temporal-upscaler x2 1.0 | 262 MB | |
| 2 | temporal-upscaler x2 1.0 | 262 MB | |
Custom merged models combining multiple control signals or specialized configurations.
| Ver | Name | Description | Download |
|---|---|---|---|
| 2.3 | ltx-2.3-22b-distilled-1.1-fused-union-control | Merged model combining Canny, Depth, and Pose control signals for unified control | |
══════════════════════════════════
Community finetuned models based on LTX-2.3 with specialized improvements and optimizations.
══════════════════════════════════
These models are optimized for lower memory usage. Note that in ComfyUI, these are typically loaded as transformer-only models.
QuantStack
| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b | | 12.4 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 14.7 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 14 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 17.8 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 16.7 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 19.4 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 18.5 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 21 GB | dev ┊ distilled ┊ distilled-1.1 |
| ltx-2.3-22b | | 25.5 GB | dev ┊ distilled ┊ distilled-1.1 |
| Model | Quant | Size | Download |
|---|---|---|---|
| LTX-2-dev | | 8.03 GB | |
| LTX-2-dev | | 10.3 GB | |
| LTX-2-dev | | 9.57 GB | |
| LTX-2-dev | | 13.4 GB | |
| LTX-2-dev | | 12.3 GB | |
| LTX-2-dev | | 15 GB | |
| LTX-2-dev | | 14.2 GB | |
| LTX-2-dev | | 16.6 GB | |
| LTX-2-dev | | 21.1 GB | |
Unsloth
| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b | | 42 GB | dev ┊ distilled |
| ltx-2.3-22b | | 42 GB | dev ┊ distilled |
| ltx-2.3-22b | | 8.28 GB | dev ┊ distilled |
| ltx-2.3-22b | | 10.8 GB | dev ┊ distilled |
| ltx-2.3-22b | | 9.95 GB | dev ┊ distilled |
| ltx-2.3-22b | | 12.7 GB | dev ┊ distilled |
| ltx-2.3-22b | | 13.8 GB | dev ┊ distilled |
| ltx-2.3-22b | | 14.3 GB | dev ┊ distilled |
| ltx-2.3-22b | | 13.1 GB | dev ┊ distilled |
| ltx-2.3-22b | | 15.3 GB | dev ┊ distilled |
| ltx-2.3-22b | | 16.3 GB | dev ┊ distilled |
| ltx-2.3-22b | | 16.1 GB | dev ┊ distilled |
| ltx-2.3-22b | | 15.2 GB | dev ┊ distilled |
| ltx-2.3-22b | | 17.8 GB | dev ┊ distilled |
| ltx-2.3-22b | | 22.8 GB | dev ┊ distilled |
| ltx-2.3-22b | | 9.5 GB | dev ┊ distilled |
| ltx-2.3-22b | | 13.5 GB | dev ┊ distilled |
| ltx-2.3-22b | | 11.4 GB | dev ┊ distilled |
| ltx-2.3-22b | | 16.5 GB | dev ┊ distilled |
| ltx-2.3-22b | | 14.2 GB | dev ┊ distilled |
| ltx-2.3-22b | | 18.3 GB | dev ┊ distilled |
| ltx-2.3-22b | | 16.3 GB | dev ┊ distilled |
| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b | | 42 GB | distilled-1.1 |
| ltx-2.3-22b | | 42 GB | distilled-1.1 |
| ltx-2.3-22b | | 7.94 GB | distilled-1.1 |
| ltx-2.3-22b | | 10.6 GB | distilled-1.1 |
| ltx-2.3-22b | | 9.74 GB | distilled-1.1 |
| ltx-2.3-22b | | 14.2 GB | distilled-1.1 |
| ltx-2.3-22b | | 13 GB | distilled-1.1 |
| ltx-2.3-22b | | 15.9 GB | distilled-1.1 |
| ltx-2.3-22b | | 15 GB | distilled-1.1 |
| ltx-2.3-22b | | 17.8 GB | distilled-1.1 |
| ltx-2.3-22b | | 22.8 GB | distilled-1.1 |
| ltx-2.3-22b | | 10.9 GB | distilled-1.1 |
| ltx-2.3-22b | | 13.4 GB | distilled-1.1 |
| ltx-2.3-22b | | 16.4 GB | distilled-1.1 |
| ltx-2.3-22b | | 14.1 GB | distilled-1.1 |
| ltx-2.3-22b | | 18.2 GB | distilled-1.1 |
Vantage
LTX-2.3 (22B) — PolarQuant Q5 is a bit-packed quantization method using Hadamard-Rotated Lloyd-Max Quantization. It achieves optimal Gaussian weight quantization via Hadamard rotation, delivering near-lossless quality with significant size reduction.
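The core idea of Hadamard-Rotated Lloyd-Max quantization can be illustrated with a toy sketch (this is not Vantage's actual implementation; `polar_quantize` and the vector size are illustrative assumptions): rotating the weights with an orthonormal Hadamard matrix makes their distribution approximately Gaussian, for which a 1-D Lloyd-Max quantizer is optimal.

```python
import numpy as np
from scipy.linalg import hadamard

def lloyd_max_levels(x, bits=5, iters=25):
    """Fit 2**bits scalar quantization levels to x (1-D Lloyd-Max / k-means)."""
    levels = np.quantile(x, np.linspace(0.0, 1.0, 2 ** bits))
    for _ in range(iters):
        idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
        for k in range(levels.size):
            members = x[idx == k]
            if members.size:
                levels[k] = members.mean()
    return np.sort(levels)

def polar_quantize(w, bits=5):
    """Hadamard-rotate, Lloyd-Max quantize, dequantize, rotate back."""
    n = w.size                        # must be a power of two for hadamard()
    H = hadamard(n) / np.sqrt(n)      # orthonormal rotation, H @ H.T == I
    rotated = H @ w                   # rotated weights look ~Gaussian
    levels = lloyd_max_levels(rotated, bits)
    idx = np.abs(rotated[:, None] - levels[None, :]).argmin(axis=1)
    return H.T @ levels[idx]          # reconstruct in the original basis

rng = np.random.default_rng(0)
w = rng.standard_normal(256)
w_hat = polar_quantize(w, bits=5)
cos = float(w @ w_hat / (np.linalg.norm(w) * np.linalg.norm(w_hat)))
print(f"cosine similarity after 5-bit round trip: {cos:.4f}")
```

Even this toy 5-bit round trip recovers a cosine similarity very close to 1, which is why the full method can report near-lossless metrics while storing only 5 bits per weight.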
Specification
| Specification | Value |
|---|---|
| Parameters | 22B |
| Transformer Blocks | 48 |
| Hidden Dimension | 4096 |
| Layers Quantized | 1,347 (of 5,947 total tensors) |
Compression Statistics:
| Component | Original Size | PQ5 Packed | Reduction |
|---|---|---|---|
| Transformer (1,347 layers) | 37 GB | 4.6 GB | -88% |
| VAE + Skip (4,600 layers) | 9.1 GB | 9.1 GB | BF16 kept |
| Upscalers | 1.3 GB | 1.3 GB | BF16 kept |
| Total | 46.2 GB | 15 GB | -68% |
Quality Metrics:
- Cosine Similarity: 0.9986 (near-lossless)
- Download Size: 15 GB
- Beats torchao INT4 on perplexity (PPL)
Hardware Requirements:
| GPU | VRAM | Status |
|---|---|---|
| A100 (80 GB) | 80 GB | Full speed |
| A100 (40 GB) | 40 GB | Recommended |
| RTX 4090 (24 GB) | 24 GB | With offloading |
Key Features:
- Mixed precision approach: transformer heavily quantized (-88%) while VAE remains BF16
- 5-bit bit-packed representation (Q5)
- ~68% smaller overall than the original with near-lossless quality
- One-command setup with easy generation wrapper
| Model | Size | Download |
|---|---|---|
| LTX-2.3-22B-PolarQuant-Q5 | 15 GB | |
Installation: pip install safetensors huggingface_hub scipy
ArXiv Reference: 2603.29078
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆
LTX-2 requires Gemma-3-12b variants. LTX-2.3 uses text projection layers.
Official and optimized versions for ComfyUI.
- gemma_3_12B_it_fpmixed: Experimental quant. Should be better than the fp8 scaled
- gemma_3_12B_it_fp4_mixed: 90% fp4 layers
Note: The mxfp8mixed quantization requires a custom fork of ComfyUI-Kitchen with mxfp8 support. Standard ComfyUI installations may not support this quantization format.
Standard Gemma models often incorporate safety alignment that "sanitizes" or weakens specific concepts within prompt embeddings. Even when the model doesn't explicitly refuse a request, this internal filtering can dilute creative intent. For LTX-2 video generation, using a standard encoder often results in:
- Reduced Prompt Adherence: Key stylistic or descriptive terms may be ignored or weakened.
- Visual Softening: Visual intensity and fine details are often "muted" to fit generic safety profiles.
- Concept Dilution: Complex or niche creative requests are subtly altered, leading to less faithful representations of your vision.
Abliteration bypasses these restrictive alignment layers, allowing the encoder to translate your prompts into embeddings with maximum fidelity. This ensures LTX-2 receives the most accurate and un-filtered instructions possible.
Gemma-3-12b-Abliterated
Fixed versions of the abliterated Gemma-3-12b-it model by FusionCow, modified specifically for compatibility with LTX-2.
| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma ablit fixed | | 23.5 GB | |
| Gemma ablit fixed | | 13.8 GB | |
Gemma 3 12B IT Heretic
Models by DreamFast
| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma_3_12B_it Heretic | | 23.5 GB | |
| Gemma_3_12B_it Heretic | | 12.8 GB | |
Sikaworld1990 Gemma-3-12b Abliterated
NVFP4 quantization variants by Sikaworld1990 optimized for Blackwell GPUs.
| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma-3-12b QAT Abliterated FP4 | | 12.1 GB | |
| Gemma-3-12b QAT Abliterated FP4 | | 8.91 GB | |
| Gemma-3-12b HereticX Abliterated | | 15 GB | |
| Gemma-3-12b High-Fidelity Abliterated | | 14.1 GB | |
- FP4-HF: High-fidelity mixed precision calibration
- FP4-Pure: Pure FP4 quantization for maximum compression
- HereticX: Uncensored variant with maximum prompt fidelity
- High-Fidelity: Optimized for quality with better detail preservation
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆
Separated LTX-2 checkpoints by Kijai, with versions for LTX-2.3. An alternative way to load the models in ComfyUI.
Note
input_scaled variants additionally have activation scaling and are set to run with fp8 matmuls on supported hardware (roughly 40xx-series and later NVIDIA GPUs).
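"40xx and later" maps to CUDA compute capability sm_89 (Ada) or newer, which is what fp8 matmul kernels generally require. A minimal sketch of how you might check this before picking an input_scaled file (`supports_fp8_matmul` is a hypothetical helper, not part of any LTX tooling):

```python
import torch

def supports_fp8_matmul(capability=None) -> bool:
    """Heuristic: fp8 matmuls need roughly Ada (sm_89, RTX 40xx) or newer.

    `capability` is a (major, minor) compute-capability tuple; when omitted,
    the current CUDA device is queried via torch.cuda.get_device_capability().
    """
    if capability is None:
        if not torch.cuda.is_available():
            return False
        capability = torch.cuda.get_device_capability()
    return tuple(capability) >= (8, 9)

# RTX 3090 is sm_86 (no fp8 matmul), RTX 4090 is sm_89, Blackwell is newer still.
```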
| Ver | Component | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | Video VAE | | 1.45 GB | |
| 2.3 | Audio VAE | | 365 MB | |
| 2 | Video VAE | | 2.45 GB | |
| 2 | Audio VAE | | 218 MB | |
| Ver | Name | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | Embeddings Connectors dev | | 2.31 GB | |
| 2.3 | Embeddings Connectors distilled | | 2.31 GB | |
| 2 | Connector dev | | 2.86 GB | |
| 2 | Connector distilled | | 2.86 GB | |
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆
- LTX-2.3-IC-LoRA-Colorizer by DoctorDiffusion (331 MB) - Colorize black and white videos
-
- Original by MachineDelusions
- siraxe variant - Stripped audio layers + rank64 compressed (2.62 GB, 655 MB rank64 bf16)
- Lightricks LTX-2.3
- HDR - Enables 16-bit HDR video generation and converts SDR video to HDR using LogC3 transform for extended dynamic range
- Union Control - Unified IC-LoRA combining Canny + Depth + Pose control signals for multi-signal video generation conditioning
- Motion Track Control - Guides object motion using sparse point trajectories via colored spline overlays on reference videos
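The HDR LoRA above relies on the LogC3 transform to carry extended dynamic range through an SDR-shaped signal. As a reference, here is a sketch of the ARRI LogC3 (EI 800) encoding curve; the constants are the commonly published EI 800 values and are illustrative, not taken from the LoRA itself:

```python
import math

# ARRI LogC3 (EI 800) encode constants as commonly published.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def logc3_encode(x: float) -> float:
    """Map a scene-linear value x to its LogC3 (EI 800) encoded signal."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F  # linear segment near black

# 18% grey (0.18 scene-linear) lands near the classic LogC value of ~0.391.
print(round(logc3_encode(0.18), 3))
```

The log segment compresses highlights so values far above 1.0 scene-linear still fit in the encoded range, which is what lets a 16-bit HDR grade be recovered afterwards.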
- vrgamedevgirl84
- oumoumad
- IC luminance map
- LTX-2 IC-LoRA-Ungrade - Removes color grading and contrast from footage, returning neutral ungraded appearance
- LTX-2.3 IC-LoRA-Ungrade - LTX-2.3 version of color grading removal IC-LoRA
- IC-LoRA-Outpaint - Extends video canvas by generating new content in black regions (letterbox areas), filling with temporally consistent content
- IC-LoRA-ReFocus - Removes lens blur and restores focus to out-of-focus footage (lens blur only)
- IC-LoRA-Uncompress - Removes MP4 compression artifacts (blocking, banding, mosquito noise) and restores clean video
- IC-LoRA-MotionDeblur - Removes motion blur from footage
- IC-LoRA-Deinterlace - Removes interlacing artifacts from video
- FXIC LTX2 IC-LoRA - Flux-inspired IC-LoRA for LTX video transformation with multiple optimizer variants (adamw, prodigy, masked) at various training steps
- DeArchive LTX-2.3 - In-Context LoRA for restoring archive video (old B&W footage, low-res web rips, sepia-toned silent-era prints) into colored, high-definition modern cinematography (Rank 128, 5,000 steps)
- Kijai
- LTX2-IC-LoRAs - IC-LoRA trained with the realisdance set
- Cseti
- IC-LoRA-Cameraman v1 - Transfers camera movements (zoom, pan, tilt, orbit) from reference video to generated output
- IC-LoRA-EditRefVid v1 - Edit reference video IC-LoRA for editing existing videos using reference guidance
- 100percentrobot
- Audio-Reactive LORA - Generates audio-reactive videos with motion synchronized to musical elements (beats, rhythm)
- LiconStudio
- VBVR-lora-I2V - Enhances video generation for complex reasoning tasks including multi-object interactions, physical causality, and spatial relationships
- VBVR-lora-I2V Special
- TheBurgstall
- LTX-2.3-Skin-Hair - Refines skin texture and hair rendering, reduces plastic skin artifacts, improves specular highlights
- VR-360-Outpaint IC-LoRA - Outpaints standard widescreen footage into a full 360° equirectangular projection for immersive/VR viewing.
- Nightfury16
- Staging IC-lora 512 - Staging IC-LoRA for video composition control (512 latent scale)
- siraxe
- MergeGreen IC-lora - Maintains motion at start/end frames, use middle frames with RGB 0,191,0 (75% green fill) in IC-LoRA workflow
- TTM IC-lora - Makes cutouts cartoony and adds cartoony characters to video scenes, based on the TTM approach (use with Img To Video bypass + Add Video IC-LoRA Guide node)
- Lightricks LTX-2
- Canny Control - Edge detection control for structural guidance
- Depth Control - Depth map conditioning for 3D spatial control
- Detailer - Enhances fine details and textures in generated videos
- Pose Control - Human pose estimation control for motion guidance
Upscaler LoRAs:
- LTX 2.3 Upscale IC-LoRA by Zlikwid
- Generative refinement LoRA for upscaling lower-res or soft videos
- Works by bicubic upscaling first, then running through LTX 2.3 with this LoRA
- Use prompt: "upscale"
- LTX2.3-ICEdit-Insight by JoyFox Lab
- Task-aware video restoration and editing model family
- Supports: Video Restoration, HD Enhancement, Watermark Removal, Subtitle Removal
- Singularity LTX-2.3 OmniCine by WarmBloodAban
- Comprehensive optimizer for LTX2.3 I2V and First/Last Frame workflows
- Features: Limb Evolution, Shot Injection, Natural Expression, Physical Integrity, Cross-Style Potential
- Uses "Singularity" prompting framework with 7-block bilingual structure
- Cseti
- Arcane-Jinx v1 - Style LoRA inspired by Arcane's Jinx character design
- ReStyle IC-LoRA - Image-guided style transfer IC-LoRA that re-renders videos in a target style while preserving original content and motion
- lopho
- Gantz O v1.0.0 - Movie-style LoRA (654 MB, 10000 steps)
- bionicman69
- Arnold Style - Arnold Style LoRA for LTX 2.3. Get to the choppa!
- Star Trek TNG Style - Star Trek: The Next Generation style LoRA for LTX 2.3
- oumoumad
- kabachuha
- Hydraulic press
- Cakeify
- Big Anime Breasts
- Eat
- Squish – One Hand Only
- POP! Inflatable Animation - Comically inflate and pop cartoon/anime characters into confetti and fabric scraps (I2V focused)
- CRT Animation Terminal by lovis93 - Real late-80s/early-90s CRT monitor look with scanlines, phosphor glow, chromatic aberration, and dithering. Trigger word: "crtanim". Available in 4000 and 10000 training steps variants
- vrgamedevgirl84 Style LoRAs
- ClayMationStyle - Clay animation style LoRA for LTX-2.3
- Wild West Style
- Paper Cut Out Style
- Post Apocalyptic Style
- Pixar Toon Style
- Luxe Sensual Style
- Soft Enhance Style
- Crisp Enhance Style
- Fantasy Puppet Style
- Fantasy Realism Style
- Fantasy Painterly Style
- Fantasy Anime Style
- Cozy Felt Style
- Clay Mation Style
- 90s Animation Style
- Alissonerdx LTX-LoRAs Collection - Comprehensive collection including:
- Anime2Half-Real - Converts anime-style content to half-realistic aesthetic (4500 steps, rank64)
- Edit-Anything Global - Global editing LoRA variants (6000-9000 steps, rank128)
- Inpaint Masked R2V/T2V - Region-based inpainting LoRAs for masked video editing
- Real2Anime/Anime2Real - Style conversion LoRAs (rank64)
- Nebsh
- Squish
- Yoshiaki Kawajiri Retro Anime - LoRA trained on Yoshiaki Kawajiri's distinctive retro anime art style
- Playtime-AI
- DonaldTrump
- Rick_and_Morty - BETA LoRA for Rick and Morty animated style
- LTX-2.3-Wednesday_Addams
- LTX-2.3-Kermit_the_Frog
- LTX-2.3-Jenna_Coleman
- TheBurgstall
- TheBurgstall (LTX-2)
- Black Venom
- Lightricks
- Wan2.1 VAE Adapter
- Latent space adapter for converting between LTX-2 and Wan2.1 VAE representations
- latent_adapter_final.pt (447 MB)
ID-LoRA is a method that enables identity-preserving audio-video generation in a single model. It jointly generates a subject's appearance and voice, letting a text prompt, a reference image, and a short audio clip govern both modalities together. Built on top of LTX-2.3 (22B), it is the first method to personalize visual appearance and voice within a single generative pass.
Unlike cascaded pipelines that treat audio and video separately, ID-LoRA operates in a unified latent space where a single text prompt can simultaneously dictate the scene's visual content, environmental acoustics, and speaking style—while preserving the subject's vocal identity and visual likeness.
Key Features:
- Text prompt controls the scene and content
- Reference image preserves the subject's visual likeness
- Short audio clip preserves the subject's vocal identity
- Single unified generation pass for both appearance and voice
Available LoRAs for LTX-2.3:
| LoRA | LoRA Rank | Size | Download |
|---|---|---|---|
| ID-LoRA-TalkVid-3K | 128 | 1.1 GB | |
| ID-LoRA-CelebVHQ-3K | 128 | 1.1 GB |
Resources:
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆
- 10S-Comfy-nodes by TenStrip - Custom ComfyUI nodes for improving motion quality when working with LTX 2.3's combined audio/video latent pipeline. Includes Latent Cross Fade Auto Concat, Audio Latent Stretch, Latent Motion Sharpener, Latent Temporal Upsampler, Latent Motion Retime, and Latent Temporal Inpainter for clean 30fps output from 24fps sampled models.
- Deno Custom Nodes by Deno2026 - Practical ComfyUI custom nodes focused on fast real-world workflow improvements including (Deno) Resize Box, Multi Image Loader, LTX Sequencer, LTX Model Loader, Easy Model Download Helper, LTX Multi LoRA Loader, and LTX Prompt Guide.
- PromptRelay by kijai - Enables consistent multilingual lip-sync while maintaining voice consistency across languages. Distributes video latent frames across segments with a smart prompt node supporting inline and block syntax styles.
- WhatDreamsCost ComfyUI by WhatDreamsCost - A variety of custom ComfyUI nodes and workflows for creating AI-generated video content including Multi Image Loader, LTX Sequencer, LTX Keyframer, Speech Length Calculator, Load Video UI, and Load Audio UI.
- ComfyUI-Sapiens2 by kijai - ComfyUI nodes for Sapiens2 computer vision models from Facebook Research. Supports pose estimation, body-part segmentation, surface normal estimation, and pointmap estimation with model variants from 400M to 5B parameters.
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆
For training LTX LoRAs, the community uses a variety of official scripts, community-developed forks, and cloud-based platforms.
- Official LTX-2 Trainer: This is the standard Python-based package for training LoRAs, full fine-tuning, and In-Context (IC) LoRAs. It is designed for Linux and requires CUDA and Triton.
- Musubi-Tuner (AkaneTendo25 Fork): Widely considered the fastest and most efficient local trainer for LTX-2 and 2.3. It features significantly smaller cache sizes (up to 12x smaller than AI Toolkit) and better iteration speeds, reaching up to 2 iterations per second on an RTX 5090.
- AI Toolkit (by Ostris): A popular third-party tool that supports LTX-2 character and image-to-video LoRAs. While beginner-friendly, some users reported issues with audio training on the main branch.
- AI Toolkit: BIG-DADDY-VERSION (ArtDesignAwesome Fork): This specific fork was created to fix broken audio and voice training in the original AI Toolkit. It is optimized for hardware like the RTX 5090.
- rs-nodes (richservo): A collection of nodes that includes a full LTX Lora trainer directly within ComfyUI. It is designed to be memory-efficient, allowing training on cards with as little as 11GB-12GB of VRAM by using ComfyUI's native weight loaders.
- Link: rs-nodes ComfyUI Trainer
- SimpleTuner: A highly optimized trainer for Linux that supports LTX-2 and is noted for its ability to handle larger datasets on limited VRAM via block swapping.
- Link: SimpleTuner Repository
- Fal.ai: Provides a dedicated cloud trainer for custom styles and effects, though it is primarily limited to image-based training datasets.
- RunComfy: A cloud service that offers a pre-configured AI Toolkit setup specifically for LTX-2 training.
- Link: RunComfy LTX-2 Training
- Taz's Ultimate Captioning Tool: A Hugging Face space frequently used by the community to generate the long, detailed, cinematographic prompts (around 200 words) that LTX-2 requires for high-quality training.
- AI Video Clipper & LoRA Captioner: A modular pipeline designed to automate local dataset creation using WhisperX and Qwen2-VL, including support for RTX 5090 Blackwell cards.
- Dataset: Videos should typically be cut to 121 frames (exactly 4.84 seconds) to align with the model's architectural "8n+1" rule.
- Hardware: While 16GB VRAM is possible with extreme offloading in tools like rs-nodes, 24GB is the practical minimum for quantized training. For best results and speed, 48GB to 80GB (H100 or RTX 6000) is preferred.
- Precision: It is now officially recommended to train on the full BF16 model for LTX 2.3 rather than FP8 for superior quality.
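The "8n+1" rule above can be sketched as a small helper that snaps a clip duration to a valid frame count (the helper name and the 25 fps default are illustrative assumptions; 121 frames at 25 fps matches the quoted 4.84 seconds, while some LTX workflows sample at 24 fps):

```python
def snap_to_8n_plus_1(seconds: float, fps: float = 25.0) -> int:
    """Snap a clip duration to the nearest frame count of the form 8n + 1."""
    frames = round(seconds * fps)
    n = max(1, round((frames - 1) / 8))
    return 8 * n + 1

# 4.84 s at 25 fps -> 121 frames = 8 * 15 + 1
print(snap_to_8n_plus_1(4.84))
```

Cutting training clips to such lengths keeps them aligned with the model's temporal compression, which operates on blocks of 8 frames plus one anchor frame.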
◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆
- Text to Video Full
- Text to Video Distilled
- Image to Video Full
- Image to Video Distilled
- ICLoRA
- Video to Video
- Video to Video Detailer
vrgamedevgirl84 LTX 2.3 Music Video Creator:
- Music Video Creator Workflow
- Prompt Creator Workflow - Audio upload, beat detection, scene timing, lyrics analysis, style selection, prompt generation
- Text-to-Video Workflow - LoRA integration, advanced prompt controls, Remake Mode, video stitching
- Image-to-Video Workflow - Uses Z-Image Turbo and LTX 2.3
- Requirements: ComfyUI, LTX 2.3 models, Z-Image Turbo model, FFmpeg, vrgamedevgirl custom nodes
- Text-to-video
- Text-to-video Distilled (faster, 8 steps)
- Image-to-video
- Image-to-video Distilled (faster, 8 steps)
- Depth control
- Canny control
- Pose control
RuneXX LTX-2.3 Workflows:
- I2V T2V Basic
- I2V T2V Basic GGUF
- I2V T2V Dev Full-Steps
- I2V T2V Simple Single Pass
- T2V Basic
- T2V Simple Single Pass
Movie-Maker:
- I2V Short-Story PromptRelay-Timeline multi-image multi-sequence
- I2V Short-Story PromptRelay multi-image multi-sequence
- I2V T2V Short-Story PromptRelay-Timeline multi-sequence
- I2V T2V Short-Story PromptRelay multi-sequence
Talking-Avatar-TTS:
- I2V T2V Talking Avatar (Qwen-TTS)
- I2V T2V Talking Avatar (Fish-Audio-Pro)
- I2V T2V Talking Avatar (OmniVoice-TTS)
Video-2-Video:
- V2V Just Talk Prompt Lipsynced Voice
- V2V Just Talk Prompt Lipsynced Voice Sam3
- V2V Just Talk Custom Audio Lip-synced To Any Video
- V2V Dub It lip-synced dubbing multilanguage
- V2V Extend Any Video
- V2V Extend Any Video Multi-Extend Long Video
- V2V Extend Any Video towards Last-Frame-image
- V2V Remove Watermark Subtitles ICEdit-Insight-lora
- V2V Expand Any Video IC-Lora-Outpaint
- V2V Foley Add Sound To Any Video
- V2V ReTake recreate any section of any video
- V2V Video-Edit remove add replace restyle EditAnything-Lora
- V2V High Dynamic Range IC-HDR-lora
Others:
Custom-Audio:
First-Last-Frame:
- First-Last-Frame Workflows
- FLF2V First Last Frame
- FLF2V First Last Frame Custom Audio
- FLF2V First Last Frame Transition LoRA
- FML2V First Middle Last Frame Guider
- FML2V First Middle Last Frame Injection
- FML2V Guider Custom Audio
Long-Video-Experimental:
- I2V T2V Long Video Custom Audio Loop
- I2V T2V Long Video Custom Audio
- I2V T2V Long Video Custom Audio singlepass loop
3-Pass-Experimental:
Control-reference:
- I2V TV2V Transfer Camera Movements IC-Cameraman LoRA
- I2V TV2V Transfer Body Movements IC-Union-Control-lora DWPose
- I2V TV2V Transfer Body Movements IC-Union-Control-lora SDPose
- I2V TV2V Transfer Body Movements IC-RealisDance-lora
Music-Video-Creator:
- I2V T2V Music-Video-Creator Multi-Scene Custom Audio
- I2V T2V Music-Video-Creator Multi-Scene Custom Audio Low RAM
Helper-Workflows:
- AceStep-XL Create Music From a Prompt
- Flux-Klein Transform Firstframe
- Qwen-Image Transform Firstframe Next Scene or Different Angle LoRA
Other-examples:
RuneXX LTX-2 Workflows old pre_feb2026
- First Last Frame (guide node)
- First Last Frame (in-place node)
- First Middle Last Frame (guide node)
- I2V Basic (GGUF)
- I2V Basic
- I2V IC-Control (pose)
- I2V Simple First Middle Last Frame (1-pass K-Sampler)
- I2V Talking Avatar (voice clone Qwen-TTS)
- I2V and T2V (beta test sampler previews)
- I2V and T2V Basic (Custom Audio)
- I2V and T2V IC-Control (All-In-One Pose Canny Depth)
- I2V and T2V Simple (1-pass K-Sampler)
- I2V and T2V Simple (1-pass)
- T2V Basic (GGUF)
- T2V Basic (low vram)
- T2V Basic
- T2V Talking Avatar (voice clone Qwen-TTS)
- V2A Foley (add sound to any video)
- V2V (extend any video)
- V2V Head Swap Experimental (BFS lora)
- V2V Just Dub It (experimental)(translate speech auto dubbing)
- V2V Just Dub It (with voice clone)(auto dubbing translation)(experimental)
