wildminder/awesome-ltx2

Awesome LTX-2

A curated list of models, text encoders, and tools for the LTX-2 video generation suite.


Intro

▓ Apps & Tools

LTX2.3-Multifunctional

LTX2.3-Multifunctional is a desktop-optimized version of LTX that lowers GPU requirements and simplifies usage. It integrates all features including image-to-video, text-to-video, start/end frames, lip-sync, video enhancement, and image generation into a single application.

Key Features:

  • Lower GPU Requirements: Only needs 24GB VRAM (vs 32GB for standard desktop version)
  • All-in-One Interface: No complex ComfyUI workflows or error-prone nodes
  • Features: T2V, I2V, start/end frames, lip-sync, video enhancement, image generation, LoRA support
  • Multi-Frame Insertion: Two modes for generating long videos
  • Easy Setup: No third-party software required, just install LTX desktop

Downloads & Resources:

▓ Models

LTX-2 models are available in various formats including full weights, transformers-only, and GGUF quantizations for efficient inference.

▣ Checkpoints

| Ver | Name | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | ltx-2.3-22b dev | bf16 | 46.1 GB | |
| 2.3 | ltx-2.3-22b dev | fp8 | 29.1 GB | |
| 2.3 | ltx-2.3-22b dev | fp8 | 29.9 GB | |
| 2.3 | ltx-2.3-22b dev | int8 | 29.1 GB | |
| 2.3 | ltx-2.3-22b dev | nvfp4 | 21.7 GB | |
| 2.3 | ltx-2.3-22b dev | fp8 | 29.1 GB | |
| 2.3 | ltx-2.3-22b distilled | bf16 | 46.1 GB | |
| 2.3 | ltx-2.3-22b distilled | fp8 | 29.5 GB | |
| 2.3 | ltx-2.3-22b distilled | fp8 | 29.9 GB | |
| 2.3 | ltx-2.3-22b distilled | int8tensormixed | 29.1 GB | |
| 2.3 | ltx-2.3-22b distilled | nvfp4 | 17.6 GB | |
| 2.3 | ltx-2.3-22b distilled | mxfp8mixed | 29.7 GB | |
| 2.3 | ltx-2.3-22b distilled 1.1 | bf16 | 46.1 GB | |
| 2 | ltx-2-19b dev | bf16 | 43.3 GB | |
| 2 | ltx-2-19b dev | fp8 | 27.1 GB | |
| 2 | ltx-2-19b dev | fp4 | 20 GB | |
| 2 | ltx-2-19b distilled | bf16 | 43.3 GB | |
| 2 | ltx-2-19b distilled | fp8 | 27.1 GB | |
| 2 | ltx-2-19b distilled | nvfp4 | 20 GB | |

Quantized to fp8_e5m2 to support older Triton and PyTorch versions on RTX 30-series GPUs. For WanGP in Pinokio.

| Ver | Name | Precision | Size | Download |
|---|---|---|---|---|
| 2 | ltx-2-19b dev | fp8_e5m2 | 27.1 GB | |

silveroxides Quantizations (mxfp8)

Note: The mxfp8mixed quantization requires a custom fork of ComfyUI-Kitchen with mxfp8 support. Standard ComfyUI installations may not support this quantization format.

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b-dev | int8mixedtensorwise | 29.2 GB | |
| ltx-2.3-22b-distilled | int8tensormixed | 29.1 GB | |
| ltx-2.3-22b-distilled | int8mixedtensorwise | 29.2 GB | |
| ltx-2.3-22b-distilled | mxfp8mixed | 29.7 GB | |

Distilled LoRA

| Ver | Rank | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | 384 | bf16 | 7.61 GB | |
| 2.3 | 208 | bf16 | 4.97 GB | |
| 2.3 | 159 | bf16 | 3.83 GB | |
| 2.3 | 111 | bf16 | 2.74 GB | |
| 2.3 | 105 | bf16 | 2.59 GB | |
| 2 | 384 | bf16 | 7.67 GB | |
| 2 | 242 | bf16 | 4.88 GB | |
| 2 | 175 | bf16 | 3.58 GB | |
| 2 | 175 | fp8 | 1.79 GB | |

▣ TenStrip Distilled LoRA Experiments

Experimental distilled LoRAs optimized for finetunes and I2V workflows. They avoid the pitfalls of the official rank-384 LoRA, which can be counterproductive with conditioned inputs and finetunes.

| LoRA | Rank | Size | Description |
|---|---|---|---|
| ltx-2.3-22b-distilled-lora-1.1_fro90_ceil36 | 36 | 739 MB | Compact LoRA with dynamic ceiling at 36 |
| ltx-2.3-22b-distilled-lora-1.1_fro90_ceil72_condsafe | 72 | 662 MB | Cond-safe version with cross-attention bridges, adaln/scale-shift tables, gate logits, and prompt scale-shift zeroed. Much better suited for I2V and input-conditioned workflows. Can safely use 1.0 strength on a first I2V pass. |
| ltx-2.3-22b-distilled-lora-fro90_ceil72 | 72 | 1.4 GB | Standard version with higher dynamic ceiling |

Notes:

  • Lower rank LoRAs (72 and below) can be used at 1.0 strength safely for I2V first pass, with upscale pass at 0.4-0.5 strength
  • _ceil suffix indicates the dynamic ceiling during reranking
  • _condsafe suffix indicates cross-attention and other conditioning layers have been zeroed for better I2V compatibility
  • The official rank 384 LoRA can actively dampen conditioning signals in I2V workflows; cond_safe versions work much better

Download All LoRAs

Spatial Upscaler

Required for current two-stage pipeline implementations in this repository. Download to the `COMFYUI_ROOT_FOLDER/models/latent_upscale_models` folder.

| Ver | Name | Size | Download |
|---|---|---|---|
| 2.3 | spatial-upscaler x2 1.0 | 996 MB | |
| 2.3 | spatial-upscaler x1.5 1.0 | 1.09 GB | |
| 2 | spatial-upscaler x2 1.0 | 1.05 GB | |

Temporal Upscaler

Required for current two-stage pipeline implementations in this repository. Download to the `COMFYUI_ROOT_FOLDER/models/latent_upscale_models` folder.

| Ver | Name | Size | Download |
|---|---|---|---|
| 2.3 | temporal-upscaler x2 1.0 | 262 MB | |
| 2 | temporal-upscaler x2 1.0 | 262 MB | |

▣ Merges

Custom merged models combining multiple control signals or specialized configurations.

| Ver | Name | Description | Download |
|---|---|---|---|
| 2.3 | ltx-2.3-22b-distilled-1.1-fused-union-control | Merged model combining Canny, Depth, and Pose control signals for unified control | |

══════════════════════════════════

▣ Finetunes

Community finetuned models based on LTX-2.3 with specialized improvements and optimizations.

| Model | Description |
|---|---|
| | High-performance LoRA-integrated checkpoint family based on LTX 2.3. Includes both distilled (4-step) and non-distilled variants (20-30 steps). Recommended sampler: Euler + Simple/Normal/Linear_Quadratic. |
| | I2V-optimized merge using layer-scaled merges at different steps. Not a straight weight merge: it behaves much more nicely than standard LoRA loading and respects prompts better. Includes BF16 full checkpoint and fp8_mixed_learned quantized versions. |
| | Uncensored video generation model based on LTX 2.3 supporting T2V and I2V natively. Includes a built-in prompt enhancer. Merge base for 10Eros. Supports GGUF format. |

══════════════════════════════════

▣ GGUF Quantized Models

These models are optimized for lower memory usage. Note that in ComfyUI, these are typically loaded as transformer-only models.

QuantStack

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b | Q2_K | 12.4 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q3_K_M | 14.7 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q3_K_S | 14 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q4_K_M | 17.8 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q4_K_S | 16.7 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q5_K_M | 19.4 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q5_K_S | 18.5 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q6_K | 21 GB | dev / distilled / distilled-1.1 |
| ltx-2.3-22b | Q8_0 | 25.5 GB | dev / distilled / distilled-1.1 |

| Model | Quant | Size | Download |
|---|---|---|---|
| LTX-2-dev | Q2_K | 8.03 GB | |
| LTX-2-dev | Q3_K_M | 10.3 GB | |
| LTX-2-dev | Q3_K_S | 9.57 GB | |
| LTX-2-dev | Q4_K_M | 13.4 GB | |
| LTX-2-dev | Q4_K_S | 12.3 GB | |
| LTX-2-dev | Q5_K_M | 15 GB | |
| LTX-2-dev | Q5_K_S | 14.2 GB | |
| LTX-2-dev | Q6_K | 16.6 GB | |
| LTX-2-dev | Q8_0 | 21.1 GB | |
Unsloth

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b | BF16 | 42 GB | dev / distilled |
| ltx-2.3-22b | F16 | 42 GB | dev / distilled |
| ltx-2.3-22b | Q2_K | 8.28 GB | dev / distilled |
| ltx-2.3-22b | Q3_K_M | 10.8 GB | dev / distilled |
| ltx-2.3-22b | Q3_K_S | 9.95 GB | dev / distilled |
| ltx-2.3-22b | Q4_0 | 12.7 GB | dev / distilled |
| ltx-2.3-22b | Q4_1 | 13.8 GB | dev / distilled |
| ltx-2.3-22b | Q4_K_M | 14.3 GB | dev / distilled |
| ltx-2.3-22b | Q4_K_S | 13.1 GB | dev / distilled |
| ltx-2.3-22b | Q5_0 | 15.3 GB | dev / distilled |
| ltx-2.3-22b | Q5_1 | 16.3 GB | dev / distilled |
| ltx-2.3-22b | Q5_K_M | 16.1 GB | dev / distilled |
| ltx-2.3-22b | Q5_K_S | 15.2 GB | dev / distilled |
| ltx-2.3-22b | Q6_K | 17.8 GB | dev / distilled |
| ltx-2.3-22b | Q8_0 | 22.8 GB | dev / distilled |
| ltx-2.3-22b | UD-Q2_K | 9.5 GB | dev / distilled |
| ltx-2.3-22b | UD-Q3_K_M | 13.5 GB | dev / distilled |
| ltx-2.3-22b | UD-Q3_K_S | 11.4 GB | dev / distilled |
| ltx-2.3-22b | UD-Q4_K_M | 16.5 GB | dev / distilled |
| ltx-2.3-22b | UD-Q4_K_S | 14.2 GB | dev / distilled |
| ltx-2.3-22b | UD-Q5_K_M | 18.3 GB | dev / distilled |
| ltx-2.3-22b | UD-Q5_K_S | 16.3 GB | dev / distilled |

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2.3-22b | BF16 | 42 GB | distilled-1.1 |
| ltx-2.3-22b | F16 | 42 GB | distilled-1.1 |
| ltx-2.3-22b | Q2_K | 7.94 GB | distilled-1.1 |
| ltx-2.3-22b | Q3_K_M | 10.6 GB | distilled-1.1 |
| ltx-2.3-22b | Q3_K_S | 9.74 GB | distilled-1.1 |
| ltx-2.3-22b | Q4_K_M | 14.2 GB | distilled-1.1 |
| ltx-2.3-22b | Q4_K_S | 13 GB | distilled-1.1 |
| ltx-2.3-22b | Q5_K_M | 15.9 GB | distilled-1.1 |
| ltx-2.3-22b | Q5_K_S | 15 GB | distilled-1.1 |
| ltx-2.3-22b | Q6_K | 17.8 GB | distilled-1.1 |
| ltx-2.3-22b | Q8_0 | 22.8 GB | distilled-1.1 |
| ltx-2.3-22b | UD-Q2_K | 10.9 GB | distilled-1.1 |
| ltx-2.3-22b | UD-Q3_K_M | 13.4 GB | distilled-1.1 |
| ltx-2.3-22b | UD-Q4_K_M | 16.4 GB | distilled-1.1 |
| ltx-2.3-22b | UD-Q4_K_S | 14.1 GB | distilled-1.1 |
| ltx-2.3-22b | UD-Q5_K_M | 18.2 GB | distilled-1.1 |

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2-19b-dev | BF16 | 37.8 GB | |
| ltx-2-19b-dev | F16 | 37.8 GB | |
| ltx-2-19b-dev | UD-Q2_K_L | 10.1 GB | |
| ltx-2-19b-dev | UD-Q2_K_XL | 11.6 GB | |
| ltx-2-19b-dev | Q2_K | 8.1 GB | |
| ltx-2-19b-dev | Q3_K_L | 10.7 GB | |
| ltx-2-19b-dev | Q3_K_M | 10.1 GB | |
| ltx-2-19b-dev | Q3_K_S | 9.47 GB | |
| ltx-2-19b-dev | Q4_0 | 11.3 GB | |
| ltx-2-19b-dev | Q4_1 | 12.3 GB | |
| ltx-2-19b-dev | Q4_K_M | 12.8 GB | |
| ltx-2-19b-dev | Q4_K_S | 11.9 GB | |
| ltx-2-19b-dev | Q5_0 | 13.7 GB | |
| ltx-2-19b-dev | Q5_1 | 14.6 GB | |
| ltx-2-19b-dev | Q5_K_M | 14.3 GB | |
| ltx-2-19b-dev | Q5_K_S | 13.6 GB | |
| ltx-2-19b-dev | Q6_K | 16 GB | |
| ltx-2-19b-dev | Q8_0 | 20.4 GB | |
Vantage

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2-19b-dev | Q3_K_M | 9.96 GB | |
| ltx-2-19b-dev | Q3_K_S | 9.28 GB | |
| ltx-2-19b-dev | Q4_0 | 11.6 GB | |
| ltx-2-19b-dev | Q4_1 | 12.4 GB | |
| ltx-2-19b-dev | Q4_K_M | 12.8 GB | |
| ltx-2-19b-dev | Q4_K_S | 11.8 GB | |
| ltx-2-19b-dev | Q5_0 | 13.6 GB | |
| ltx-2-19b-dev | Q5_1 | 14.5 GB | |
| ltx-2-19b-dev | Q5_K_M | 14.4 GB | |
| ltx-2-19b-dev | Q5_K_S | 13.5 GB | |
| ltx-2-19b-dev | Q6_K | 15.9 GB | |
| ltx-2-19b-dev | Q8_0 | 20.4 GB | |
| ltx-2-19b-distilled | Q3_K_M | 9.96 GB | |
| ltx-2-19b-distilled | Q3_K_S | 9.28 GB | |
| ltx-2-19b-distilled | Q4_0 | 11.6 GB | |
| ltx-2-19b-distilled | Q4_1 | 12.4 GB | |
| ltx-2-19b-distilled | Q4_K_M | 12.8 GB | |
| ltx-2-19b-distilled | Q4_K_S | 11.8 GB | |
| ltx-2-19b-distilled | Q5_0 | 13.6 GB | |
| ltx-2-19b-distilled | Q5_1 | 14.5 GB | |
| ltx-2-19b-distilled | Q5_K_M | 14.4 GB | |
| ltx-2-19b-distilled | Q5_K_S | 13.5 GB | |
| ltx-2-19b-distilled | Q6_K | 15.9 GB | |
| ltx-2-19b-distilled | Q8_0 | 20.4 GB | |

Special Quantization: PolarQuant Q5

LTX-2.3 (22B) — PolarQuant Q5 is a bit-packed quantization method using Hadamard-Rotated Lloyd-Max Quantization. It achieves optimal Gaussian weight quantization via Hadamard rotation, delivering near-lossless quality with significant size reduction.

| Specification | Value |
|---|---|
| Parameters | 22B |
| Transformer Blocks | 48 |
| Hidden Dimension | 4096 |
| Layers Quantized | 1,347 (of 5,947 total tensors) |

Compression Statistics:

| Component | Original Size | PQ5 Packed | Reduction |
|---|---|---|---|
| Transformer (1,347 layers) | 37 GB | 4.6 GB | -88% |
| VAE + Skip (4,600 layers) | 9.1 GB | 9.1 GB | BF16 kept |
| Upscalers | 1.3 GB | 1.3 GB | BF16 kept |
| Total | 46.2 GB | 15 GB | -68% |

Quality Metrics:

  • Cosine Similarity: 0.9986 (near-lossless)
  • Download Size: 15 GB
  • Beats torchao INT4 on perplexity (PPL)
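
The cosine-similarity figure above measures how closely dequantized weights track the originals. A minimal sketch of how such a fidelity check is typically computed (numpy, with a random stand-in tensor; `fake_quant_roundtrip` is a hypothetical uniform 5-bit quantizer standing in for the actual PolarQuant pack/unpack pipeline):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two weight tensors, flattened."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fake_quant_roundtrip(w: np.ndarray, bits: int = 5) -> np.ndarray:
    """Stand-in for the real pack/unpack: uniform per-tensor quantization."""
    levels = 2 ** bits - 1
    scale = (w.max() - w.min()) / levels
    return np.round((w - w.min()) / scale) * scale + w.min()

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)
sim = cosine_similarity(w, fake_quant_roundtrip(w))
print(f"cosine similarity: {sim:.4f}")  # close to 1.0 for 5-bit quantization
```

A value near 1.0, as reported for PolarQuant, means the quantized weights point in almost the same direction as the originals; note that a plain uniform quantizer like this sketch scores noticeably lower than the Lloyd-Max approach the model actually uses.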

Hardware Requirements:

| GPU | VRAM | Status |
|---|---|---|
| A100 (80 GB) | 80 GB | Full speed |
| A100 (40 GB) | 40 GB | Recommended |
| RTX 4090 (24 GB) | 24 GB | With offloading |

Key Features:

  • Mixed precision approach: transformer heavily quantized (-88%) while VAE remains BF16
  • 5-bit bit-packed representation (Q5)
  • 50-65% smaller than original with zero quality loss
  • One-command setup with easy generation wrapper

| Model | Size | Download |
|---|---|---|
| LTX-2.3-22B-PolarQuant-Q5 | 15 GB | |

Installation: `pip install safetensors huggingface_hub scipy`

ArXiv Reference: 2603.29078

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

▓ Text Encoders

LTX-2 requires Gemma-3-12b variants as its text encoder. LTX-2.3 additionally uses text projection layers.

Comfy-Org Optimized Encoders

Official and optimized versions for ComfyUI.

| Model Name | Size | Download |
|---|---|---|
| gemma_3_12B_it | 24.4 GB | |
| gemma_3_12B_it_fpmixed | 13.7 GB | |
| gemma_3_12B_it_fp8_scaled | 13.2 GB | |
| gemma_3_12B_it_fp4_mixed | 9.5 GB | |
| gemma_3_12B_it-int8tensormixed | 13.2 GB | |
| gemma_3_12B_it-int8mixedblockwise | 13.6 GB | |
| gemma_3_12B_it-int8mixedtensorwise | 14.1 GB | |
| text_projection_fp8 | 1.16 GB | |

  • gemma_3_12B_it_fpmixed: Experimental quant; should be better than fp8_scaled
  • gemma_3_12B_it_fp4_mixed: 90% fp4 layers


Gemma-3-12b Abliterated

Why Choose Abliterated Encoders?

Standard Gemma models often incorporate safety alignment that "sanitizes" or weakens specific concepts within prompt embeddings. Even when the model doesn't explicitly refuse a request, this internal filtering can dilute creative intent. For LTX-2 video generation, using a standard encoder often results in:

  • Reduced Prompt Adherence: Key stylistic or descriptive terms may be ignored or weakened.
  • Visual Softening: Visual intensity and fine details are often "muted" to fit generic safety profiles.
  • Concept Dilution: Complex or niche creative requests are subtly altered, leading to less faithful representations of your vision.

Abliteration bypasses these restrictive alignment layers, allowing the encoder to translate your prompts into embeddings with maximum fidelity. This ensures LTX-2 receives the most accurate and un-filtered instructions possible.

Gemma-3-12b-Abliterated

Fixed versions of the abliterated Gemma-3-12b-it model by FusionCow, modified specifically for compatibility with LTX-2. The original model

| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma ablit fixed | bf16 | 23.5 GB | |
| Gemma ablit fixed | fp8 | 13.8 GB | |

Gemma 3 12B IT Heretic

Models by DreamFast

Safetensors

| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma_3_12B_it Heretic | bf16 | 23.5 GB | |
| Gemma_3_12B_it Heretic | fp8 | 12.8 GB | |

GGUF

| Quant | Size | Quality | Recommendation | Download |
|---|---|---|---|---|
| F16 | 22 GB | Lossless | Reference, same as original | |
| Q8_0 | 12 GB | Excellent | Best quality quantization | |
| Q6_K | 9.0 GB | Very Good | High quality, good compression | |
| Q5_K_M | 7.9 GB | Good | Balanced quality/size | |
| Q5_K_S | 7.7 GB | Good | Slightly smaller Q5 | |
| Q4_K_M | 6.8 GB | Good | Still useful | |
| Q4_K_S | 6.5 GB | Decent | Smaller Q4 variant | |
| Q3_K_M | 5.6 GB | Acceptable | For very low VRAM only | |

Sikaworld1990 Gemma-3-12b Abliterated

NVFP4 quantization variants by Sikaworld1990 optimized for Blackwell GPUs.

| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma-3-12b QAT Abliterated FP4 | NVFP4-HF | 12.1 GB | |
| Gemma-3-12b QAT Abliterated FP4 | NVFP4-Pure | 8.91 GB | |
| Gemma-3-12b HereticX Abliterated | bf16 | 15 GB | |
| Gemma-3-12b High-Fidelity Abliterated | bf16 | 14.1 GB | |

  • FP4-HF: High-fidelity mixed-precision calibration
  • FP4-Pure: Pure FP4 quantization for maximum compression
  • HereticX: Uncensored variant with maximum prompt fidelity
  • High-Fidelity: Optimized for quality with better detail preservation

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

▓ Separated Components

Separated LTX-2 checkpoints by Kijai for LTX-2.3, offering an alternative way to load the models in ComfyUI.

▣ Diffusion Models (Transformer Only)

| Ver | Name | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | ltx-2.3-22b dev | bf16 | 42 GB | |
| 2.3 | ltx-2.3-22b dev | fp8 | 23.5 GB | |
| 2.3 | ltx-2.3-22b dev | mxfp8_block32 | 24.1 GB | |
| 2.3 | ltx-2.3-22b dev | fp8_input_scaled | 25 GB | |
| 2.3 | ltx-2.3-22b distilled | bf16 | 42 GB | |
| 2.3 | ltx-2.3-22b distilled | fp8_input_scaled | 23.5 GB | |
| 2.3 | ltx-2.3-22b distilled v2 | fp8_input_scaled | 23.2 GB | |
| 2.3 | ltx-2.3-22b distilled | fp8 | 23.5 GB | |
| 2.3 | ltx-2.3-22b distilled (experimental) | mxfp8 | 24.1 GB | |
| 2.3 | ltx-2.3-22b distilled 1.1 | bf16 | 42 GB | |
| 2.3 | ltx-2.3-22b distilled 1.1 | fp8 | 25.2 GB | |
| 2.3 | ltx-2.3-22b distilled 1.1 (experimental) | mxfp8 | 24.1 GB | |
| 2 | ltx-2-19b dev | bf16 | 37.8 GB | |
| 2 | ltx-2-19b dev | fp8 | 21.6 GB | |
| 2 | ltx-2-19b dev | fp4 | 14.5 GB | |
| 2 | ltx-2-19b distilled | bf16 | 37.8 GB | |
| 2 | ltx-2-19b distilled | fp8 | 21.6 GB | |

Note

The `input_scaled` variants additionally include activation scaling and are set to run with fp8 matmuls on supported hardware (roughly RTX 40-series and later NVIDIA GPUs).
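
As an illustration of what such scaling does, a minimal numpy sketch (the only fp8 detail assumed is fp8_e4m3's finite range of ±448; real loaders store the scale alongside the tensor and let hardware fp8 matmuls apply it):

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in fp8_e4m3

def scale_to_fp8_range(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Scale a tensor so its values fit the fp8_e4m3 dynamic range.

    Returns the scaled tensor and the per-tensor scale needed to undo
    the scaling. The scaled tensor would then be cast to fp8 for the
    matmul, and the result multiplied back by the scales afterwards.
    """
    amax = float(np.abs(x).max())
    scale = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    return x / scale, scale

rng = np.random.default_rng(0)
activations = rng.standard_normal((8, 1024)).astype(np.float32) * 1000.0
scaled, scale = scale_to_fp8_range(activations)

assert np.abs(scaled).max() <= FP8_E4M3_MAX
# Multiplying back by `scale` recovers the original values.
assert np.allclose(scaled * scale, activations)
```

Without this step, activations whose magnitude exceeds the fp8 range would overflow to infinity when cast, which is why the `input_scaled` checkpoints ship the extra activation scales.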

▣ VAE (Video & Audio)

| Ver | Component | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | Video VAE | BF16 | 1.45 GB | |
| 2.3 | Audio VAE | BF16 | 365 MB | |
| 2 | Video VAE | BF16 | 2.45 GB | |
| 2 | Audio VAE | BF16 | 218 MB | |

▣ Embedding Connectors & Text Projection

| Ver | Name | Precision | Size | Download |
|---|---|---|---|---|
| 2.3 | Embeddings Connectors dev | bf16 | 2.31 GB | |
| 2.3 | Embeddings Connectors distilled | bf16 | 2.31 GB | |
| 2 | Connector dev | bf16 | 2.86 GB | |
| 2 | Connector distilled | bf16 | 2.86 GB | |

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

▓ LoRA

▣ Enhancer, Special

  • LTX-2.3-IC-LoRA-Colorizer by DoctorDiffusion (331 MB) - Colorize black and white videos

  • JUST-DUB-IT

  • Best-Face-Swap-Video

  • Image-to-Video Adapter LoRA

    • Original by MachineDelusions
    • siraxe variant - Stripped audio layers + rank64 compressed (2.62 GB, 655 MB rank64 bf16)
  • Lightricks LTX-2.3

    • HDR - Enables 16-bit HDR video generation and converts SDR video to HDR using LogC3 transform for extended dynamic range
    • Union Control - Unified IC-LoRA combining Canny + Depth + Pose control signals for multi-signal video generation conditioning
    • Motion Track Control - Guides object motion using sparse point trajectories via colored spline overlays on reference videos
  • vrgamedevgirl84

  • oumoumad

    • IC luminance map
    • LTX-2 IC-LoRA-Ungrade - Removes color grading and contrast from footage, returning neutral ungraded appearance
    • LTX-2.3 IC-LoRA-Ungrade - LTX-2.3 version of color grading removal IC-LoRA
    • IC-LoRA-Outpaint - Extends video canvas by generating new content in black regions (letterbox areas), filling with temporally consistent content
    • IC-LoRA-ReFocus - Removes lens blur and restores focus to out-of-focus footage (lens blur only)
    • IC-LoRA-Uncompress - Removes MP4 compression artifacts (blocking, banding, mosquito noise) and restores clean video
    • IC-LoRA-MotionDeblur - Removes motion blur from footage
    • IC-LoRA-Deinterlace - Removes interlacing artifacts from video
    • FXIC LTX2 IC-LoRA - Flux-inspired IC-LoRA for LTX video transformation with multiple optimizer variants (adamw, prodigy, masked) at various training steps
    • DeArchive LTX-2.3 - In-Context LoRA for restoring archive video (old B&W footage, low-res web rips, sepia-toned silent-era prints) into colored, high-definition modern cinematography (Rank 128, 5,000 steps)
  • Kijai

  • Cseti

    • IC-LoRA-Cameraman v1 - Transfers camera movements (zoom, pan, tilt, orbit) from reference video to generated output
    • IC-LoRA-EditRefVid v1 - Edit reference video IC-LoRA for editing existing videos using reference guidance
  • 100percentrobot

    • Audio-Reactive LORA - Generates audio-reactive videos with motion synchronized to musical elements (beats, rhythm)
  • LiconStudio

    • VBVR-lora-I2V - Enhances video generation for complex reasoning tasks including multi-object interactions, physical causality, and spatial relationships
    • VBVR-lora-I2V Special
  • TheBurgstall

    • LTX-2.3-Skin-Hair - Refines skin texture and hair rendering, reduces plastic skin artifacts, improves specular highlights
    • VR-360-Outpaint IC-LoRA - Outpaints standard widescreen footage into a full 360° equirectangular projection for immersive/VR viewing.
  • Nightfury16

  • siraxe

    • MergeGreen IC-lora - Maintains motion at start/end frames, use middle frames with RGB 0,191,0 (75% green fill) in IC-LoRA workflow
    • TTM IC-lora - Makes cutouts cartoony and adds cartoony characters to video scenes, based on the TTM approach (use with Img To Video bypass + Add Video IC-LoRA Guide node)
  • Lightricks LTX-2

    • Canny Control - Edge detection control for structural guidance
    • Depth Control - Depth map conditioning for 3D spatial control
    • Detailer - Enhances fine details and textures in generated videos
    • Pose Control - Human pose estimation control for motion guidance

Upscaler LoRAs:

  • LTX 2.3 Upscale IC-LoRA by Zlikwid
    • Generative refinement LoRA for upscaling lower-res or soft videos
    • Works by bicubic upscaling first, then running through LTX 2.3 with this LoRA
    • Use prompt: upscale
  • LTX2.3-ICEdit-Insight by JoyFox Lab
    • Task-aware video restoration and editing model family
    • Supports: Video Restoration, HD Enhancement, Watermark Removal, Subtitle Removal
  • Singularity LTX-2.3 OmniCine by WarmBloodAban
    • Comprehensive optimizer for LTX2.3 I2V and First/Last Frame workflows
    • Features: Limb Evolution, Shot Injection, Natural Expression, Physical Integrity, Cross-Style Potential
    • Uses "Singularity" prompting framework with 7-block bilingual structure

▣ Styles

▣ Special

  • Wan2.1 VAE Adapter
    • Latent space adapter for converting between LTX-2 and Wan2.1 VAE representations
    • latent_adapter_final.pt (447 MB)

▣ ID-LoRA (Identity-Driven In-Context LoRA)

ID-LoRA is a method that enables identity-preserving audio-video generation in a single model. It jointly generates a subject's appearance and voice, letting a text prompt, a reference image, and a short audio clip govern both modalities together. Built on top of LTX-2.3 (22B), it is the first method to personalize visual appearance and voice within a single generative pass.

Unlike cascaded pipelines that treat audio and video separately, ID-LoRA operates in a unified latent space where a single text prompt can simultaneously dictate the scene's visual content, environmental acoustics, and speaking style—while preserving the subject's vocal identity and visual likeness.

Key Features:

  • Text prompt controls the scene and content
  • Reference image preserves the subject's visual likeness
  • Short audio clip preserves the subject's vocal identity
  • Single unified generation pass for both appearance and voice

Available LoRAs for LTX-2.3:

| LoRA | Rank | Size | Download |
|---|---|---|---|
| ID-LoRA-TalkVid-3K | 128 | 1.1 GB | |
| ID-LoRA-CelebVHQ-3K | 128 | 1.1 GB | |

Resources:

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

▓ ComfyUI Nodes

▣ Custom Node Collections

  • 10S-Comfy-nodes by TenStrip - Custom ComfyUI nodes for improving motion quality when working with LTX 2.3's combined audio/video latent pipeline. Includes Latent Cross Fade Auto Concat, Audio Latent Stretch, Latent Motion Sharpener, Latent Temporal Upsampler, Latent Motion Retime, and Latent Temporal Inpainter for clean 30fps output from 24fps sampled models.

  • Deno Custom Nodes by Deno2026 - Practical ComfyUI custom nodes focused on fast real-world workflow improvements including (Deno) Resize Box, Multi Image Loader, LTX Sequencer, LTX Model Loader, Easy Model Download Helper, LTX Multi LoRA Loader, and LTX Prompt Guide.

  • PromptRelay by kijai - Enables consistent multilingual lip-sync while maintaining voice consistency across languages. Distributes video latent frames across segments with smart prompt node supporting inline and block syntax styles.

  • WhatDreamsCost ComfyUI by WhatDreamsCost - A variety of custom ComfyUI nodes and workflows for creating AI-generated video content including Multi Image Loader, LTX Sequencer, LTX Keyframer, Speech Length Calculator, Load Video UI, and Load Audio UI.

  • ComfyUI-Sapiens2 by kijai - ComfyUI nodes for Sapiens2 computer vision models from Facebook Research. Supports pose estimation, body-part segmentation, surface normal estimation, and pointmap estimation with model variants from 400M to 5B parameters.

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

▓ LoRA Training

For training LTX LoRAs, the community uses a variety of official scripts, community-developed forks, and cloud-based platforms.

Primary Local Training Tools

  • Official LTX-2 Trainer: This is the standard Python-based package for training LoRAs, full fine-tuning, and In-Context (IC) LoRAs. It is designed for Linux and requires CUDA and Triton.
  • Musubi-Tuner (AkaneTendo25 Fork): Widely considered the fastest and most efficient local trainer for LTX-2 and 2.3. It features significantly smaller cache sizes (up to 12x smaller than AI Toolkit) and better iteration speeds, reaching up to 2 iterations per second on an RTX 5090.
  • AI Toolkit (by Ostris): A popular third-party tool that supports LTX-2 character and image-to-video LoRAs. While beginner-friendly, some users reported issues with audio training on the main branch.
  • AI Toolkit: BIG-DADDY-VERSION (ArtDesignAwesome Fork): This specific fork was created to fix broken audio and voice training in the original AI Toolkit. It is optimized for hardware like the RTX 5090.
  • rs-nodes (richservo): A collection of nodes that includes a full LTX Lora trainer directly within ComfyUI. It is designed to be memory-efficient, allowing training on cards with as little as 11GB-12GB of VRAM by using ComfyUI's native weight loaders.
  • SimpleTuner: A highly optimized trainer for Linux that supports LTX-2 and is noted for its ability to handle larger datasets on limited VRAM via block swapping.

Cloud Training Platforms

  • Fal.ai: Provides a dedicated cloud trainer for custom styles and effects, though it is primarily limited to image-based training datasets.
  • RunComfy: A cloud service that offers a pre-configured AI Toolkit setup specifically for LTX-2 training.

Essential Dataset & Captioning Tools

  • Taz's Ultimate Captioning Tool: A Hugging Face space frequently used by the community to generate the long, detailed, cinematographic prompts (around 200 words) that LTX-2 requires for high-quality training.
  • AI Video Clipper & LoRA Captioner: A modular pipeline designed to automate local dataset creation using WhisperX and Qwen2-VL, including support for RTX 5090 Blackwell cards.

Training Requirements Summary

  • Dataset: Videos should typically be cut to 121 frames (exactly 4.84 seconds) to align with the model's architectural "8n+1" rule.
  • Hardware: While 16GB VRAM is possible with extreme offloading in tools like rs-nodes, 24GB is the practical minimum for quantized training. For best results and speed, 48GB to 80GB (H100 or RTX 6000) is preferred.
  • Precision: It is now officially recommended to train on the full BF16 model for LTX 2.3 rather than FP8 for superior quality.
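
The "8n+1" frame rule above is easy to check programmatically. A small sketch (the helper names are illustrative, not part of any trainer's API; the 25 fps used in the duration arithmetic is an assumption inferred from 121 frames being stated as exactly 4.84 seconds):

```python
def is_valid_frame_count(frames: int) -> bool:
    """LTX expects frame counts of the form 8n + 1."""
    return frames >= 1 and (frames - 1) % 8 == 0

def nearest_valid_frame_count(frames: int) -> int:
    """Round down to the nearest valid 8n + 1 count."""
    return ((frames - 1) // 8) * 8 + 1

print(is_valid_frame_count(121))       # True: 121 = 8 * 15 + 1
print(nearest_valid_frame_count(128))  # 121
print(121 / 25)                        # 4.84 seconds at 25 fps
```

Cutting dataset clips with a helper like `nearest_valid_frame_count` avoids silent truncation or padding by the trainer.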

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

▓ Workflow & Technical Notes

❖ Lightricks

LTX-2.3:

LTX-2:

❖ vrgamedevgirl84

vrgamedevgirl84 LTX 2.3 Music Video Creator:

  • Music Video Creator Workflow
    • Prompt Creator Workflow - Audio upload, beat detection, scene timing, lyrics analysis, style selection, prompt generation
    • Text-to-Video Workflow - LoRA integration, advanced prompt controls, Remake Mode, video stitching
    • Image-to-Video Workflow - Uses Z-Image Turbo and LTX 2.3
    • Requirements: ComfyUI, LTX 2.3 models, Z-Image Turbo model, FFmpeg, vrgamedevgirl custom nodes

❖ ComfyUI

❖ RuneXX

RuneXX LTX-2.3 Workflows:

Movie-Maker:

Talking-Avatar-TTS:

Video-2-Video:

Others:

Custom-Audio:

First-Last-Frame:

Long-Video-Experimental:

3-Pass-Experimental:

Control-reference:

Music-Video-Creator:

Helper-Workflows:

Other-examples:

RuneXX LTX-2 Workflows (old, pre-Feb 2026)

About

All available LTX-2 models, encoders, workflows, LoRAs for ComfyUI