Insights: huggingface/diffusers
Overview
20 Pull requests merged by 11 people
-
Support Wan AccVideo lora
#11704 merged
Jun 13, 2025 -
[docs] mention fp8 benefits on supported hardware.
#11699 merged
Jun 13, 2025 -
swap out token for style bot.
#11701 merged
Jun 13, 2025 -
[docs] add compilation bits to the bitsandbytes docs.
#11693 merged
Jun 12, 2025 -
Apply Occam's Razor in position embedding calculation
#11562 merged
Jun 11, 2025 -
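Position-embedding code of this kind typically reduces to the standard sinusoidal formula; the sketch below shows that generic computation in plain Python (it is an illustration of the idea, not the specific diffusers code this PR simplified).

```python
import math

def position_embedding(pos: int, dim: int) -> list[float]:
    """Standard sinusoidal position embedding for one position.

    Pairs of (sin, cos) at geometrically decreasing frequencies,
    as in the original Transformer formulation.
    """
    emb = []
    for i in range(0, dim, 2):
        freq = 1.0 / (10000 ** (i / dim))
        emb.append(math.sin(pos * freq))
        emb.append(math.cos(pos * freq))
    return emb[:dim]

# At position 0 every sin term is 0 and every cos term is 1.
e = position_embedding(0, 8)
assert e[0] == 0.0 and e[1] == 1.0
```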
Avoid DtoH sync from access of nonzero() item in scheduler
#11696 merged
Jun 11, 2025 -
Set _torch_version to N/A if torch is disabled.
#11645 merged
Jun 11, 2025 -
Improve Wan docstrings
#11689 merged
Jun 11, 2025 -
[tests] model-level `device_map` clarifications
#11681 merged
Jun 11, 2025 -
[tests] tests for compilation + quantization (bnb)
#11672 merged
Jun 11, 2025 -
enable torchao test cases on XPU and switch to device agnostic APIs for test cases
#11654 merged
Jun 11, 2025 -
[Wan] Standardize `vae.encode()` sampling mode in `WanVideoToVideoPipeline`
#11639 merged
Jun 11, 2025 -
[LoRA] support Flux Control LoRA with bnb 8bit.
#11655 merged
Jun 11, 2025 -
Allow remote code repo names to contain "."
#11652 merged
Jun 10, 2025 -
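Repo-id validation of this kind usually comes down to a character class in a regular expression. The sketch below is a hypothetical validator (the names `REPO_ID_RE` and `is_valid_repo_id` are made up for illustration, not diffusers' actual code) showing how admitting "." changes what passes.

```python
import re

# Hypothetical repo-id pattern: owner/name segments made of word
# characters, hyphens, and "." (the newly allowed character).
REPO_ID_RE = re.compile(r"[\w.-]+/[\w.-]+")

def is_valid_repo_id(repo_id: str) -> bool:
    """Return True if repo_id looks like 'owner/name' with allowed chars."""
    return REPO_ID_RE.fullmatch(repo_id) is not None

# "." in either segment is now accepted:
assert is_valid_repo_id("org/model-v2.5")
assert is_valid_repo_id("some.org/model")
# A bare name without an owner segment still fails:
assert not is_valid_repo_id("model-v2.5")
```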
Update pipeline_flux_inpaint.py to fix padding_mask_crop returning only the inpainted area
#11658 merged
Jun 10, 2025 -
Add community class StableDiffusionXL_T5Pipeline
#11626 merged
Jun 9, 2025 -
Introduce DeprecatedPipelineMixin to simplify pipeline deprecation process
#11596 merged
Jun 9, 2025 -
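A deprecation mixin like the one introduced here generally works by emitting a warning when a deprecated pipeline is constructed, so every legacy class only needs to inherit it. The following is an illustrative sketch under that assumption (class and attribute names are hypothetical, not the actual diffusers implementation):

```python
import warnings

class DeprecatedPipelineMixin:
    """Sketch: pipelines inheriting this warn on instantiation."""

    # Subclasses can state which release removes them (hypothetical attr).
    _removal_version = "a future release"

    def __init__(self, *args, **kwargs):
        warnings.warn(
            f"{type(self).__name__} is deprecated and will be removed in "
            f"{self._removal_version}.",
            FutureWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)

class OldPipeline(DeprecatedPipelineMixin):
    _removal_version = "v0.40.0"

# Constructing the deprecated pipeline raises a FutureWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    OldPipeline()
assert any(issubclass(w.category, FutureWarning) for w in caught)
```

Centralizing the warning in one mixin keeps each deprecated pipeline a one-line change instead of repeating warning boilerplate per class.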
[tests] Fix how compiler mixin classes are used
#11680 merged
Jun 9, 2025 -
fixed axes_dims_rope init (huggingface#11641)
#11678 merged
Jun 8, 2025 -
Wan VACE
#11582 merged
Jun 6, 2025
13 Pull requests opened by 10 people
-
Support Expert loss for HiDream
#11673 opened
Jun 6, 2025 -
Fix EDM DPM Solver Test and Enhance Test Coverage
#11679 opened
Jun 8, 2025 -
[wip][poc] make group offloading work with disk/nvme transfers
#11682 opened
Jun 9, 2025 -
[GGUF] feat: support loading diffusers format gguf checkpoints.
#11684 opened
Jun 10, 2025 -
[WIP] Refactor Attention Modules
#11685 opened
Jun 10, 2025 -
Bump requests from 2.32.3 to 2.32.4 in /examples/server
#11686 opened
Jun 10, 2025 -
Add Pruna optimization framework documentation
#11688 opened
Jun 10, 2025 -
fix "Expected all tensors to be on the same device, but found at least two devices" error
#11690 opened
Jun 11, 2025 -
Cosmos Predict2
#11695 opened
Jun 11, 2025 -
TorchAO compile + offloading tests
#11697 opened
Jun 11, 2025 -
Chroma Pipeline
#11698 opened
Jun 12, 2025 -
[docs] Quantization + torch.compile + offloading
#11703 opened
Jun 12, 2025 -
[rfc][compile] compile method for DiffusionPipeline
#11705 opened
Jun 13, 2025
12 Issues closed by 4 people
-
AccVid LoRA key error when loading with diffusers
#11702 closed
Jun 13, 2025 -
Convert VAE from latent-diffusion to diffusers
#11694 closed
Jun 11, 2025 -
<spam>
#11692 closed
Jun 11, 2025 -
<spam>
#11691 closed
Jun 11, 2025 -
Add support for ConsisID
#10100 closed
Jun 10, 2025 -
HunyuanVideo with IP2V
#10485 closed
Jun 10, 2025 -
Docs for HunyuanVideo LoRA?
#10796 closed
Jun 10, 2025 -
Need to handle v0.33.0 deprecations
#10895 closed
Jun 10, 2025 -
[BUG] [CleanCode] Tuple[int] = (16, 56, 56) in FluxTransformer2DModel
#11641 closed
Jun 8, 2025 -
Error in loading the pretrained lora weights
#11675 closed
Jun 7, 2025
4 Issues opened by 3 people
-
Add pruna integration for loading model through diffusers.from_pretrained / pipeline
#11700 opened
Jun 12, 2025 -
[DOCS] Add `pruna` as optimization framework
#11687 opened
Jun 10, 2025 -
HunyuanVideoImageToVideoPipeline memory leak
#11676 opened
Jun 7, 2025 -
[FR] Please support ref image and multiple control videos in Wan VACE
#11674 opened
Jun 6, 2025
31 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
-
Add SkyReels V2: Infinite-Length Film Generative Model
#11518 commented on
Jun 12, 2025 • 44 new comments -
[LoRA] parse metadata from LoRA and save metadata
#11324 commented on
Jun 13, 2025 • 12 new comments -
Attention Dispatcher
#11368 commented on
Jun 8, 2025 • 8 new comments -
[benchmarks] overhaul benchmarks
#11565 commented on
Jun 12, 2025 • 6 new comments -
Fix wrong param types, docs, and handles noise=None in scale_noise of FlowMatching schedulers
#11669 commented on
Jun 13, 2025 • 2 new comments -
Add FluxPAGPipeline with support for PAG
#11510 commented on
Jun 11, 2025 • 2 new comments -
enable cpu offloading of new pipelines on XPU & use device agnostic empty to make pipelines work on XPU
#11671 commented on
Jun 13, 2025 • 1 new comment -
⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886%
#11665 commented on
Jun 9, 2025 • 0 new comments -
[WIP] [LoRA] support omi hidream lora.
#11660 commented on
Jun 10, 2025 • 0 new comments -
Add Finegrained FP8
#11647 commented on
Jun 12, 2025 • 0 new comments -
Added PhotoDoodle Pipeline
#11621 commented on
Jun 12, 2025 • 0 new comments -
Chroma as a FLUX.1 variant
#11566 commented on
Jun 12, 2025 • 0 new comments -
[torch.compile] Make HiDream torch.compile ready
#11477 commented on
Jun 10, 2025 • 0 new comments -
[quant] add __repr__ for better printing of configs.
#11452 commented on
Jun 10, 2025 • 0 new comments -
Add basic implementation of AuraFlowImg2ImgPipeline
#11340 commented on
Jun 11, 2025 • 0 new comments -
Add VidTok AutoEncoders
#11261 commented on
Jun 11, 2025 • 0 new comments -
Request support for MAGI-1
#11519 commented on
Jun 12, 2025 • 0 new comments -
Can't load flux-fill-lora with FluxControl
#11651 commented on
Jun 12, 2025 • 0 new comments -
[Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change.
#11499 commented on
Jun 12, 2025 • 0 new comments -
prompt_embeds_scale in FluxPriorReduxPipeline seems to have no effect.
#11642 commented on
Jun 12, 2025 • 0 new comments -
how to load lora weight with fp8 transfomer model?
#11648 commented on
Jun 11, 2025 • 0 new comments -
torch.compile can't be used with groupoffloading on hunyuanvideo_frampack
#11584 commented on
Jun 11, 2025 • 0 new comments -
torch.compile errors on vae.encode
#10937 commented on
Jun 11, 2025 • 0 new comments -
OMI Format Compatibility
#11631 commented on
Jun 10, 2025 • 0 new comments -
Error in init from pretrained for LTXConditionPipeline
#11644 commented on
Jun 10, 2025 • 0 new comments -
[performance] investigating FluxPipeline for recompilations on resolution changes
#11360 commented on
Jun 10, 2025 • 0 new comments -
max_shard_size
#11650 commented on
Jun 10, 2025 • 0 new comments -
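`max_shard_size` in Hugging Face saving APIs is typically passed as a string such as "5GB" (or as a raw byte count). The helper below is a minimal sketch of how such a value could be parsed, assuming decimal (1000-based) units; it is an illustration, not the actual library implementation.

```python
# Hypothetical size-string parser for values like max_shard_size="5GB".
_UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def parse_size(size) -> int:
    """Convert '5GB'-style strings (or ints) to a byte count."""
    if isinstance(size, int):          # already a byte count
        return size
    size = size.strip().upper()
    for unit, factor in _UNITS.items():
        if size.endswith(unit):
            return int(float(size[: -len(unit)]) * factor)
    return int(size)                   # plain number of bytes

assert parse_size("5GB") == 5 * 10**9
assert parse_size("500MB") == 500 * 10**6
assert parse_size(1024) == 1024
```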
The density_for_timestep_sampling and loss_weighting for SD3 Training!!!
#9056 commented on
Jun 10, 2025 • 0 new comments -
Add SUPIR Upscaler
#7219 commented on
Jun 10, 2025 • 0 new comments -
Sage Attention for diffuser library
#11168 commented on
Jun 9, 2025 • 0 new comments -
LoRA load issue
#11659 commented on
Jun 6, 2025 • 0 new comments