
Conversation

@EnragedAntelope

I know you said you're not taking PRs currently, but I spent some time with Claude, tested this pretty thoroughly, and LOVE your node pack and really wanted this functionality! You're welcome to scrutinize and merge as-is, or just pick out what you do or don't like. I think it's a great addition, so I hope you can integrate it!
Thank you for all of your work, and Merry Christmas.

LoRA Cycler Node - Enhancement Request (Issue #316)

Adds a node to cycle/randomize through LoRAs on each workflow execution.

Files added:

py/nodes/lora_cycler.py - Node implementation
py/utils/lora_cycler_utils.py - Shared utilities
web/comfyui/lora_cycler.js - Frontend (preview, display widget)
docs/LORA_CYCLER.md - User docs

Files modified:

py/routes/lora_routes.py - Added 1 API endpoint for real-time preview
__init__.py - Node registration

Usage:

Connect LORA_STACK → Lora Loader
Connect trigger_words → TriggerWord Toggle (for real-time preview)

Features:

Selection modes: fixed, increment, decrement, random
Filters: folder, base model, tags, name
"First trigger word only" option for multi-character LoRAs
Live preview display in node

Limitations:

Counter resets on ComfyUI restart (in-memory)
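The four selection modes listed above can be sketched in a few lines. This is a hypothetical helper (`select_index` is not a function from the PR) just to show how each mode maps a run counter to a list index:

```python
import random

def select_index(mode: str, counter: int, total: int,
                 fixed_index: int = 0, rng=random) -> int:
    """Pick which LoRA (by list index) to load on this execution."""
    if total == 0:
        raise ValueError("no LoRAs match the current filters")
    if mode == "fixed":
        return fixed_index % total
    if mode == "increment":
        return counter % total       # wraps around after the last LoRA
    if mode == "decrement":
        return (-counter) % total    # walks the list backwards
    if mode == "random":
        return rng.randrange(total)
    raise ValueError(f"unknown mode: {mode}")
```

With 10 matching LoRAs, `increment` visits 0, 1, 2, … and wraps back to 0 on the 11th run; `decrement` visits 0, 9, 8, … in the same fashion.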

claude and others added 7 commits December 25, 2025 22:59
Implements enhancement request willmiao#316 - adds ability to automatically cycle
through a list of LoRAs on successive workflow runs.

New nodes:
- LoraCycler: Cycles through LoRAs with modes: fixed, increment, decrement, random
- LoraRandomizer: Convenience alias with random mode as default

Features:
- Filter LoRAs by folder, base model, tags, or name
- Accept lora_stack input for manual LoRA list control
- Output correct trigger words for selected LoRA only
- Uses IS_CHANGED method to trigger re-execution for cycling modes
- Compatible with existing LORA_STACK type for downstream loaders

Includes comprehensive test coverage (15 tests).
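The IS_CHANGED mechanism mentioned above works because ComfyUI skips re-executing a node whose IS_CHANGED value is unchanged since the last run; returning NaN (which never compares equal to itself) defeats the cache. A simplified sketch, not the PR's actual code (the class name and method bodies here are illustrative):

```python
import itertools

class LoraCyclerSketch:
    """Simplified sketch of a node that forces re-execution for cycling modes."""

    _counter = itertools.count()  # in-memory, so it resets when ComfyUI restarts

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"mode": (["fixed", "increment", "decrement", "random"],)}}

    RETURN_TYPES = ("LORA_STACK", "STRING")
    RETURN_NAMES = ("lora_stack", "trigger_words")
    FUNCTION = "cycle"
    CATEGORY = "loaders"

    @classmethod
    def IS_CHANGED(cls, mode):
        if mode == "fixed":
            return mode          # stable value: cached output is reused
        return float("nan")      # NaN != NaN, so the node always re-runs

    def cycle(self, mode):
        i = next(self._counter)  # advances on every real execution
        return ([], f"selected lora #{i}")
```

This also explains the in-memory limitation noted in the PR description: the counter lives in the Python process, so a restart starts the cycle over.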

Changes:
- Add web/comfyui/lora_cycler.js frontend extension that listens for
  execution events and updates connected TriggerWord Toggle nodes
- Remove redundant LoraRandomizer node (use LoraCycler with random mode)
- Improve tooltips with detailed examples for all filter options
- Add OUTPUT_TOOLTIPS for clearer connection guidance

The frontend extension:
- Listens for ComfyUI 'executed' events
- Extracts selected LoRA name from node output
- Calls updateConnectedTriggerWords() to update TriggerWord Toggle nodes
- Handles node mode changes (bypass/enable) and connection changes

This fixes the issue where trigger words weren't populating in the
TriggerWord Toggle node when using the Cycler.
…or LoraCycler

- Add `/api/lm/loras/cycler_preview` API endpoint for real-time trigger word
  updates before workflow execution
- Update frontend to call preview API when widgets change, enabling trigger
  words to populate immediately like the main LoRA Manager node
- Add base model dropdown that fetches available base models from existing API
- Add `first_trigger_word_only` option for LoRAs with multiple trigger words
  (e.g., multi-character LoRAs like Disney princesses)
- Debounce widget changes to avoid excessive API calls
- Update tests to cover new functionality

Addresses user feedback about trigger words not populating before execution.

Maintainability improvements:
- Extract shared filtering and selection logic into py/utils/lora_cycler_utils.py
- API endpoint now uses shared utilities for consistent behavior with node
- Shared execution counters between node and API

New features:
- Add "next up" display widget showing currently selected LoRA in node
- Display format: "[index/total] LoRA Name"

Documentation:
- Add comprehensive docs/LORA_CYCLER.md with:
  - Connection patterns (Cycler -> Lora Loader + Cycler -> TriggerWord Toggle)
  - Explanation of why trigger_words should connect directly to TriggerWord Toggle
  - Filter options and examples
  - Selection modes documentation

This refactor ensures the codebase is easier to maintain by:
1. Single source of truth for filtering logic
2. Shared counter state between preview and execution
3. Clear documentation for users and maintainers

- Remove duplicate lora_stack input from frontend (already defined in backend)
- Remove unused imports from lora_cycler.py (select_lora_index, format_trigger_words)
- Remove unused Tuple import from lora_cycler_utils.py

The lora_stack input is defined in INPUT_TYPES in the Python node,
so adding it again in the frontend JS created a duplicate.
@willmiao
Owner

Thank you so much for this PR! I really appreciate the time you put into it.

I’ve taken it for a spin, and apart from some minor code issues, it works very well. It's a great feature.

To be transparent, I have a similar feature currently in development. However, your approach has given me several new ideas that I plan to reference and incorporate into the official release.

I won't be merging this specific PR, mostly due to time constraints. When features involve complex logic, aligning the design via text can be difficult. Since English isn't my first language, the detailed communication required to refine the PR often takes me longer than writing the code myself.

I hope you understand that this isn't a reflection on the quality of your work, but just a time-management choice on my end. Thanks again for the great work—stay tuned for the update!

@EnragedAntelope
Author


Wonderful! Thank you. I completely understand and take no offense. I have been using my fork since I made it and really get a lot of usage from the enhancement.

I will leave the PR open for your reference until you close it in case you need anything from it.

I look forward to the update! Thank you again and happy new year!

@willmiao
Owner

I've added the Lora Randomizer feature (see discussion #767 for details). To some extent, this can also function as a cycler, so feel free to give it a try. I'll evaluate whether to implement dedicated cycle support in the future.

@willmiao
Owner

Hey! The Lora Cycler node has just been added to the nightly build. Feel free to update and give it a try — feedback is very welcome.

@EnragedAntelope
Author


Hi, thank you for these! I'm enjoying them. You did a good job overall with how you've implemented these.

One thing I think is not intended behavior from the cycler:
I have about 300 LoRAs in one folder, and I want to cycle through all of them.
ComfyUI only allows queuing 100 jobs at a time.
This is what happens:
• I select the folder and set the cycler index to 1.
• I click “Queue 100”.
• ComfyUI starts adding jobs to the queue. It usually gets to around 30 jobs.
• When the first job finishes, the cycler automatically moves to index 2.
• But ComfyUI is still trying to finish queuing the original 100 jobs.
• Because of this, I end up with LoRAs 1–30 queued, then 2–31, then 3–32, and so on.
(So some LoRAs repeat, and some are skipped.)
Is that expected behavior, or not? What I was looking for is a simple way to cycle through every LoRA that matches my filter exactly once, without any repeats.
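The interleaving described above can be reproduced with a toy model. Assume each wave of enqueued jobs snapshots indices starting from the cycler's current counter, and that one running job finishes (bumping the counter) between waves; the `simulate` helper and its numbers are purely illustrative, not ComfyUI's actual queueing logic:

```python
def simulate(total_jobs=100, wave=30, start=1):
    """Toy model of the enqueue/execute race: jobs are enqueued in waves,
    and the counter advances by 1 between waves as a queued job completes."""
    counter = start
    queued = []
    while len(queued) < total_jobs:
        n = min(wave, total_jobs - len(queued))
        queued.extend(range(counter, counter + n))  # snapshot current indices
        counter += 1  # a pending job finished while the rest were enqueueing
    return queued
```

In this model, queueing 100 jobs in waves of 30 yields indices 1-30, 2-31, 3-32, then 4-13: several indices repeat, so far fewer than 100 distinct LoRAs are actually covered, matching the behavior reported above.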

@EnragedAntelope
Author

PS: my only other idea is to add "lora batch" inputs to the randomizer and lora cycler nodes. That way I could chain things however I want - like maybe have the randomizer loading style loras, which feeds into the lora cycler cycling through character loras, which feeds into the lora manager that outputs triggers for anything loaded.

Thank you again for all of your work! Definitely one of my favorite node packs, I use it constantly.

@willmiao
Owner

willmiao commented Jan 25, 2026

Thanks for the detailed feedback! I’ll look into the cycler behavior you described.

As a quick note: although ComfyUI defaults to 100 jobs, you can raise the limit via
ComfyUI Settings → Comfy → Queue Button → Batch count limit.

Regarding the “LoRA batch” idea, I’m not fully clear on the use case yet. My current view of the LoRA Cycler is mainly for testing/comparison (iterating with fixed weights). Do you mean something like randomizing style LoRAs each time, then applying a character LoRA? If you could elaborate, that would help.

Also, in case it helps: this kind of setup is already possible by chaining loaders—e.g. one LoRA Loader fed by a LoRA Randomizer, followed by another LoRA Loader fed by a LoRA Cycler. Please refer to the two newly added template workflows.

Thanks again for the kind words and suggestions!

