
feat: LoKr adapter support, LoRA status fix, and training docs update#649

Merged
ChuxiJ merged 1 commit into main from feat/lokr-ui-fixes-and-training-docs on Feb 20, 2026

Conversation

ChuxiJ (Contributor) commented Feb 20, 2026

  • Fix LoRA Status textbox visibility in Gradio UI (CSS override for tooltip conflict)
  • Add LoKr adapter toggle support in set_use_lora/set_lora_scale (LyCORIS set_multiplier)
  • Add unit tests for LoKr controls (controls_test.py)
  • Add LoKr training help button and i18n entries (en/zh/ja)
  • Add LoKr recommendation to LoRA Training Tutorial docs (en/zh/ja/ko)

Summary by CodeRabbit

Release Notes

  • New Features

    • Added LoKr (Low-rank Kronecker) adapter support with runtime controls for faster training workflows.
    • Enhanced LoRA UI with improved status display and help documentation.
  • Documentation

    • Expanded training guides with comprehensive LoKr tutorials and setup instructions across all languages.
    • Added LoKr vs LoRA performance comparisons and recommendations.
  • Tests

    • Added unit tests for LoKr and LoRA adapter controls.


coderabbitai bot commented Feb 20, 2026

📝 Walkthrough

This PR introduces LoKr (Low-rank Kronecker) adapter support alongside existing LoRA adapters. Core changes include a new _toggle_lokr helper for LyCORIS adapter toggling, refactored set_use_lora and set_lora_scale methods with dual adapter-type branches, UI styling updates, and comprehensive localization and documentation expansions.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **LoKr Runtime Controls**<br/>`acestep/core/generation/handler/lora/controls.py`, `acestep/core/generation/handler/lora/controls_test.py` | Introduces the `_toggle_lokr` helper and dual adapter-path logic in `set_use_lora` and `set_lora_scale` for LoKr vs. PEFT LoRA. Comprehensive test coverage for enable/disable, scale application, state persistence, and error handling across both paths. |
| **UI Styling & Interface**<br/>`acestep/ui/gradio/interfaces/__init__.py`, `acestep/ui/gradio/interfaces/generation.py`, `acestep/ui/gradio/interfaces/training.py` | CSS overrides for tooltip behavior via a `no-tooltip` class; LoRA status textbox reconfigured to single-line with tooltip suppression; help button added to the Train LoKr tab. |
| **Localization & Documentation**<br/>`acestep/ui/gradio/i18n/en.json`, `acestep/ui/gradio/i18n/ja.json`, `acestep/ui/gradio/i18n/zh.json`, `docs/en/LoRA_Training_Tutorial.md`, `docs/ja/LoRA_Training_Tutorial.md`, `docs/ko/LoRA_Training_Tutorial.md`, `docs/zh/LoRA_Training_Tutorial.md` | I18n strings and tutorial documentation expanded with LoKr-specific sections, performance claims, setup guidance, and LoKr vs. LoRA comparison tables across all supported languages. |
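Based on the summary above, the `_toggle_lokr` helper presumably reduces to driving the LyCORIS network's `set_multiplier`. A minimal, self-contained sketch of that idea follows; the `_lycoris_net` attribute name is taken from this walkthrough, and the fake classes are illustrative stand-ins, not the repository's actual implementation:

```python
import logging

logger = logging.getLogger(__name__)


def _toggle_lokr(decoder, enable: bool, scale: float = 1.0) -> bool:
    """Enable or disable a LyCORIS LoKr net by setting its multiplier.

    A multiplier of 0.0 makes the adapter a no-op; any positive value
    re-applies it at that strength.
    """
    net = getattr(decoder, "_lycoris_net", None)
    if net is None:
        logger.warning("Decoder has no LyCORIS net; cannot toggle LoKr")
        return False
    net.set_multiplier(scale if enable else 0.0)
    return True


# Minimal stand-ins so the sketch runs without the real libraries.
class FakeNet:
    def __init__(self):
        self.multiplier = 1.0

    def set_multiplier(self, value):
        self.multiplier = value


class FakeDecoder:
    pass


decoder = FakeDecoder()
decoder._lycoris_net = FakeNet()
ok_enable = _toggle_lokr(decoder, enable=True, scale=0.8)
scale_after_enable = decoder._lycoris_net.multiplier
ok_disable = _toggle_lokr(decoder, enable=False)
missing = _toggle_lokr(FakeDecoder(), enable=True)  # no net attached
```

The multiplier-based toggle is why no weights need to be unloaded: disabling is just zeroing the adapter's contribution.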

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as UI/Handler
    participant Ctrl as set_use_lora/<br/>set_lora_scale
    participant Decoder as Decoder
    participant LyCoRIS as LyCoRIS Net
    participant PEFT as PEFT Adapter

    UI->>Ctrl: adapter_type == "lokr"?
    alt LoKr Path
        Ctrl->>Decoder: Check _lycoris_net
        alt _lycoris_net present
            Ctrl->>LyCoRIS: _toggle_lokr(enable, scale)
            LyCoRIS->>LyCoRIS: set_multiplier(scale)
            LyCoRIS-->>Ctrl: success (True)
        else _lycoris_net missing
            Ctrl-->>Ctrl: Log warning, return False
        end
    else PEFT LoRA Path
        Ctrl->>Decoder: enable/disable adapter
        Ctrl->>PEFT: set_adapter (if available)
        PEFT-->>Ctrl: Adapter state updated
    end
    Ctrl-->>UI: Status message<br/>(adapter_label + state)
```
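The branch logic in the diagram can be sketched roughly as follows. The handler and decoder shapes, the `adapter_type` field, and the status strings are assumptions inferred from the walkthrough, not the actual contents of `controls.py`:

```python
import logging

logger = logging.getLogger(__name__)


def set_use_lora(handler, enable: bool) -> str:
    """Dispatch between the LoKr (LyCORIS) and PEFT LoRA adapter paths."""
    decoder = handler.model.decoder
    if getattr(handler, "adapter_type", "lora") == "lokr":
        net = getattr(decoder, "_lycoris_net", None)
        if net is None:
            logger.warning("LoKr selected but no LyCORIS net is loaded")
            return "⚠️ LoKr: no LyCORIS net"
        net.set_multiplier(handler.lora_scale if enable else 0.0)
        return f"LoKr {'enabled' if enable else 'disabled'}"
    # PEFT LoRA path: re-select the active adapter if the decoder
    # supports it; failure here is non-fatal, so log at debug level.
    if enable and hasattr(decoder, "set_adapter"):
        try:
            decoder.set_adapter(handler.active_adapter)
        except Exception as e:
            logger.debug("set_adapter failed (non-fatal): %s", e)
    return f"LoRA {'enabled' if enable else 'disabled'}"


# Stand-in objects so the dispatch can be exercised without the model.
class FakeNet:
    def __init__(self):
        self.multiplier = 1.0

    def set_multiplier(self, value):
        self.multiplier = value


class FakeDecoder:
    pass


class FakeModel:
    def __init__(self):
        self.decoder = FakeDecoder()


class FakeHandler:
    def __init__(self, adapter_type):
        self.adapter_type = adapter_type
        self.model = FakeModel()
        self.lora_scale = 0.7
        self.active_adapter = "default"


lokr = FakeHandler("lokr")
lokr.model.decoder._lycoris_net = FakeNet()
status_on = set_use_lora(lokr, True)
status_missing = set_use_lora(FakeHandler("lokr"), False)
lora_status = set_use_lora(FakeHandler("lora"), True)
```

The key design point visible in the diagram is that the LoKr path never touches PEFT's adapter registry: it only scales the LyCORIS net's multiplier.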

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly Related PRs

  • PR #571: Modifies the same LoRA control functions (set_use_lora, set_lora_scale) and per-adapter scaling state management (_active_loras), directly extending multi-adapter infrastructure.
  • PR #527: Overhauls Gradio tooltip styling and CSS behavior, with this PR's no-tooltip class override directly related to the tooltip system refactor.

Poem

🐰 Lo, LoKr hops onto the scene with grace,
Faster training speeds in every place!
Controls toggle swift through adapter's dance,
From PEFT to LyCoRIS, enhanced in stance.
Tests weave the tapestry, docs show the way—
A speedy training future, hooray! 🎉

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 75.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately captures the three main changes: LoKr adapter support in the runtime controls, the LoRA status UI fix, and training documentation updates with LoKr guidance. |


@ChuxiJ ChuxiJ merged commit ce23166 into main Feb 20, 2026
2 of 3 checks passed

coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
acestep/core/generation/handler/lora/controls.py (1)

63-66: Log swallowed exceptions for debuggability.

The bare try-except-pass silently swallows set_adapter failures, which can make debugging difficult. Consider logging at debug level to aid troubleshooting while keeping the intentional fallthrough behavior.

💡 Proposed fix

```diff
                     if active and hasattr(decoder, "set_adapter"):
                         try:
                             decoder.set_adapter(active)
-                        except Exception:
-                            pass
+                        except Exception as e:
+                            logger.debug(f"set_adapter({active}) failed (non-fatal): {e}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/lora/controls.py` around lines 63 - 66, The
try/except around decoder.set_adapter(active) is swallowing errors; change it to
catch Exception as e and log the exception at debug level (e.g.,
logger.debug("Failed to set adapter %s on decoder %s: %s", active, decoder, e,
exc_info=True)) before continuing to preserve behavior; ensure you reference the
existing module/class logger or add one if missing so failures in
set_adapter(active) are recorded for debugging while still allowing the code to
fall through.
acestep/core/generation/handler/lora/controls_test.py (1)

166-194: Consider adding a test for missing LyCORIS net during scale application.

The current tests cover the happy path and disabled state. Adding a test for when _lycoris_net is missing during set_lora_scale would improve coverage of the warning path at lines 128-129 in controls.py.

💡 Suggested additional test

```python
def test_scale_lokr_without_lycoris_net_returns_warning(self):
    """Setting scale on LoKr without LyCORIS net should return warning."""
    handler = _DummyHandler(adapter_type="lokr")
    # No _lycoris_net on decoder

    result = set_lora_scale(handler, 0.6)

    self.assertIn("⚠️", result)
    self.assertIn("no LyCORIS net", result)
```
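For readers without the repository's test harness, the suggested test can be made self-contained with minimal stand-ins. The `set_lora_scale` body, the warning string, and the `_DummyHandler` shape below are assumptions modeled on the review's description, not the actual `controls.py` code:

```python
import unittest


def set_lora_scale(handler, scale: float) -> str:
    """Stand-in for the controls.py function: the LoKr path requires a
    LyCORIS net and returns a warning string when it is missing."""
    handler.lora_scale = scale
    if handler.adapter_type == "lokr":
        net = getattr(handler.model.decoder, "_lycoris_net", None)
        if net is None:
            return "⚠️ LoKr: no LyCORIS net loaded"
        net.set_multiplier(scale)
        return f"LoKr scale set to {scale}"
    return f"LoRA scale set to {scale}"


class _Obj:
    """Bare attribute holder for building nested dummies."""


class _DummyHandler:
    def __init__(self, adapter_type="lora"):
        self.adapter_type = adapter_type
        self.model = _Obj()
        self.model.decoder = _Obj()  # deliberately no _lycoris_net


class ScaleLokrTest(unittest.TestCase):
    def test_scale_lokr_without_lycoris_net_returns_warning(self):
        handler = _DummyHandler(adapter_type="lokr")

        result = set_lora_scale(handler, 0.6)

        self.assertIn("⚠️", result)
        self.assertIn("no LyCORIS net", result)
        self.assertEqual(handler.lora_scale, 0.6)
```

Note the test also checks that the scale value is persisted even on the warning path, matching the optional assertion suggested in the review.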
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/lora/controls_test.py` around lines 166 -
194, Add a unit test to controls_test.py that verifies set_lora_scale(handler,
value) returns the warning message when the LoKr adapter is selected but
handler.model.decoder lacks the _lycoris_net attribute: create a
_DummyHandler(adapter_type="lokr") without setting model.decoder._lycoris_net,
call set_lora_scale(handler, 0.6), and assert the returned string contains the
warning emoji (e.g. "⚠️") and the "no LyCORIS net" text (and optionally assert
handler.lora_scale was updated to 0.6); this covers the warning path in
set_lora_scale when _lycoris_net is missing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@acestep/ui/gradio/i18n/zh.json`:
- Line 443: The text under the "training_lokr" key incorrectly references "LoRA"
in step 9; update step 9 to refer to LoKr (or explicitly state that the same
export/load controls are reused for LoKr) so users aren't misled—edit the string
value for "training_lokr" to replace "LoRA 路径 → 加载 LoRA → 启用使用 LoRA" with
wording like "LoKr 路径 → 加载 LoKr → 启用使用 LoKr" or a clear note that the LoRA
control is reused for LoKr.


"training_dataset": "## 数据集构建教程\n\n### 步骤 1:加载或扫描\n- **加载**:输入现有数据集 JSON 路径 → 点击加载\n- **扫描**:输入音频文件夹路径 → 点击扫描\n - 支持:wav、mp3、flac、ogg、opus\n\n### 步骤 2:配置\n- 设置**数据集名称**\n- 勾选**全部为纯音乐**(如果没有人声)\n- 设置**自定义激活标签**(LoRA 的唯一触发词)\n- 选择**标签位置**:前置、后置或替换\n\n### 步骤 3:自动标注\n- 点击**自动标注全部**生成描述、BPM、调性、拍号\n- 使用**跳过元数据**跳过 BPM/调性/拍号(更快)\n\n### 步骤 4:预览与编辑\n- 使用滑块浏览样本\n- 手动编辑描述、歌词、BPM、调性\n- 每个样本点击**保存更改**\n\n### 步骤 5:保存\n- 输入保存路径 → 点击**保存数据集**\n\n### 步骤 6:预处理\n- 设置张量输出目录 → 点击**预处理**\n- 将音频/文本编码为张量用于训练\n\n### 📖 文档\n- [LoRA 训练教程](https://github.com/ACE-Step/ACE-Step-1.5/blob/main/docs/zh/LoRA_Training_Tutorial.md) — 完整分步指南\n- [Side-Step 高级训练](https://github.com/ACE-Step/ACE-Step-1.5/blob/main/docs/sidestep/Getting%20Started.md) — 命令行训练,支持高级功能",
"training_train": "## LoRA 训练教程\n\n### 设置\n1. 输入**预处理张量目录** → 点击**加载数据集**\n2. 配置 LoRA:\n - **秩** (r):默认 64。越高 = 容量越大\n - **Alpha**:通常为秩的 2 倍(128)\n - **Dropout**:0.1 用于正则化\n\n### 训练\n3. 设置**学习率**(从 1e-4 开始)\n4. 设置**最大轮数**(默认 500)\n5. 点击**开始训练**\n6. 监控损失曲线 — 应该随时间下降\n7. 满意时点击**停止训练**\n\n### 导出\n8. 输入导出路径 → 点击**导出 LoRA**\n9. 在设置中加载:设置 LoRA 路径 → 加载 LoRA → 启用使用 LoRA\n\n### 🚀 推荐使用 LoKr 加速训练\nLoKr 大幅提升了训练效率,原来需要一小时的训练现在只需 5 分钟——**速度提升超过 10 倍**。这对于在消费级 GPU 上训练至关重要。切换到 **Train LoKr** 标签页即可开始。\n\n### 提示\n- 显存有限时使用小批量(1)\n- 梯度累积增加有效批量大小\n- 频繁保存检查点(每 200 轮)\n\n### 📖 文档\n- [LoRA 训练教程](https://github.com/ACE-Step/ACE-Step-1.5/blob/main/docs/zh/LoRA_Training_Tutorial.md) — 完整分步指南\n- [Side-Step 高级训练](https://github.com/ACE-Step/ACE-Step-1.5/blob/main/docs/sidestep/Getting%20Started.md) — 命令行训练,修正时间步采样、LoKR、显存优化",
"training_lokr": "## 🚀 LoKr 训练教程\n\nLoKr(低秩 Kronecker 积)大幅提升了训练效率,原来使用 LoRA 需要一小时的训练现在只需 5 分钟——**速度提升超过 10 倍**。这对于在消费级 GPU 上训练至关重要。\n\n### 设置\n1. 输入**预处理张量目录** → 点击**加载数据集**\n2. 配置 LoKr:\n - **Linear Dim**:默认 64(类似 LoRA 的秩)\n - **Linear Alpha**:默认 128(缩放因子)\n - **Weight Decompose (DoRA)**:默认启用,质量更好\n\n### 训练\n3. 设置**学习率**(LoKr 通常使用更高的学习率,从 1e-3 开始)\n4. 设置**最大轮数**(默认 500)\n5. 点击**开始 LoKr 训练**\n6. 监控损失曲线 — 应该随时间下降\n7. 满意时点击**停止训练**\n\n### 导出\n8. 输入导出路径 → 点击**导出 LoKr**\n9. 在设置中加载:设置 LoRA 路径 → 加载 LoRA → 启用使用 LoRA\n\n### LoKr vs LoRA 对比\n| | LoKr | LoRA |\n|---|---|---|\n| 速度 | ⚡ 快约 10 倍 | 较慢 |\n| 显存 | 更低 | 更高 |\n| 质量 | 相当 | 基准 |\n| 适合 | 消费级 GPU、快速迭代 | 追求最高保真度 |\n\n### 提示\n- LoKr 使用 Kronecker 分解实现极致效率\n- 启用 **DoRA**(Weight Decompose)可提升质量\n- 使用 **Tucker 分解** 可进一步压缩\n- 更高的学习率(1e-3)通常比 LoRA 的 1e-4 效果更好",

⚠️ Potential issue | 🟡 Minor

Step 9 of the LoKr tutorial still points at the LoRA controls, which may mislead users.

This block is the LoKr training tutorial, but step 9 still says LoRA path / load LoRA / enable LoRA. Change it to LoKr, or state explicitly that both adapter types reuse the same controls, to avoid confusion.

💬 Suggested text fix

```diff
- 9. 在设置中加载:设置 LoRA 路径 → 加载 LoRA → 启用使用 LoRA
+ 9. 在设置中加载:设置 LoKr 路径 → 加载 LoKr → 启用使用 LoKr
```
