From the Mar 31 NemoClaw Livestream — Multi‑agent, scaling, and long‑running claws
Answered by zNeill, Apr 2, 2026
NemoClaw doesn’t directly perform distributed fine‑tuning; it orchestrates agents that can use toolchains that do. You can absolutely build workflows where a claw coordinates multi‑node LoRA or other training jobs, but the actual training is handled by your ML stack (e.g., NeMo, PyTorch) outside NemoClaw itself.
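To make the division of labor concrete, here is a minimal, hypothetical sketch of what such a tool might look like: the orchestration layer only assembles and launches an external `torchrun` job, while PyTorch does the actual distributed training. The script name (`train_lora.py`), node counts, and flag values are illustrative assumptions, not NemoClaw APIs.

```python
# Hypothetical sketch: an agent tool that shells out to torchrun for a
# multi-node LoRA job. The orchestrator builds and launches the command;
# the training itself happens entirely inside the external ML stack.
import shlex
import subprocess
from typing import Optional


def build_torchrun_cmd(script: str, nnodes: int, nproc_per_node: int,
                       rdzv_endpoint: str,
                       extra_args: Optional[list[str]] = None) -> list[str]:
    """Assemble a torchrun command line for a distributed training job."""
    cmd = [
        "torchrun",
        f"--nnodes={nnodes}",
        f"--nproc_per_node={nproc_per_node}",
        "--rdzv_backend=c10d",
        f"--rdzv_endpoint={rdzv_endpoint}",
        script,
    ]
    return cmd + (extra_args or [])


def launch_training(cmd: list[str]) -> int:
    """Run the job and report the exit code back to the orchestrator."""
    return subprocess.run(cmd, check=False).returncode


# Example: a 2-node, 8-GPU-per-node LoRA run (all values illustrative).
cmd = build_torchrun_cmd("train_lora.py", nnodes=2, nproc_per_node=8,
                         rdzv_endpoint="head-node:29500",
                         extra_args=["--lora-rank", "16"])
print(shlex.join(cmd))
```

The agent would call something like `launch_training(cmd)` and surface the exit code, leaving checkpointing, data loading, and the LoRA logic itself to the training script.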