LoRA PEFT #50
Conversation
commit dd8de48
Author: Neil Tan <[email protected]>
Date: Mon Mar 24 11:47:51 2025 +0800

    Modified README.md
    moved lora peft code to gr00t/utils/peft.py
    added peft dependency
    added lora_rank argument to scripts/gr00t_petf_finetune.py

commit 569ff64
Author: Neil Tan <[email protected]>
Date: Mon Mar 24 05:08:02 2025 +0800

    commandline arg r=32

commit 4c8067b
Author: Neil Tan <[email protected]>
Date: Sun Mar 23 13:39:04 2025 +0800

    only wrapping forward()

commit 4fa6051
Author: Neil Tan <[email protected]>
Date: Sun Mar 23 13:08:34 2025 +0800

    seems to run
Thanks for the PR! This is very helpful for efficient finetuning on consumer-grade GPUs.
First, could you run a styling pass with black . ?
Also, I noticed this error when running inference after merging #39, which sets from_pretrained(..., torch_dtype='auto'); a possible workaround is sketched after the traceback below.
Other than that it works well!
Model not found or avail in the huggingface hub. Loading from local path: /tmp/gr00t/checkpoint-20/
Traceback (most recent call last):
File "/home/youliang/gear/clean_repo/Isaac-GR00T/scripts/inference_service.py", line 73, in <module>
policy = Gr00tPolicy(
File "/home/youliang/gear/clean_repo/Isaac-GR00T/gr00t/model/policy.py", line 106, in __init__
self._load_model(model_path)
File "/home/youliang/gear/clean_repo/Isaac-GR00T/gr00t/model/policy.py", line 232, in _load_model
model = GR00T_N1.from_pretrained(model_path, torch_dtype="auto")
File "/home/youliang/gear/clean_repo/Isaac-GR00T/gr00t/model/gr00t_n1.py", line 224, in from_pretrained
pretrained_model = super().from_pretrained(
File "/home/youliang/miniconda3/envs/groot-release/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3848, in from_pretrained
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
File "/home/youliang/miniconda3/envs/groot-release/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1617, in _set_default_torch_dtype
if not dtype.is_floating_point:
AttributeError: 'str' object has no attribute 'is_floating_point'
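For context on the traceback: the literal string "auto" reaches transformers' _set_default_torch_dtype, which expects a torch.dtype object. Below is a minimal sketch of one possible workaround for local checkpoints, assuming the custom from_pretrained override can resolve the string before delegating to the parent class. The names resolve_torch_dtype and checkpoint_dir are illustrative, not code from this PR.

```python
# A minimal sketch, not the fix adopted in this repo: resolve the string
# "auto" to a concrete torch.dtype by reading the checkpoint's config.json
# before the value reaches transformers' _set_default_torch_dtype.
# Only handles local checkpoint directories (as in the traceback above).
import json
import os

import torch


def resolve_torch_dtype(checkpoint_dir, torch_dtype):
    """Return a concrete torch.dtype, mapping 'auto' to whatever dtype the
    checkpoint recorded in config.json (falling back to float32)."""
    if torch_dtype != "auto":
        return torch_dtype
    config_path = os.path.join(checkpoint_dir, "config.json")
    recorded = None
    if os.path.isfile(config_path):
        with open(config_path) as f:
            recorded = json.load(f).get("torch_dtype")  # e.g. "bfloat16"
    dtype = getattr(torch, recorded, None) if isinstance(recorded, str) else recorded
    return dtype if isinstance(dtype, torch.dtype) else torch.float32


# Hypothetical use inside an overridden from_pretrained:
#   torch_dtype = resolve_torch_dtype(model_path, kwargs.pop("torch_dtype", None))
#   model = super().from_pretrained(model_path, torch_dtype=torch_dtype, **kwargs)
```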
@youliangtan
- removed verbose LoRA module printouts
- updated LoRA fine-tune hyperparameters in the README.md
All good now. Thanks for the PR!
This PR enables LoRA PEFT via an additional command-line argument in the fine-tuning example.
Changes proposed in this pull request:
- Updated README.md
- moved lora peft code to gr00t/utils/peft.py
- added peft as an installation dependency
- added lora_rank argument to scripts/gr00t_petf_finetune.py (a minimal sketch of the LoRA wrapping is included below)
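For reference, the general shape of LoRA wrapping with the Hugging Face peft library is sketched below. This is an illustration of the technique, not the actual contents of gr00t/utils/peft.py: the function name, the target_modules list, and the hyperparameter defaults are assumptions.

```python
# A minimal sketch, assuming the standard Hugging Face `peft` API; the
# target_modules listed here are generic attention projections and may
# not match the modules actually targeted in gr00t/utils/peft.py.
from peft import LoraConfig, get_peft_model


def wrap_model_with_lora(model, lora_rank: int = 32):
    """Attach LoRA adapters of rank `lora_rank` to the given model,
    leaving the original weights frozen."""
    lora_config = LoraConfig(
        r=lora_rank,               # the rank exposed as --lora_rank on the CLI
        lora_alpha=2 * lora_rank,  # common heuristic: alpha = 2 * r
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    return get_peft_model(model, lora_config)
```

With a wrapper like this, the lora_rank command-line argument simply feeds the r value of the LoraConfig, so only the low-rank adapter weights are trained during finetuning.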