
LoRA PEFT #50

Merged
merged 7 commits into NVIDIA:main on Mar 24, 2025
Conversation

neil-tan
Contributor

This PR enables LoRA PEFT via an additional command-line argument for the fine-tuning example.

Changes proposed in this pull request:
- Updated README.md
- Moved the LoRA PEFT code to gr00t/utils/peft.py (a hedged sketch of such a wrapper follows below)
- Added the peft installation dependency
- Added a lora_rank argument to scripts/gr00t_petf_finetune.py
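
For readers new to the approach, here is a minimal sketch of what such a wrapper typically looks like when built on the Hugging Face peft library. The function name wrap_model_with_lora and the target_modules list are illustrative assumptions, not the actual contents of gr00t/utils/peft.py:

# Hedged sketch of a LoRA wrapper, assuming the Hugging Face peft API.
# wrap_model_with_lora and target_modules are hypothetical placeholders.
from peft import LoraConfig, get_peft_model

def wrap_model_with_lora(model, rank=32, lora_alpha=64, lora_dropout=0.1):
    config = LoraConfig(
        r=rank,                     # low-rank dimension of the adapter matrices
        lora_alpha=lora_alpha,      # scaling factor applied to the adapter update
        lora_dropout=lora_dropout,  # dropout on the adapter inputs
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # confirms only a small fraction of weights train
    return model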

commit dd8de48
Author: Neil Tan <[email protected]>
Date:   Mon Mar 24 11:47:51 2025 +0800

    Modified README.md
    moved lora peft code to gr00t/utils/peft.py
    added peft dependency
    added lora_rank argument to scripts/gr00t_petf_finetune.py

commit 569ff64
Author: Neil Tan <[email protected]>
Date:   Mon Mar 24 05:08:02 2025 +0800

    commandline arg r=32

commit 4c8067b
Author: Neil Tan <[email protected]>
Date:   Sun Mar 23 13:39:04 2025 +0800

    only wrapping forward()

commit 4fa6051
Author: Neil Tan <[email protected]>
Date:   Sun Mar 23 13:08:34 2025 +0800

    seems to run

@youliangtan
Member

Thanks for the PR! This is very helpful for efficient fine-tuning on consumer-grade GPUs.

First, could you run styling with black .?

Also, I noticed this error when running inference after merging #39, which sets from_pretrained(..., torch_dtype="auto"):
Model not found or avail in the huggingface hub. Loading from local path: /tmp/gr00t/checkpoint-20/
Traceback (most recent call last):
  File "/home/youliang/gear/clean_repo/Isaac-GR00T/scripts/inference_service.py", line 73, in <module>
    policy = Gr00tPolicy(
  File "/home/youliang/gear/clean_repo/Isaac-GR00T/gr00t/model/policy.py", line 106, in __init__
    self._load_model(model_path)
  File "/home/youliang/gear/clean_repo/Isaac-GR00T/gr00t/model/policy.py", line 232, in _load_model
    model = GR00T_N1.from_pretrained(model_path, torch_dtype="auto")
  File "/home/youliang/gear/clean_repo/Isaac-GR00T/gr00t/model/gr00t_n1.py", line 224, in from_pretrained
    pretrained_model = super().from_pretrained(
  File "/home/youliang/miniconda3/envs/groot-release/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3848, in from_pretrained
    dtype_orig = cls._set_default_torch_dtype(torch_dtype)
  File "/home/youliang/miniconda3/envs/groot-release/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1617, in _set_default_torch_dtype
    if not dtype.is_floating_point:
AttributeError: 'str' object has no attribute 'is_floating_point'

Other than that, it works well!
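
For context on the traceback: transformers' _set_default_torch_dtype expects a torch.dtype object, but the string "auto" is being passed through unresolved. The actual fix lives in #42; the snippet below is only a hedged sketch of the kind of guard involved:

import torch

torch_dtype = "auto"
# "auto" means "infer the dtype from the checkpoint"; any other string
# names a concrete dtype and must be converted before use.
if isinstance(torch_dtype, str) and torch_dtype != "auto":
    torch_dtype = getattr(torch, torch_dtype)  # e.g. "bfloat16" -> torch.bfloat16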

@youliangtan
Member

@neil-tan #42 should resolve the error above; please merge it in, thanks!

You can simply test the inference with:

python scripts/inference_service.py --model_path /tmp/gr00t/checkpoint-40/ --server --embodiment_tag new_embodiment

@neil-tan
Contributor Author

@youliangtan
Thanks for the pointers.
LoRA alpha and dropout can now be set through the utility function and command-line arguments.
The branch is merged with the latest main and formatted, and the inference service seems to run.
Let me know if there's anything else that needs to be touched up.
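
As a usage illustration, a LoRA fine-tuning run might be launched as below; lora_rank appears in the PR description, but the alpha and dropout flag names here are assumptions based on the comment above, not verified against the script:

python scripts/gr00t_petf_finetune.py --lora_rank 32 --lora_alpha 64 --lora_dropout 0.1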

- Removed verbose LoRA module printouts
- Updated the LoRA fine-tuning hyperparameters in the README.md
@youliangtan youliangtan self-requested a review March 24, 2025 07:58
@youliangtan youliangtan dismissed their stale review March 24, 2025 07:59

changes are addressed

@youliangtan youliangtan merged commit 153ee6f into NVIDIA:main Mar 24, 2025
3 checks passed
@youliangtan
Member

All good now. Thanks for the PR!
