
How to convert the weights of pyllama 7B into lit-llama? #26

Open
mmdrahmani opened this issue Jan 28, 2025 · 3 comments

@mmdrahmani

Hi. Thanks for the great work.
You provided this instruction below:
For using the LLaMa model weight, follow [pyllama](https://github.com/juncongmoo/pyllama) to download the original LLaMA model, and then follow [Lit-LLaMA](https://github.com/Lightning-AI/lit-llama) to convert the weights to the Lit-LLaMA format. After this process, please move the lit-llama/ directory under the checkpoints/ directory.
However, when I check the Lit-LLaMA repository, I cannot find an explanation of how to convert the pyllama 7B weights (consolidated.00.pth) into the Lit-LLaMA format. Could you share the link/instructions?
Thank you
Mohammad

@41xu

41xu commented Jan 31, 2025

See https://github.com/Lightning-AI/lit-llama/blob/main/howto/download_weights.md, specifically the "Convert the weights to the Lit-LLaMA format" part. In lit-llama:

python scripts/convert_checkpoint.py --model_size 7B
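For context, here is the full flow as I recall it from the lit-llama howto; the checkpoints/llama and checkpoints/lit-llama paths are the layout that guide uses and may differ across repo versions, so treat them as assumptions:

# The conversion script expects the original weights here:
#   checkpoints/llama/7B/consolidated.00.pth
#   checkpoints/llama/tokenizer.model
mkdir -p checkpoints/llama/7B
mv consolidated.00.pth checkpoints/llama/7B/
mv tokenizer.model checkpoints/llama/
python scripts/convert_checkpoint.py --model_size 7B
# Output lands in checkpoints/lit-llama/7B/lit-llama.pth
# (filename as I recall it; verify against the howto)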

@mmdrahmani

mmdrahmani commented Feb 7, 2025

@41xu Thanks.
The downloaded llama 7B consolidated.00.pth is around 12GB. When I convert it to lit-llama format, it becomes 26GB.
Is this correct?

I downloaded the model from Hugging Face, shown below, because python -m llama.download --model_size 7B gets stuck.
wget https://huggingface.co/nyanko7/LLaMA-7B/resolve/main/consolidated.00.pth
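One thing to double-check: the Lit-LLaMA conversion also needs the tokenizer. A sketch, assuming the same Hugging Face repo also hosts tokenizer.model (I have not verified that the file is present there; otherwise fetch it from the same source as the weights):

wget https://huggingface.co/nyanko7/LLaMA-7B/resolve/main/tokenizer.model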

@41xu

41xu commented Feb 7, 2025

Hi @mmdrahmani,
For me the situation was the same. The downloading and converting process took so long that it made me wonder whether my network had been interrupted. It needs a long time (if I remember correctly).
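A plausible explanation for the size doubling (my assumption; the thread doesn't confirm the converter's default dtype): the original consolidated.00.pth stores float16 weights at 2 bytes per parameter, while the converted checkpoint is written as float32 at 4 bytes per parameter. Rough arithmetic, assuming ~6.7e9 parameters for the 7B model:

python -c "p = 6.7e9; print('fp16: %.1f GB, fp32: %.1f GB' % (p*2/2**30, p*4/2**30))"
# prints roughly: fp16: 12.5 GB, fp32: 25.0 GB
# which lines up with the ~12GB download and the ~26GB converted file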

