Fine-tune Llama 3.2 1B on a Python dataset #532
dtdo90 started this conversation in Show and tell (1 comment, 2 replies)
-
Nice, thanks for sharing @dtdo90. I noticed you were using HF transformers, which is totally reasonable for real-world applications. But in case you are curious what this model looks like when implemented from scratch, I do have some bonus material on it here :) https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/07_gpt_to_llama
-
After reading Sebastian's book, I'm more excited than ever to explore real-world applications of LLMs! I'm the type who needs to understand a model's structure before I can confidently dive into any project, and this book made everything crystal clear!
My first project is fine-tuning Llama 3.2 on a Python dataset. The code is available in my repo for anyone interested: https://github.com/dtdo90/Llama3.2_python_dataset.
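For readers who want a quick picture of what such a fine-tuning run looks like with HF transformers, here is a minimal sketch. The dataset file (`python_dataset.json`), the `text` column name, and the hyperparameters are illustrative assumptions, not taken from the linked repo:

```python
# Minimal causal-LM fine-tuning sketch with HF transformers.
# Dataset path, column name, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-3.2-1B"  # gated on the Hub; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder dataset: any text column of Python code/instructions works here.
dataset = load_dataset("json", data_files="python_dataset.json")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# mlm=False -> standard next-token (causal) language-modeling labels
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="llama3.2-1b-python",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # effective batch size of 16
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=50,
    save_strategy="epoch",
    bf16=True,  # assumes an Ampere-or-newer GPU; drop on older hardware
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

Full fine-tuning of even a 1B model is memory-hungry; on a single consumer GPU, swapping in a parameter-efficient method such as LoRA (e.g., via the `peft` library) is a common alternative.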