Consider adding this as a separate option inside the attention block:
https://huggingface.co/blog/gemma2#soft-capping-and-attention-implementations
From the above:
Putting it all together, the logits are calculated by: logits ← soft_cap ∗ tanh(logits/soft_cap)
Gemma 2 employs soft capping for the final layer and for every attention layer. The attention logits are capped at 50.0, and the final logits at 30.0.
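For reference, a minimal sketch of how this could look as an opt-in soft-capping step (not the upstream nanoGPT code; it assumes the manual, non-flash attention path, and the parameter names `attn_logit_softcap` / `final_logit_softcap` are hypothetical config options):

```python
# Sketch of tanh soft capping on attention logits and final logits, per the
# Gemma 2 blog post. Flash attention (F.scaled_dot_product_attention) does not
# expose the pre-softmax logits, so this only works on the manual path.
import math
import torch
import torch.nn.functional as F

def softcap(x, cap):
    # logits <- cap * tanh(logits / cap); bounds values to (-cap, cap)
    return cap * torch.tanh(x / cap)

def attention_with_softcap(q, k, v, causal_mask, attn_logit_softcap=50.0):
    # q, k, v: (B, n_head, T, head_dim); causal_mask: (1, 1, T, T) lower-triangular
    att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
    if attn_logit_softcap is not None:
        att = softcap(att, attn_logit_softcap)  # Gemma 2 caps attention logits at 50.0
    att = att.masked_fill(causal_mask == 0, float('-inf'))
    att = F.softmax(att, dim=-1)
    return att @ v

# At the model head, the final logits would be capped the same way (30.0 in Gemma 2):
# logits = softcap(lm_head(x), final_logit_softcap)
```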