docs/source/llm_recipes.md (+2 -2)
@@ -1,4 +1,4 @@
-LLM Quantization Models and Recipes
+LLMs Quantization Recipes
 ---
 
 Intel® Neural Compressor supported advanced large language models (LLMs) quantization technologies including SmoothQuant (SQ) and Weight-Only Quant (WOQ),
@@ -21,7 +21,7 @@ This document aims to publish the specific recipes we achieved for the popular L
 | meta-llama/Llama-2-70b-hf | ✔ | ✔ | ✔ |
 | tiiuae/falcon-40b | ✔ | ✔ | ✔ |
 
-**Detail recipes can be found [HERE](https://github.com/intel/intel-extension-for-transformers/examples/huggingface/pytorch/text-generation/quantization/llm_quantization_recipes.md).**
+**Detail recipes can be found [HERE](https://github.com/intel/intel-extension-for-transformers/blob/main/examples/huggingface/pytorch/text-generation/quantization/llm_quantization_recipes.md).**
 > Notes:
 > - This model list comes from [IPEX](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/llm.html).
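
The doc being edited names Weight-Only Quant (WOQ) as one of the supported techniques. As a rough illustration of the idea (quantize only the weights, keep activations in floating point), here is a minimal per-channel symmetric int8 sketch; the function names and the dependency-free style are ours for illustration and are not Intel Neural Compressor's actual API or implementation.

```python
# Illustrative weight-only quantization: symmetric int8, one scale per
# output channel (row). Not Intel Neural Compressor's implementation.

def quantize_weights(rows):
    """Quantize each row of a weight matrix to int8 with a per-row scale."""
    quantized, scales = [], []
    for row in rows:
        # Symmetric scheme: map the largest |w| in the row to 127.
        scale = max(abs(w) for w in row) / 127 or 1.0  # avoid 0 for all-zero rows
        quantized.append([round(w / scale) for w in row])
        scales.append(scale)
    return quantized, scales

def dequantize_weights(quantized, scales):
    """Recover approximate float weights from int8 values and scales."""
    return [[q * s for q in row] for row, s in zip(quantized, scales)]
```

At inference time only the int8 values and the per-row scales need to be stored; weights are dequantized (or the matmul is done in int8) on the fly, which is what makes WOQ attractive for memory-bound LLM workloads.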