1 parent d662ac7 commit a0004ee
keras/src/quantizers/gptq_test.py
```diff
@@ -25,7 +25,7 @@
 W_BITS = 4
 NUM_CLASSES = 32

-CALIBRATION_TEXT = """
+CALIBRATION_TEXT = r"""
 GPTQ (Generative Pre-trained Transformer Quantization) is an advanced
 post-training quantization (PTQ) algorithm designed to compress large
 language models with minimal accuracy degradation. It addresses the
```
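The commit itself does not state its motivation, but the effect of the change is well defined: prefixing the triple-quoted string with `r` makes it a raw string literal, so any backslash sequences inside the calibration text are kept verbatim rather than interpreted as escape sequences. A minimal sketch of the difference:

```python
# Raw string literals (r"...") keep backslashes literal instead of
# interpreting them as escape sequences. For a long calibration text
# that may contain sequences such as "\t" or "\n" verbatim, the raw
# form avoids accidental interpretation (and escape-related warnings).
plain = "a\tb"   # "\t" becomes a single tab character
raw = r"a\tb"    # backslash and "t" remain two separate characters

print(len(plain))  # 3: 'a', tab, 'b'
print(len(raw))    # 4: 'a', '\', 't', 'b'
```

Since the calibration string in the test is only used as tokenizer input, switching to a raw literal changes nothing when the text contains no backslashes, and simply makes the content escape-safe if it ever does.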