Add an nvfp4 PTQ convert script #536

Open
zianglih wants to merge 5 commits into radixark:main from zianglih:nvfp4-convert

Conversation


@zianglih commented Jan 31, 2026

@gemini-code-assist
Contributor

Summary of Changes

Hello @zianglih, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new utility script to facilitate the conversion of Hugging Face safetensors checkpoints to the NVFP4 format. The primary objective is to enhance the efficiency of large language models, particularly those utilizing Mixture of Experts architectures, by reducing their memory footprint and potentially accelerating inference. The script intelligently applies quantization to specific model components (MoE expert GEMMs) and ensures that the model's configuration files are appropriately updated to reflect these changes, streamlining the deployment of optimized models.

Highlights

  • New NVFP4 Conversion Script: Introduces tools/convert_hf_to_nvfp4.py, a new Python script designed for quantizing Hugging Face models to the NVFP4 format.
  • Targeted Quantization for MoE: The script specifically targets and quantizes Mixture of Experts (MoE) expert GEMMs to NVFP4 (E2M1), while leaving dense linear layers in their original precision.
  • 1D Block Scaling Implementation: Implements NVFP4 1D block scaling with a group size of 16, adhering to Transformer Engine's reference quantization methodology (a minimal sketch follows this list).
  • Automated Configuration Updates: Automatically updates config.json and hf_quant_config.json files with the new NVFP4 quantization settings and an ignore list for modules that were not quantized.
  • Layer Skipping Capability: Provides a --keep-last-n command-line option to prevent quantization for the last N transformer layers, enabling the creation of mixed-precision models.
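
To make the block-scaling highlight concrete, here is a minimal fake-quantization sketch of NVFP4 1D block scaling with a group size of 16. It is not taken from the PR: the function name, the nearest-value rounding step, and the assumption that the last weight dimension is divisible by 16 are illustrative, and the real script additionally packs two 4-bit codes per byte and stores the block scales.

import torch

E2M1_MAX = 6.0   # largest magnitude representable in NVFP4 (E2M1)
BLOCK = 16       # 1D block (group) size along the last dimension

def nvfp4_fake_quantize(w: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper, not from the PR: round-trip a weight through
    # NVFP4 1D block scaling to illustrate the structure of the transform.
    e2m1_values = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], device=w.device)
    blocks = w.float().reshape(*w.shape[:-1], -1, BLOCK)          # (..., n_blocks, 16)
    scales = blocks.abs().amax(dim=-1, keepdim=True) / E2M1_MAX   # one scale per block
    scales = torch.where(scales == 0, torch.ones_like(scales), scales)  # avoid div-by-zero
    scaled = blocks / scales
    # nearest representable E2M1 magnitude for every element
    idx = (scaled.abs().unsqueeze(-1) - e2m1_values).abs().argmin(dim=-1)
    dequant = e2m1_values[idx] * torch.sign(scaled) * scales
    return dequant.reshape(w.shape).to(w.dtype)

The divisor 6.0 is the largest E2M1 magnitude, so each block's absolute maximum lands exactly on a representable value after scaling.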

gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request introduces a script for converting Hugging Face models to the NVFP4 format, with a specific focus on Mixture-of-Experts (MoE) layers. The script is well-organized, handling file operations, model configuration updates, and memory management effectively. My review highlights a potential correctness issue in the quantization logic related to inconsistent boundary handling and suggests improving the robustness of file I/O operations by using with statements.

Comment on lines +135 to +152
result = torch.zeros_like(x, dtype=torch.uint8)
result[(x >= 0.0) & (x <= 0.25)] = 0
result[(x > 0.25) & (x < 0.75)] = 1
result[(x >= 0.75) & (x <= 1.25)] = 2
result[(x > 1.25) & (x < 1.75)] = 3
result[(x >= 1.75) & (x <= 2.5)] = 4
result[(x > 2.5) & (x < 3.5)] = 5
result[(x >= 3.5) & (x <= 5.0)] = 6
result[x > 5.0] = 7

result[(x >= -0.25) & (x < -0.0)] = 8
result[(x < -0.25) & (x > -0.75)] = 9
result[(x <= -0.75) & (x >= -1.25)] = 10
result[(x < -1.25) & (x > -1.75)] = 11
result[(x <= -1.75) & (x >= -2.5)] = 12
result[(x < -2.5) & (x > -3.5)] = 13
result[(x <= -3.5) & (x >= -5.0)] = 14
result[x < -5.0] = 15
Severity: high

The boundary conditions for quantization bins are inconsistent. Some bins are inclusive on both ends (e.g., [0.75, 1.25]), while others are exclusive (e.g., (0.25, 0.75)). This can lead to incorrect quantization for values that fall exactly on a boundary and inconsistent tie-breaking. For correctness and predictability, it's better to use a uniform convention for intervals, such as closed on the left and open on the right ([lower, upper)).

Suggested change
-result = torch.zeros_like(x, dtype=torch.uint8)
-result[(x >= 0.0) & (x <= 0.25)] = 0
-result[(x > 0.25) & (x < 0.75)] = 1
-result[(x >= 0.75) & (x <= 1.25)] = 2
-result[(x > 1.25) & (x < 1.75)] = 3
-result[(x >= 1.75) & (x <= 2.5)] = 4
-result[(x > 2.5) & (x < 3.5)] = 5
-result[(x >= 3.5) & (x <= 5.0)] = 6
-result[x > 5.0] = 7
-result[(x >= -0.25) & (x < -0.0)] = 8
-result[(x < -0.25) & (x > -0.75)] = 9
-result[(x <= -0.75) & (x >= -1.25)] = 10
-result[(x < -1.25) & (x > -1.75)] = 11
-result[(x <= -1.75) & (x >= -2.5)] = 12
-result[(x < -2.5) & (x > -3.5)] = 13
-result[(x <= -3.5) & (x >= -5.0)] = 14
-result[x < -5.0] = 15
+result = torch.zeros_like(x, dtype=torch.uint8)
+# Positive values
+result[(x >= 0.0) & (x < 0.25)] = 0
+result[(x >= 0.25) & (x < 0.75)] = 1
+result[(x >= 0.75) & (x < 1.25)] = 2
+result[(x >= 1.25) & (x < 1.75)] = 3
+result[(x >= 1.75) & (x < 2.5)] = 4
+result[(x >= 2.5) & (x < 3.5)] = 5
+result[(x >= 3.5) & (x < 5.0)] = 6
+result[x >= 5.0] = 7
+# Negative values
+result[(x >= -0.25) & (x < 0.0)] = 8
+result[(x >= -0.75) & (x < -0.25)] = 9
+result[(x >= -1.25) & (x < -0.75)] = 10
+result[(x >= -1.75) & (x < -1.25)] = 11
+result[(x >= -2.5) & (x < -1.75)] = 12
+result[(x >= -3.5) & (x < -2.5)] = 13
+result[(x >= -5.0) & (x < -3.5)] = 14
+result[x < -5.0] = 15
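
As a point of comparison only (not part of the PR or of this suggestion), the same [lower, upper) idea can be expressed with a single boundary table and torch.bucketize instead of sixteen masked assignments. Note that this variant breaks ties symmetrically in magnitude, so values exactly on a negative boundary fall into the larger-magnitude bin rather than the bin chosen by the masked version above.

import torch

_E2M1_BIN_EDGES = torch.tensor([0.25, 0.75, 1.25, 1.75, 2.5, 3.5, 5.0])

def e2m1_codes(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical alternative, not from the PR: one sorted boundary table plus
    # torch.bucketize. A magnitude exactly on a boundary maps to the larger-
    # magnitude bin; the sign bit selects codes 8-15 (sign-magnitude encoding).
    edges = _E2M1_BIN_EDGES.to(x.device)
    mag_code = torch.bucketize(x.abs().float(), edges, right=True)  # 0..7
    sign_bit = (x < 0).to(mag_code.dtype) * 8
    return (mag_code + sign_bit).to(torch.uint8)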

config_path = os.path.join(model_dir, "config.json")
if not os.path.exists(config_path):
    raise ValueError("config.json is required to use --keep-last-n.")
cfg = json.load(open(config_path))
Severity: medium

It's a best practice to use a with statement when opening files. This ensures that the file is properly closed even if an exception occurs. This pattern of json.load(open(...)) or json.dump(..., open(...)) is used in a few places in this file (e.g., lines 408, 410, 418) and should be updated for robustness.

Suggested change
-cfg = json.load(open(config_path))
+with open(config_path) as f:
+    cfg = json.load(f)
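
If the same pattern really does recur at the other call sites mentioned above, a pair of small helpers would keep every config read and write inside a with block; the helper names and the indent=2 choice are illustrative, not taken from the PR.

import json

def read_json(path):
    # Load JSON, closing the file handle even if parsing raises.
    with open(path) as f:
        return json.load(f)

def write_json(path, obj):
    # Write JSON; indent=2 is an illustrative choice, not the PR's.
    with open(path, "w") as f:
        json.dump(obj, f, indent=2)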
