Add MSE vs MinMax observer comparison tests #2110
base: main
Conversation
Summary of Changes

Hello @GOavi101, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a robust set of tests designed to evaluate and compare the quantization error produced by MSE and MinMax observers. The primary objective is to ensure that the MSE observer consistently delivers equal or superior quantization quality across a wide range of data distributions, from synthetic tensors to real-world model weights. This enhancement is crucial for validating the effectiveness of the MSE observer in optimizing model compression.
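The comparison described above can be sketched as follows. Note this is a hedged, self-contained illustration: `quant_error`, `minmax_scale`, and `mse_scale` are hypothetical helpers written for this example, not llm-compressor's actual observer API. A MinMax observer derives the scale from the absolute maximum, while an MSE-style observer searches over clipping ratios to minimize round-trip error; since the search includes the unclipped MinMax scale, its error can never be worse.

```python
import torch

def quant_error(t: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Sum of squared round-trip error for symmetric int8 quantization."""
    q = torch.clamp(torch.round(t / scale), -128, 127)
    return ((q * scale - t) ** 2).sum()

def minmax_scale(t: torch.Tensor) -> torch.Tensor:
    """MinMax-style scale: covers the full observed range."""
    return t.abs().max() / 127.0

def mse_scale(t: torch.Tensor, steps: int = 100) -> torch.Tensor:
    """MSE-style scale: grid-search clipping ratios, keep the lowest error.

    Starts from the MinMax scale, so the result is never worse than MinMax.
    """
    base = minmax_scale(t)
    best_scale, best_err = base, quant_error(t, base)
    for i in range(1, steps):
        ratio = 1.0 - i / (2 * steps)  # progressively clip outliers
        s = base * ratio
        err = quant_error(t, s)
        if err < best_err:
            best_scale, best_err = s, err
    return best_scale

torch.manual_seed(0)
w = torch.randn(256, 256)
assert quant_error(w, mse_scale(w)) <= quant_error(w, minmax_scale(w))
```

The inequality asserted on the last line is exactly the property the test suite checks across distributions and real model weights.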
- Add comprehensive test suite comparing MSE and MinMax observers
- Test on random tensors with various distributions
- Test on real model weights from transformers
- Add 'slow' pytest marker to pyproject.toml for long-running tests

Signed-off-by: Avishek Goswami <[email protected]>
Force-pushed from 5421f26 to 5352b1c
Code Review
This pull request introduces a comprehensive test suite to compare the performance of MSE and MinMax observers for quantization. The tests cover random tensors with various distributions, different tensor shapes, extreme values, and real model weights, which is great for ensuring robustness.
I've found a couple of critical issues related to shape mismatches when assigning weights to test modules, which would cause tests to fail. I've also pointed out a minor case of dead code. After addressing these points, the PR should be in good shape.
- Fix tensor shape mismatch: use tensor directly instead of tensor.T
- Fix weight_tensor shape mismatch: use weight_tensor directly instead of weight_tensor.T
- Remove unused weights variable in test_mse_vs_minmax_extreme_values

Signed-off-by: Avishek Goswami <[email protected]>
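The shape fixes above follow from PyTorch's `nn.Linear` weight layout: weights are stored as `(out_features, in_features)`, so assigning a transposed tensor raises a size mismatch at copy time. A minimal illustration of why `tensor`, not `tensor.T`, is correct:

```python
import torch
import torch.nn as nn

linear = nn.Linear(in_features=8, out_features=4)
# nn.Linear stores weight as (out_features, in_features)
assert tuple(linear.weight.shape) == (4, 8)

w = torch.randn(4, 8)
linear.weight.data.copy_(w)        # correct: shapes match

try:
    linear.weight.data.copy_(w.T)  # wrong: (8, 4) cannot copy into (4, 8)
except RuntimeError:
    print("shape mismatch: expected (4, 8), got (8, 4)")
```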
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.
Signed-off-by: Avishek Goswami <[email protected]>
Total: 33 test cases, all showing that the MSE observer's quantization error is ≤ the MinMax observer's quantization error.
kylesayrs left a comment
Looks great, thank you for your contribution!
#2094