Adding Test To Ensure All Future Quantizers Are Tested #16099
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16099
Note: Links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (1 unrelated failure.) As of commit 34d9597 with merge base 56e131b. UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull request overview
This PR establishes a comprehensive testing framework for Cadence quantizers to ensure all quantizers are tested. It introduces a parameterized test structure that verifies quantizer annotation behavior and includes a meta-test (test_all_quantizers_have_annotation_tests) that automatically discovers all CadenceQuantizer subclasses and fails if any are neither tested nor explicitly excluded.
Key changes:
- Refactored test structure to use parameterized testing with QUANTIZER_ANNOTATION_TEST_CASES
- Added helper methods to build test graphs for matmul and linear operations
- Implemented automated discovery test to enforce coverage of all future quantizers
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| backends/cadence/aot/tests/test_quantizer_ops.py | Adds parameterized quantizer annotation tests, graph builder methods, and comprehensive coverage enforcement via test_all_quantizers_have_annotation_tests |
| backends/cadence/aot/TARGETS | Adds new dependencies: parameterized, graph_builder, pass_base, and torchao |
```python
if (
    issubclass(obj, CadenceQuantizer)
    and obj is not CadenceQuantizer
    and obj.__module__ == quantizer_module.__name__
):
    all_quantizers.add(obj)
```
Copilot AI · Dec 5, 2025
The issubclass check on line 194 could raise a TypeError if obj is not a class. While this is unlikely given the inspect.isclass filter, it's safer to wrap it in a try-except or add an additional check. Consider:

```python
for _, obj in inspect.getmembers(quantizer_module, inspect.isclass):
    try:
        if (
            issubclass(obj, CadenceQuantizer)
            and obj is not CadenceQuantizer
            and obj.__module__ == quantizer_module.__name__
        ):
            all_quantizers.add(obj)
    except TypeError:
        # Not a proper class, skip
        pass
```
```python
    """Unit tests for verifying quantizer annotations are correctly applied."""

    def _build_matmul_graph(self) -> tuple[torch.fx.GraphModule, torch.fx.Node]:
        """Build a simple graph with a matmul operation."""
```
Copilot AI · Dec 5, 2025
The docstring says "Build a simple graph with a matmul operation" but doesn't document the return values. Consider adding:

```python
"""Build a simple graph with a matmul operation.

Returns:
    tuple: (GraphModule, matmul_node) where matmul_node is the target operation node.
"""
```
```python
        return gm, matmul_nodes[0]

    def _build_linear_graph(self) -> tuple[torch.fx.GraphModule, torch.fx.Node]:
        """Build a simple graph with a linear operation (no bias)."""
```
Copilot AI · Dec 5, 2025
Similar to the matmul graph builder, the docstring should document the return values:

```python
"""Build a simple graph with a linear operation (no bias).

Returns:
    tuple: (GraphModule, linear_node) where linear_node is the target operation node.
"""
```
```python
        expected_output_qspec: QuantizationSpec,
        expected_input_qspecs: list[QuantizationSpec],
    ) -> None:
        """Parameterized test for quantizer annotations."""
```
Copilot AI · Dec 5, 2025
[nitpick] The docstring is quite minimal. Consider adding more details about what the test verifies:

```python
"""Parameterized test for quantizer annotations.

Verifies that:
1. The quantizer correctly annotates the target operation node.
2. The output quantization spec matches expectations.
3. All input quantization specs match expectations and are correctly mapped.
"""
```
```python
        for i, (input_node, input_qspec) in enumerate(
            annotation.input_qspec_map.items()
        ):
            self.assertEqual(
                input_node,
                op_node.args[i],
                f"Input node mismatch at index {i}",
            )
            self.assertEqual(
                input_qspec,
                expected_input_qspecs[i],
```
Copilot AI · Dec 5, 2025
The test assumes that the iteration order of annotation.input_qspec_map.items() matches the order of arguments in op_node.args. While dictionaries in Python 3.7+ maintain insertion order, this assumption could be fragile if the input_qspec_map is populated in a different order than the arguments. Consider using explicit indexing or documenting this assumption. For example:

```python
for i, expected_qspec in enumerate(expected_input_qspecs):
    input_arg = op_node.args[i]
    self.assertIn(
        input_arg,
        annotation.input_qspec_map,
        f"Missing qspec for input at index {i}",
    )
    self.assertEqual(
        annotation.input_qspec_map[input_arg],
        expected_qspec,
        f"Input qspec mismatch at index {i}",
    )
```
```python
        quantizer.annotate(gm)

        annotation: QuantizationAnnotation = op_node.meta[Q_ANNOTATION_KEY]
        self.assertTrue(annotation._annotated)
```
Copilot AI · Dec 5, 2025
Accessing the private attribute _annotated directly may be fragile. Consider checking if the annotation exists and has the expected properties instead, or verify if there's a public API to check annotation status:

```python
self.assertTrue(getattr(annotation, "_annotated", False))
```
```python
# Test case definitions for quantizer annotation tests.
# Format: (name, graph_builder_fn, quantizer_instance, target_op, expected_output_qspec, expected_input_qspecs)
# Adding a new quantizer test only requires adding a tuple to this list.
QUANTIZER_ANNOTATION_TEST_CASES: list[
    tuple[
        str,
        GraphBuilderFn,
        CadenceQuantizer,
        OpOverload,
        QuantizationSpec,
        list[QuantizationSpec],
    ]
] = [
```
Copilot AI · Dec 5, 2025
The comment describes the tuple format, and the tuple structure in the list below has 6 matching elements. However, the description should clarify that each test case is a tuple, not just list the components. Consider reformatting as:

```python
# Test case definitions for quantizer annotation tests.
# Each test case is a tuple with the following elements:
# - name: Test case identifier
# - graph_builder_fn: Function to build the graph
# - quantizer_instance: The quantizer to test
# - target_op: The operation being quantized
# - expected_output_qspec: Expected output quantization spec
# - expected_input_qspecs: List of expected input quantization specs
```

Summary: We first create a list of quantizers that are currently not tested (we'll slowly reduce this to 0), and then we create a test to ensure that all future quantizers get tested using this framework. In order to do this, we needed to refactor how the current test is set up, specifically the parameterization. Reviewed By: mcremon-meta, zonglinpeng, hsharma35. Differential Revision: D88055443
98a2371 to d9a831b (Compare)
d9a831b to a75ffd1 (Compare)
a75ffd1 to b2e71dc (Compare)
b2e71dc to a6a7327 (Compare)
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
```python
        name: str,
        graph_builder_fn: GraphBuilderFn,
        quantizer: CadenceQuantizer,
        target: OpOverload,
```
Copilot AI · Dec 8, 2025
The target parameter is defined in the test method signature but is never used in the test implementation (lines 161-186). If this parameter is intended for documentation or future use, consider either:
- Using it to validate that the op_node's target matches the expected target
- Removing it from the test case definition if it's not needed

Example validation could be: `self.assertEqual(op_node.target, target, "Operation target mismatch")`
a6a7327 to 336e507 (Compare)
336e507 to 6bd81e9 (Compare)
…6089) Summary: We test the quantizer we added in D87996796 correctly annotates the graph. We use the graph builder to build the graph with metadata (that's needed for quantizer.annotate to recognize the nodes), and we ensure that the quantization params are as expected. Reviewed By: zonglinpeng, hsharma35. Differential Revision: D88053808
…6097) Summary: We test the CadenceWith16BitLinearActivationQuantizer. We use the graph builder to build the graph with metadata (that's needed for quantizer.annotate to recognize the nodes), and we ensure that the quantization params are as expected. Reviewed By: zonglinpeng, hsharma35. Differential Revision: D88054651
…h#16098) Summary: We consolidate the two tests we created into a single testing function using parameterization. This will make testing future Quantizers much easier, and there will be a lot less code duplication. Reviewed By: hsharma35, zonglinpeng. Differential Revision: D88054917
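The consolidation described in this commit can be illustrated with a stdlib-only sketch that uses `unittest` subtests in place of the `parameterized` decorator. All names and the fake annotation payloads below are hypothetical; the PR itself builds real FX graphs and compares QuantizationSpec objects:

```python
import unittest

# Hypothetical graph builders standing in for _build_matmul_graph /
# _build_linear_graph: each returns a (graph, annotation) pair.
def build_matmul():
    return "matmul_graph", {"output": "int8", "inputs": ["int8", "int8"]}

def build_linear():
    return "linear_graph", {"output": "int8", "inputs": ["int8"]}

# One row per quantizer, in the spirit of QUANTIZER_ANNOTATION_TEST_CASES:
# (name, graph_builder_fn, expected_output_spec, expected_input_specs)
TEST_CASES = [
    ("matmul", build_matmul, "int8", ["int8", "int8"]),
    ("linear", build_linear, "int8", ["int8"]),
]

class QuantizerAnnotationTest(unittest.TestCase):
    def test_annotations(self) -> None:
        # subTest plays the role of the `parameterized` decorator: one test
        # body covers every case, so adding a quantizer only adds a tuple.
        for name, builder, expected_out, expected_ins in TEST_CASES:
            with self.subTest(name=name):
                _, annotation = builder()
                self.assertEqual(annotation["output"], expected_out)
                self.assertEqual(annotation["inputs"], expected_ins)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(QuantizerAnnotationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The table-driven shape is the point: the assertion logic lives in one place, and test coverage grows by appending rows rather than duplicating test methods.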
6bd81e9 to 34d9597 (Compare)
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
```python
            annotation.input_qspec_map.items()
        ):
            expected_arg = op_node.args[i]
            assert isinstance(expected_arg, torch.fx.Node)
```
Copilot AI · Dec 8, 2025
Using a bare assert statement in a test is inconsistent with unittest conventions. Consider using self.assertIsInstance(expected_arg, torch.fx.Node) instead; it provides better error messages and is consistent with the rest of the test framework.

```python
self.assertIsInstance(expected_arg, torch.fx.Node)
```
```python
        name: str,
        graph_builder_fn: GraphBuilderFn,
        quantizer: CadenceQuantizer,
        target: OpOverload,
```
Copilot AI · Dec 8, 2025
[nitpick] The target parameter is defined but not used in the test method body. Consider adding a verification that op_node.target == target to ensure the test is validating the expected operation type. This would make the test more robust by explicitly checking that the graph builder function created the expected operation.
Summary:
We first create a list of quantizers that are currently not tested (we'll slowly reduce this to 0), and then we create a test to ensure that all future quantizers get tested using this framework.
In order to do this, we needed to refactor how the current test is set up, specifically the parameterization.
Reviewed By: hsharma35
Differential Revision: D88055443