Conversation

namgyu-youn
Contributor

@namgyu-youn namgyu-youn commented Aug 11, 2025

Summary:
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial).

- fix: pytorch#2078

Test Plan: CI

pytorch-bot bot commented Aug 11, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2729

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 11, 2025
@jainapurva jainapurva self-requested a review August 11, 2025 16:24
@jainapurva
Contributor

@namgyu-youn thanks for taking up this effort

@jainapurva jainapurva added the topic: not user facing Use this tag if you don't want this PR to show up in release notes label Aug 11, 2025
-        self.linear1 = torch.nn.Linear(k, n, bias=False).to(dtype)
+        self.linear1 = torch.nn.Linear(m, n, bias=False)
+        self.linear2 = torch.nn.Linear(n, k, bias=False)
+        self.linear3 = torch.nn.Linear(k, 1, bias=False)
Contributor

Please create a separate model for the two-linear-layer case. This single-linear model is used in the benchmarking run on CI.

Contributor Author
@namgyu-youn namgyu-youn Aug 11, 2025

@jainapurva I would prefer to define ToySingleLinearModel and ToyMultiLinearModel for a future update, as you mentioned, but how about reverting benchmark_aq.py?

Unit tests (e.g., test_quant_api.py, test_awq.py) use single and multiple layers in a mixed manner, so switching them to only multi-layer models would itself be a behavioral change. If that makes sense, benchmark_aq.py would be the only remaining user of the single-linear model. Let me know which option aligns better.

Contributor

ToySingleLinearModel and ToyMultiLinearModel sound good. Please ensure all the tests run smoothly with them.
For benchmark_aq.py, you can add a bias parameter as the last arg in __init__ and set it to False by default. In addition, ToySingleLinearModel is used in .github/workflows/run_microbenchmarks.yml via create_model_and_input_data; please ensure that method still runs smoothly and is updated for the new toy models.

Contributor Author

Sorry for opening the PR without checking that. I will follow your suggestion; thanks for the guidance.

@namgyu-youn namgyu-youn marked this pull request as draft August 11, 2025 22:23
@jainapurva
Contributor

@namgyu-youn Please feel free to divide this into multiple PRs if it's too many changes.

@namgyu-youn namgyu-youn marked this pull request as ready for review August 12, 2025 01:04
@namgyu-youn namgyu-youn requested a review from jainapurva August 12, 2025 01:04
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial).

- fix: pytorch#2078

Test Plan: CI
@jainapurva
Contributor

@namgyu-youn There are some merge conflicts in the branch. Please rebase it onto main. If needed, I can help with that.

@namgyu-youn namgyu-youn marked this pull request as draft August 16, 2025 16:08
@namgyu-youn namgyu-youn marked this pull request as ready for review August 17, 2025 05:52
@namgyu-youn
Contributor Author

@jainapurva Could you take a look at this PR? It passed CI after resolving the merge conflict.

        self.linear3 = torch.nn.Linear(k, 64, bias=has_bias)

    def example_inputs(
        self, batch_size=1, sequence_length=10, dtype=torch.float32, device="cpu"
Contributor

nit: should we move dtype and device to __init__ as well to be consistent?

Contributor Author

If there is a plan to expand the toy model (e.g., to support backward), we could consider moving them to __init__. But I am fine keeping this as-is, since it is slightly more concise and there is no plan to expand it.

Contributor

What I meant is that the linear layers in __init__ should take dtype and device as well. It doesn't make sense to define the linear modules with one device/dtype but get example_inputs with another, so it might be easier to define these in __init__ and not worry about them in example_inputs.

Contributor Author

Oh, I missed that. Updating __init__ is much better, thanks.



class ToyMultiLinearModel(torch.nn.Module):
    def __init__(self, m=512, n=256, k=128, has_bias=True):
Contributor
@jerryzh168 jerryzh168 Aug 21, 2025

I feel m, n, k should actually be required (and the m, n, k naming should probably change a bit, since it is easily confused with the shapes of the linear layers themselves).

Also, do we need 3 linears? Can this be 2 linears, renamed to TwoLinearModel to make it clearer?

Contributor Author
@namgyu-youn namgyu-youn Aug 22, 2025

That means the user would have to pass m, n, k whenever ToyMultiLinearModel is instantiated, right? In my view, (m, n, k) could be renamed to (input_dim, hidden_dim, output_dim); let me know if there is a better option.

Also, in the old version there were two scripts (test_awq.py and test_smoothquant.py) related to performance (error range; AWQ and SmoothQuant). But since they are quite far from a real benchmark, I am fine going with 2 linears for brevity. ToyTwoLinearModel sounds good to me.

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 22, 2025 10:27
        x = self.linear1(x)
        x = self.linear2(x)
        return x
from torchao.testing.model_architectures import ToyTwoLinearModel
Contributor

can you revert the changes for this? I think it's better to have this tutorial self-contained

Contributor Author

Yes, keeping them in the tutorial sounds good to me; I will revert it.

@@ -29,19 +29,9 @@ First, let's set up our toy model:

 import copy
 import torch
+from torchao.testing.model_architectures import ToyTwoLinearModel
Contributor

also this

"""Single linear for m * k * n problem size"""

def __init__(
self, m=64, n=32, k=64, has_bias=False, dtype=torch.float, device="cuda"
Contributor

default dtype should probably be torch.bfloat16 I feel



class ToyTwoLinearModel(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, has_bias=False):
Contributor

dtype and device?

Contributor Author

Oh, I missed that, thanks.
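
For context, here is a minimal sketch of what the model could look like after this round of review, assuming the (input_dim, hidden_dim, output_dim) naming, the has_bias flag, and dtype/device passed through __init__ as discussed above (an illustration, not the merged torchao code):

import torch

class ToyTwoLinearModel(torch.nn.Module):
    """Two-linear toy model: input_dim -> hidden_dim -> output_dim."""

    def __init__(
        self,
        input_dim,
        hidden_dim,
        output_dim,
        has_bias=False,
        dtype=torch.bfloat16,  # bfloat16 default per the review; an assumption here
        device="cuda",
    ):
        super().__init__()
        self.dtype = dtype
        self.device = device
        self.linear1 = torch.nn.Linear(
            input_dim, hidden_dim, bias=has_bias, dtype=dtype, device=device
        )
        self.linear2 = torch.nn.Linear(
            hidden_dim, output_dim, bias=has_bias, dtype=dtype, device=device
        )

    def example_inputs(self, batch_size=1):
        # dtype/device come from __init__, so the inputs always match the weights
        return (
            torch.randn(
                batch_size,
                self.linear1.in_features,
                dtype=self.dtype,
                device=self.device,
            ),
        )

    def forward(self, x):
        x = self.linear1(x)
        x = self.linear2(x)
        return x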

@@ -179,7 +220,7 @@ def create_model_and_input_data(
         m, k, n (int): dimensions of the model and input data
     """
     if model_type == "linear":
-        model = ToyLinearModel(k, n, high_precision_dtype).to(device)
+        model = ToyTwoLinearModel(k, n, high_precision_dtype).to(device)
Contributor

arg seems to be wrong here?

Contributor Author

I misunderstood its workflow. This if-else and test_model_architecture.py should be updated to use the following:

model, input_data = create_model_and_input_data(
    "linear", 10, 64, 32, device=device
)
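
A hedged sketch of how the corrected "linear" branch could then look, assuming the (input_dim, hidden_dim, output_dim) signature and an illustrative hidden dimension of 64 (not necessarily the final values in the PR):

if model_type == "linear":
    # input_dim=k, hidden_dim=64 (illustrative), output_dim=n
    model = ToyTwoLinearModel(k, 64, n, dtype=high_precision_dtype, device=device)
    input_data = torch.randn(m, k, dtype=high_precision_dtype, device=device)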

@@ -284,7 +265,7 @@ def test_static_quant(target_dtype: torch.dtype, mapping_type: MappingType):
     weight_obs = AffineQuantizedMinMaxObserver(
         mapping_type,
         target_dtype,
-        granularity=PerAxis(axis=0),
+        granularity=PerTensor(),
Contributor

why is this changed?

Contributor Author

That was a misunderstanding on my part while fixing the observer's input shape. I will revert it along with the related workflows.
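
For readers following along, the difference between the two granularities can be illustrated with plain tensor ops (illustrative only, not the torchao observer internals):

import torch

# Weight of shape (out_features, in_features)
w = torch.randn(32, 64)

# PerAxis(axis=0): one (min, max) pair per output channel
per_axis_min = w.amin(dim=1)  # shape (32,)
per_axis_max = w.amax(dim=1)  # shape (32,)

# PerTensor: a single (min, max) pair for the whole tensor
per_tensor_min = w.amin()  # scalar
per_tensor_max = w.amax()  # scalar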

@@ -113,7 +102,7 @@ def test_fp8_linear_variants(
     input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda")

     # Create a linear layer with bfloat16 dtype
-    model = ToyLinearModel(K, N).eval().to(dtype).to("cuda")
+    model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda")
Contributor

K, N, K?

Contributor Author

Unlike the old ToyLinearModel, ToyTwoLinearModel takes input_dim (K), hidden_dim (64), and output_dim (N); the following case is the same.
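
As a quick sanity check of the shape flow (with 64 as the hidden dimension chosen in the PR):

import torch

K, N = 128, 32
x = torch.randn(4, K)
linear1 = torch.nn.Linear(K, 64, bias=False)  # input_dim -> hidden_dim
linear2 = torch.nn.Linear(64, N, bias=False)  # hidden_dim -> output_dim
assert linear2(linear1(x)).shape == (4, N)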

@@ -222,7 +211,7 @@ def test_kernel_preference_numerical_equivalence(self, granularity, sizes):
     dtype = torch.bfloat16
     input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda")
     # Create a linear layer with bfloat16 dtype
-    model = ToyLinearModel(K, N).eval().to(dtype).to("cuda")
+    model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda")
Contributor

same here?

@jerryzh168
Contributor

please run the changed tests locally as well

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 26, 2025 15:42
Development

Successfully merging this pull request may close these issues.

Refactor torchao and tests to use model architectures from torchao.testing.model_architectures