refactor commonly used toy model #2729
base: main
Conversation
Integrates the commonly used toy model and refactors its uses across TorchAO (ao/test/benchmark/tutorial). Fixes: pytorch#2078. Test Plan: CI
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2729
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@namgyu-youn thanks for taking up this effort
self.linear1 = torch.nn.Linear(k, n, bias=False).to(dtype)
self.linear1 = torch.nn.Linear(m, n, bias=False)
self.linear2 = torch.nn.Linear(n, k, bias=False)
self.linear3 = torch.nn.Linear(k, 1, bias=False)
Please create a separate model for the two linear layers. This single-linear-layer model is used in the benchmarking run on CI.
@jainapurva I prefer to define ToySingleLinearModel and ToyMultiLinearModel in a future update as you mentioned, but how about reverting benchmark_aq.py? Unit tests (e.g., test_quant_api.py, test_awq.py) use single and multiple linear layers in a mixed manner, so moving them to multi-layer models only would itself be an update. If that makes sense, benchmark_aq.py would be the only case still using a single linear layer. Let me know which option aligns better.
ToySingleLinearModel and ToyMultiLinearModel sound good. Please ensure all the tests run smoothly with them.
For benchmark_aq.py you can add the bias parameter as the last arg in init and set it to False by default. In addition, ToySingleLinearModel is used when running .github/workflows/run_microbenchmarks.yml, which uses create_model_and_input_data; please ensure that method still runs smoothly and is updated for the new toy models.
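For illustration, a minimal sketch of what the single-linear model could look like under this suggestion, with bias as the trailing argument defaulting to False; the class name follows the discussion above and the dimensions are only placeholders:

import torch


class ToySingleLinearModel(torch.nn.Module):
    """Single linear layer, e.g. for the CI benchmarking run in benchmark_aq.py."""

    def __init__(self, m=64, n=32, bias=False):
        super().__init__()
        # bias comes last and defaults to False, as suggested for benchmark_aq.py
        self.linear1 = torch.nn.Linear(m, n, bias=bias)

    def forward(self, x):
        return self.linear1(x)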
Sorry for opening the PR without checking that. I will follow your suggestion; thanks for the guidance.
@namgyu-youn Please feel free to divide this into multiple PRs if it's too many changes.
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial). Fixes: pytorch#2078. Test Plan: CI
@namgyu-youn There are some merge conflicts in the branch. Please rebase it onto main. If needed, I can help with that.
@jainapurva Could you take a look at this PR? It passed CI after resolving the merge conflict.
self.linear3 = torch.nn.Linear(k, 64, bias=has_bias)

def example_inputs(
    self, batch_size=1, sequence_length=10, dtype=torch.float32, device="cpu"
nit: should we move dtype and device to __init__ as well, to be consistent?
If there were a plan to expand the toy model (e.g., to support backward), we could consider moving them to __init__. But I am fine keeping it as is, since it's slightly more concise and there is no plan to expand it.
What I meant is that the linears in init should have dtype and device as well; it doesn't make sense to define the linear modules with one device/dtype but get example_inputs in another. So it might be easier to just define these in init and not worry about them in example_inputs.
Oh I missed it. Updating init is much better, thanks.
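To make the suggestion concrete, here is a rough sketch (not the final API; the names, defaults, and example_inputs shape are assumptions) where dtype and device live in __init__ so the linear modules and the example inputs can never disagree:

import torch


class ToyModel(torch.nn.Module):
    def __init__(self, m=512, n=256, k=128, has_bias=True,
                 dtype=torch.float32, device="cpu"):
        super().__init__()
        self.dtype = dtype
        self.device = device
        # the linear modules are created directly with the requested dtype/device
        self.linear1 = torch.nn.Linear(m, n, bias=has_bias, dtype=dtype, device=device)
        self.linear2 = torch.nn.Linear(n, k, bias=has_bias, dtype=dtype, device=device)

    def example_inputs(self, batch_size=1, sequence_length=10):
        # example inputs reuse the module's own dtype/device instead of taking them as args
        return (torch.randn(batch_size, sequence_length, self.linear1.in_features,
                            dtype=self.dtype, device=self.device),)

    def forward(self, x):
        return self.linear2(self.linear1(x))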
class ToyMultiLinearModel(torch.nn.Module):
    def __init__(self, m=512, n=256, k=128, has_bias=True):
I feel m, n, k should actually be required (and we should probably change the m, n, k naming a bit, since it's easily confused with the shapes of the linear itself).
Also, do we need 3 linears? Can this be 2 linears, renamed to TwoLinearModel to make it clearer?
That means the user should pass m, n, k whenever ToyMultiLinearModel is called, right? In my opinion, (m, n, k) can be renamed to (input_dim, hidden_dim, output_dim). Let me know if there is a better option.
Also, in the old version there were two scripts (test_awq.py and test_smoothquant.py) related to performance (error range; AWQ and SmoothQuant). But since they are quite far from a real benchmark, I am fine going with 2 linears for brevity. ToyTwoLinearModel sounds good to me.
docs/source/serialization.rst
Outdated
x = self.linear1(x)
x = self.linear2(x)
return x
from torchao.testing.model_architectures import ToyTwoLinearModel
Can you revert the changes for this? I think it's better to have this tutorial self-contained.
Yes, keeping them in the tutorial sounds good to me; I will revert it.
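For reference, a self-contained version of the tutorial's toy model could look roughly like the snippet below, reconstructed from the diff context above (the exact name and dimensions in the docs may differ):

import torch


class ToyLinearModel(torch.nn.Module):
    """Toy model defined inline so the tutorial stays self-contained."""

    def __init__(self, m, n, k):
        super().__init__()
        self.linear1 = torch.nn.Linear(m, n, bias=False)
        self.linear2 = torch.nn.Linear(n, k, bias=False)

    def forward(self, x):
        x = self.linear1(x)
        x = self.linear2(x)
        return x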
docs/source/quick_start.rst
Outdated
@@ -29,19 +29,9 @@ First, let's set up our toy model:
import copy
import torch
from torchao.testing.model_architectures import ToyTwoLinearModel
also this
"""Single linear for m * k * n problem size""" | ||
|
||
def __init__( | ||
self, m=64, n=32, k=64, has_bias=False, dtype=torch.float, device="cuda" |
default dtype should probably be torch.bfloat16 I feel
class ToyTwoLinearModel(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, has_bias=False):
dtype and device?
oh I missed it, thanks.
@@ -179,7 +220,7 @@ def create_model_and_input_data(
    m, k, n (int): dimensions of the model and input data
    """
    if model_type == "linear":
        model = ToyLinearModel(k, n, high_precision_dtype).to(device)
        model = ToyTwoLinearModel(k, n, high_precision_dtype).to(device)
arg seems to be wrong here?
I misunderstood its workflow. This if-else and test_model_architecture.py should be updated to use the following:
model, input_data = create_model_and_input_data(
    "linear", 10, 64, 32, device=device
)
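A hedged sketch of how the linear branch of create_model_and_input_data might then be adjusted for the new constructor; the hidden size of 64 mirrors the other call sites in this PR, and the keyword names are assumptions rather than the final signature:

if model_type == "linear":
    # map the old ToyLinearModel(k, n, high_precision_dtype) positional args onto
    # the new (input_dim, hidden_dim, output_dim) signature; 64 is an assumed hidden size
    model = ToyTwoLinearModel(
        input_dim=k,
        hidden_dim=64,
        output_dim=n,
        dtype=high_precision_dtype,
        device=device,
    )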
@@ -284,7 +265,7 @@ def test_static_quant(target_dtype: torch.dtype, mapping_type: MappingType):
weight_obs = AffineQuantizedMinMaxObserver(
    mapping_type,
    target_dtype,
    granularity=PerAxis(axis=0),
    granularity=PerTensor(),
why is this changed?
That was my misunderstanding while fixing the observer's input shape. I will revert it along with the related workflows.
@@ -113,7 +102,7 @@ def test_fp8_linear_variants(
input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda")

# Create a linear layer with bfloat16 dtype
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda")
model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda")
K, N, K?
Unlike the old ToyLinearModel, ToyTwoLinearModel takes input_dim (K), hidden_dim (64), and output_dim (N); the following case is the same.
@@ -222,7 +211,7 @@ def test_kernel_preference_numerical_equivalence(self, granularity, sizes):
dtype = torch.bfloat16
input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda")
# Create a linear layer with bfloat16 dtype
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda")
model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda")
same here?
please run the changed tests locally as well
Summary:
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial).
Test Plan: CI