[DO NOT MERGE][Reproducing issue] : View converter , bs >1 #585
base: master
Conversation
Commits:
- removed caffe2 dependency
- If ceil_mode is False, the default value of layer.padding_mode is trt.PaddingMode.EXPLICIT_ROUND_DOWN. If ceil_mode is True, padding_mode should be trt.PaddingMode.EXPLICIT_ROUND_UP.
- clamp and normalize
- consider ceil_mode of torch.nn.MaxPool2d
- added mobilenet_v2 to module tests
- …actor: adds avg_pool2d, adds max_pool2d, removes AvgPool2d, removes MaxPool2d, adds get_arg(...), adds torch_dim_to_trt_axes(...), adds add_trt_constant(...)
- adds ``torch.chunk`` and ``torch.Tensor.chunk``; adds ``torch.split`` and ``torch.Tensor.split``; adds tests for ``squeezenet*`` models
- added permute
- Remove duplicate filenames which do not work on Windows by merging files; fix relu tests (Co-authored-by: Koen van de Sande <[email protected]>)
- …ations (NVIDIA-AI-IOT#505): initial version of ne, floordiv, mod and tensor converters; extend ops for relu and sigmoid; extend relu and sigmoid converters to Tensor methods; update CHANGELOG.md
- …T#482: added passing of torch2trt_kwargs to conversion context
- …OT#511: added filter to floordiv to only enable it for PyTorch 1.6+; enabled soft failure for missing torch method
- increment version to 0.2.0; release push docs tag fix
- added conv_functional; add Tensor flatten; update changelog for functional conv / flatten; add site to gitignore
- …in the file CLA.md of this project. Signed, John Welsh
- …onverter: Linear functional converter
- added converter for torch.roll
Hi @SrivastavaKshitij,

Thanks for pointing this out. This likely has to do with this line:

torch2trt/torch2trt/torch2trt.py, line 517 (at 817f937)

When we pass the example data into the model, we do it with batch size 1. From my understanding, converters shouldn't modify the batch dimension, so this would be acceptable, but perhaps I'm missing something. Let me know if this helps.

Best,
Hey John,

Did some more experiments. When I changed line 517 from

to

I was able to see the batch dimension, so that's good. However, torch2trt/torch2trt/torch2trt.py, line 153 (at 817f937)

removes the batch dimension again, and I don't know why that is. What do you think? I added a print statement of the tensor shape before and after line 153, and this is what I got:

Somehow the batch dimension is removed. This may not have impacted other ops, but it will impact ops such as view, because the volume of the tensor will not match.
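The volume mismatch described above can be sketched without TensorRT or torch installed. The shapes below come from the test case in this thread; the `volume` helper is illustrative only, not part of torch2trt.

```python
from functools import reduce
from operator import mul

def volume(shape):
    """Total number of elements in a tensor of the given shape."""
    return reduce(mul, shape, 1)

full_shape = (2, 3, 3, 3)     # input with batch size 2
stripped   = full_shape[1:]   # what the converter sees once the batch dim is removed

# View(1, -1) resolves -1 against the full volume (54 elements), but the
# converter only sees 27 elements, so the reshape volumes no longer match.
assert volume(full_shape) == 54
assert volume(stripped) == 27
```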
@add_module_test(torch.float32, torch.device('cuda'), [(1, 3, 3, 3)])
# @add_module_test(torch.float32, torch.device('cuda'), [(1, 3)])
# @add_module_test(torch.float32, torch.device('cuda'), [(1, 3, 3)])
@add_module_test(torch.float32, torch.device('cuda'), [(2, 3, 3, 3)], max_batch_size=3)
def test_view_1d():
    return View(1, -1)
It looks like the test case is hard-coding the batch dimension to 1 here.
In general, TensorRT engines are broadcast across the batch dimension, so operations that change the batch dimension aren't permitted.
Perhaps adding a test case with View(2, ...) would work for batch size 2. Or maybe even View(-1, ...) with other dimensions specified explicitly.
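The suggestion above can be sketched in pure Python (no torch or TensorRT required). `resolve_view` is a hypothetical helper that mimics how `torch.Tensor.view` infers a single `-1` dimension: `View(1, -1)` pins the leading dimension to 1 and collapses the batch into it, while `View(-1, 27)` lets the batch dimension be inferred and so stays batch-agnostic.

```python
def resolve_view(input_shape, view_dims):
    """Resolve a single -1 in view_dims against the input's element count (sketch)."""
    total = 1
    for s in input_shape:
        total *= s
    known = 1
    infer_at = None
    for i, d in enumerate(view_dims):
        if d == -1:
            infer_at = i
        else:
            known *= d
    if infer_at is None:
        if known != total:
            raise ValueError("view volume mismatch")
        return tuple(view_dims)
    if total % known != 0:
        raise ValueError("view volume mismatch")
    out = list(view_dims)
    out[infer_at] = total // known
    return tuple(out)

# Batch-agnostic: the batch dimension (2) is inferred, other dims are explicit.
assert resolve_view((2, 3, 3, 3), (-1, 27)) == (2, 27)

# View(1, -1) still "works" in eager PyTorch for batch size 2, but it changes
# the batch dimension from 2 to 1, which TensorRT does not permit.
assert resolve_view((2, 3, 3, 3), (1, -1)) == (1, 54)
```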
Hi @jaybdub,

I recently came across a use case where conversion fails in the view converter when batch size > 1.
Command:
python -m torch2trt.test --name=view
Some pointers: even though the input size was [2, 3, 3, 3], the view converter was still seeing the input size as [1, 3, 3, 3] (line 15 in the view converter). I couldn't debug it though.