Support torch.int32 as a dtype for quantize and dequantize (#289)
Summary:
Pull Request resolved: #289
Ops like `quantized_decomposed.quantize_per_tensor.default` did not support an
int32 quantized type. Add int32 support to the portable and ATen runtimes.
This is important for Turing, which uses int32 to represent uint16 (the latter is
not a valid PyTorch dtype).
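As a rough illustration of why int32 can stand in for uint16, here is a minimal sketch of affine per-tensor quantize/dequantize with an int32 output dtype. The function names and parameters are illustrative, not the actual `quantized_decomposed` op signatures; this only shows the math the ops implement, assuming the standard formula q = clamp(round(x / scale) + zero_point, qmin, qmax).

```python
import torch

def quantize_per_tensor_int32(x, scale, zero_point, qmin, qmax):
    # Affine quantization: q = clamp(round(x / scale) + zero_point, qmin, qmax).
    # int32 comfortably holds the full uint16 range [0, 65535].
    q = torch.round(x / scale) + zero_point
    return torch.clamp(q, qmin, qmax).to(torch.int32)

def dequantize_per_tensor_int32(q, scale, zero_point):
    # Inverse mapping back to float: x ≈ (q - zero_point) * scale.
    return (q.to(torch.float32) - zero_point) * scale

# Emulate uint16 quantization parameters (hypothetical values for illustration).
scale, zero_point, qmin, qmax = 1.0 / 65535, 0, 0, 65535
x = torch.tensor([0.0, 0.5, 1.0])
q = quantize_per_tensor_int32(x, scale, zero_point, qmin, qmax)
xr = dequantize_per_tensor_int32(q, scale, zero_point)
```

The quantized tensor carries dtype `torch.int32`, while `qmin`/`qmax` clamp values to the uint16 range, so the extra int32 headroom is never used for out-of-range values.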
Reviewed By: kimishpatel
Differential Revision: D49202048
fbshipit-source-id: 0faa89ce1d34b60ece443fb02fa14f02abf2d376