pytorch/ios-demo-app

How to quantize MobileNet

Closed this issue · 5 comments

I followed the tutorial (https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#post-training-static-quantization) to quantize MobileNet, but loading the quantized model in C++ (torch::jit::load) fails. The error is:

terminate called after throwing an instance of 'c10::Error'
  what():  isTensor() INTERNAL ASSERT FAILED at /home/firefly/pytorch/torch/include/ATen/core/ivalue_inl.h:111, please report a bug to PyTorch. Expected Tensor but got GenericList (toTensor at /home/firefly/pytorch/torch/include/ATen/core/ivalue_inl.h:111)

Please let me know how you quantize MobileNet.
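For reference, this is roughly the flow that tutorial describes, applied to torchvision's quantization-ready MobileNetV2. It is only a minimal sketch; `calibration_loader` is a placeholder for your own calibration DataLoader, and the qnnpack backend is assumed because the target is mobile.

```python
import torch
from torchvision.models.quantization import mobilenet_v2

# Minimal sketch of post-training static quantization for MobileNetV2.
# `calibration_loader` is a placeholder for a DataLoader over calibration images.
model = mobilenet_v2(pretrained=True)   # quantization-ready variant with fuse_model()
model.eval()

model.qconfig = torch.quantization.get_default_qconfig('qnnpack')  # mobile backend
model.fuse_model()                                 # fuse Conv+BN+ReLU blocks
torch.quantization.prepare(model, inplace=True)    # insert observers

with torch.no_grad():                              # calibrate on a few batches
    for images, _ in calibration_loader:
        model(images)

torch.quantization.convert(model, inplace=True)    # swap in quantized modules

torch.jit.script(model).save("mobilenet_quantized.pt")
```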

xta0 commented

@supriyar @jerryzh168 Do you guys have a tutorial on how to quantize the MobileNet model?


That is the tutorial we use to quantize MobileNet. We also have a quantized MobileNet here: https://github.com/pytorch/vision/blob/master/torchvision/models/quantization/mobilenet.py
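For completeness, a small sketch of grabbing that ready-made quantized MobileNetV2 and exporting it so it can be loaded with torch::jit::load (the output file name is arbitrary):

```python
import torch
from torchvision.models.quantization import mobilenet_v2

# Download the already-quantized MobileNetV2 weights and script the model
# so the resulting file can be loaded from C++ with torch::jit::load.
model = mobilenet_v2(pretrained=True, quantize=True)
model.eval()

torch.jit.script(model).save("mobilenet_v2_quantized.pt")
```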

@jerryzh168
When I put a quantized MobileNet model into the ios-demo-app and use the '1.5.0+cu101' version of torch, the project crashes at the build stage with the error below.
However, everything works when I use the older version '1.4.0+cu92'.

Maybe I'm doing something wrong, but I used the model from https://github.com/pytorch/vision/blob/master/torchvision/models/quantization/mobilenet.py:
Could not find any similar ops to quantized::linear_unpack_fp16. This op may not exist or may not be currently supported in TorchScript.
:
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/quantized/modules/linear.py", line 38
            return torch.ops.quantized.linear_unpack(self._packed_params)
        elif self.dtype == torch.float16:
            return torch.ops.quantized.linear_unpack_fp16(self._packed_params)
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        else:
            raise RuntimeError('Unsupported dtype on dynamic quantized linear!')
Serialized   File "code/__torch__/torch/nn/quantized/modules/linear/___torch_mangle_4026.py", line 23
        else:
            if torch.eq(self.dtype, 5):
                _7, _8 = ops.quantized.linear_unpack_fp16(self._packed_params)
                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
                _6 = (_7, _8)
            else:
'LinearPackedParams._weight_bias' is being compiled since it was called from 'LinearPackedParams.__getstate__'
Serialized   File "code/__torch__/torch/nn/quantized/modules/linear/___torch_mangle_4026.py", line 7
    _packed_params : Tensor
    def __getstate__(self: __torch__.torch.nn.quantized.modules.linear.___torch_mangle_4026.LinearPackedParams) -> Tuple[Tensor, Optional[Tensor], bool, int]:
        qweight, bias, = (self)._weight_bias()
                         ~~~~~~~~~~~~~~~~~~~ <--- HERE
        _0 = (qweight, bias, self.training, self.dtype)
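One plausible cause (not confirmed in this thread) is a version mismatch: quantized::linear_unpack_fp16 only exists in newer PyTorch builds, so a model exported with 1.5.0 may fail to load in a 1.4-era runtime. As an assumed sanity check, you can probe on the Python side whether the build that will load the model registers the op:

```python
import torch

# Assumed sanity check: the serialized model references quantized::linear_unpack_fp16;
# looking up a missing op via torch.ops raises an error, so probe for it explicitly.
print(torch.__version__)
try:
    torch.ops.quantized.linear_unpack_fp16
    print("quantized::linear_unpack_fp16 is registered")
except (RuntimeError, AttributeError):
    print("quantized::linear_unpack_fp16 is NOT registered in this build")
```

The same idea applies on the app side: the LibTorch pod in the iOS project should be at least as new as the torch version used to export the model.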

@ronjian "but got GenericList" msg means that you return multi value in your forward function(in Network).