Runtime error
JXFOnestep opened this issue · 3 comments
terminate called after throwing an instance of 'std::runtime_error'
what(): The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch/models/yolo.py", line 33, in forward
_22 = getattr(self.model, "2")
_23 = getattr(self.model, "1")
_24 = (getattr(self.model, "0")).forward(x, )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_25 = (_22).forward((_23).forward(_24, ), )
_26 = (_20).forward((_21).forward(_25, ), )
File "code/torch/models/common.py", line 10, in forward
def forward(self: torch.models.common.Conv,
x: Tensor) -> Tensor:
_0 = (self.act).forward((self.conv).forward(x, ), )
~~~~~~~~~~~~~~~~~~ <--- HERE
return _0
class C3(Module):
File "code/torch/torch/nn/modules/conv.py", line 11, in forward
x: Tensor) -> Tensor:
_0 = self.bias
x0 = torch._convolution(x, self.weight, _0, [2, 2], [2, 2], [1, 1], False, [0, 0], 1, False, False, True, True)
~~~~~~~~~~~~~~~~~~ <--- HERE
return x0
models/export.py(65):
RuntimeError: Input type (CPUFloatType) and weight type (CUDAFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
Is your libtorch the version specified in the README?
My libtorch matches the version you provided. A standalone libtorch test case runs without errors, but as soon as I integrate it into the SLAM framework it fails, and I don't know why.
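The RuntimeError above ("Input type (CPUFloatType) and weight type (CUDAFloatType) should be the same") means the exported model's weights live on the GPU while the input tensor is still on the CPU. A minimal sketch of the usual fix, shown in Python with a plain `Conv2d` as a hypothetical stand-in for the traced YOLOv5 model (in libtorch C++ the equivalent is calling `module.to(device)` and `tensor.to(device)` before `forward`):

```python
import torch

# Hypothetical stand-in for the traced model; any module behaves the same way.
model = torch.nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
x = torch.randn(1, 3, 64, 64)  # input starts on the CPU

# Pick ONE device and move BOTH the model and the input to it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
x = x.to(device)

y = model(x)  # no CPUFloatType/CUDAFloatType mismatch
print(y.shape)  # torch.Size([1, 16, 32, 32])
```

If only one of the two is moved (e.g. the model loaded onto CUDA but the SLAM frame converted to a CPU tensor), the `torch._convolution` call fails exactly as in the traceback.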
This problem is solved: I was running inference on the GPU; after switching to the CPU build the error went away. But now there is a new problem:
terminate called after throwing an instance of 'c10::Error'
what(): isTuple() INTERNAL ASSERT FAILED at "/home/jxf/environment/libtorch-1.8.0+cpu/include/ATen/core/ivalue_inl.h":1097, please report a bug to PyTorch. Expected Tuple but got GenericList
Exception raised from toTuple at /home/jxf/environment/libtorch-1.8.0+cpu/include/ATen/core/ivalue_inl.h:1097 (most recent call first):
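The `isTuple()` assertion fails because the calling code converts the model output with `toTuple()`, but this export actually returns a list: in TorchScript a `List[Tensor]` return value surfaces as a `GenericList` on the C++ side, and the return type can differ between model versions and export settings. A minimal sketch of the distinction, using hypothetical scripted functions (the libtorch C++ mapping is shown in the comments):

```python
import torch

@torch.jit.script
def as_tuple(x: torch.Tensor):
    # Tuple[Tensor, Tensor] -> toTuple() succeeds on the C++ side
    return (x, x * 2)

@torch.jit.script
def as_list(x: torch.Tensor):
    # List[Tensor] -> GenericList on the C++ side; toTuple() throws
    return [x, x * 2]

x = torch.ones(2)
print(type(as_tuple(x)))  # <class 'tuple'>
print(type(as_list(x)))   # <class 'list'>

# In libtorch C++, check the IValue's type before converting, e.g.:
#   auto out = module.forward({input});
#   torch::Tensor det = out.isTuple()
#       ? out.toTuple()->elements()[0].toTensor()
#       : out.toList().get(0).toTensor();
```

Guarding with `isTuple()` / `isList()` keeps the same C++ code working across exports that return either shape of output.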