SBR PyTorch model to TorchScript (torch.jit.script) model
kadirbeytorun opened this issue · 7 comments
Hello once again,
I am trying to obtain a TorchScript save file from your .pth model, after successfully loading it in PyTorch 1.4.0. But when I call
"
traced_script_module = torch.jit.script(net)
traced_script_module.save("net.pt")
"
I receive this error from the cpm_vgg16 network code:
"
RuntimeError:
Tried to access nonexistent attribute or method 'stages' of type 'Tuple[str, int, List[int], List[bool], int, int, List[bool]]'.:
File "/home/kadir/SBR_landmarkdetection/pytorch_practice/lib/models/cpm_vgg16.py", line 89
feature = self.features(inputs)
xfeature = self.CPM_feature(feature)
for i in range(self.config.stages):
~~~~~~~~~~~~~~~~~~ <--- HERE
if i == 0: cpm = self.stagesi
else: cpm = self.stages[i]( torch.cat([xfeature, batch_cpms[i-1]], 1) )
"
Do you think this is a bug in torch.jit, or am I making a mistake here?
Regards, Kadir
Maybe it is not a bug; torch.jit just does not support some types of classes. The config in my code is defined as a namedtuple. You can try converting it to a dict and trying again.
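For reference, a minimal sketch of that conversion, assuming the config is a standard collections.namedtuple (the field names below are illustrative, not the actual ones in this repo):

```python
from collections import namedtuple

# Illustrative stand-in for the repo's real config namedtuple.
Config = namedtuple('Config', ['arch', 'stages', 'downsample'])
config = Config(arch='vgg16', stages=3, downsample=8)

# namedtuple -> dict, so the model no longer holds a NamedTuple attribute.
config_dict = dict(config._asdict())
print(config_dict['stages'])  # access by key instead of by attribute
```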
Hey, thanks for the fast reply.
After reading your comment, I checked the PyTorch changelog for TorchScript, and you are right: NamedTuple support was only added in the latest master branch.
I also looked into your config code to convert the namedtuple to a dict, and found that you load the config in cpm_vgg16.py with `self.config = deepcopy(config)`.
In the Python file where deepcopy() resides, I noticed a function `_deepcopy_dict(x, memo, deepcopy=deepcopy)`. Is that function for converting the config from a namedtuple to a dict? It doesn't look like it is used anywhere.
If not, could you shed some more light on the matter? I really like your model's performance, but due to some restrictions my PyTorch version can be at most 1.2.
Regards
deepcopy is copy.deepcopy, where copy is a standard Python module; you can use it with import copy.
It does not convert a tuple to a dict; it just creates a copy of the input instance. I use it to avoid changing the input config.
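A quick illustration of that point, with a hypothetical namedtuple standing in for the real config:

```python
import copy
from collections import namedtuple

Config = namedtuple('Config', ['stages'])  # hypothetical stand-in
config = Config(stages=3)

config_copy = copy.deepcopy(config)
print(type(config_copy).__name__)  # 'Config' -- still a namedtuple, not a dict
print(config_copy == config)       # True -- same contents, independent copy
```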
Hey,
I will ask another question here; I hope that's not a problem. I don't want to open another issue and spam your repository.
I am having difficulty understanding what the input size of your network is. I can't see any resizing in the code; when I print the input shape, it is 1x3x256x256, but that size appears nowhere in the model code.
Also, when I feed the model a tensor of a different size, such as 1x3x224x224, there is no error and the output size is the same as before (the same happens with 1x3x512x512).
Could you shed some light on the matter?
Regards
Hi @kadirbeytorun , please see the code here: https://github.com/D-X-Y/landmark-detection/blob/master/SBR/exps/basic_main.py#L57
The size is defined by crop_height and crop_width.
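In other words, the resizing happens in the data pipeline, not inside the model. A minimal sketch of that pattern, using generic torchvision transforms rather than this repo's own preprocessing code:

```python
from PIL import Image
import torchvision.transforms as T

crop_height, crop_width = 256, 256  # the experiment's default crop size

# The transform stage fixes the spatial size before inference;
# the network itself never resizes anything.
preprocess = T.Compose([
    T.Resize((crop_height, crop_width)),
    T.ToTensor(),
])

img = Image.open('face.jpg').convert('RGB')  # hypothetical input image
inputs = preprocess(img).unsqueeze(0)        # shape: 1 x 3 x 256 x 256
```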
It appears the default crop size is 256, hence why I got 3x256x256 when printing .shape. So the network doesn't have a fixed input size and will work with whatever image size we give it.
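That matches how fully-convolutional networks behave; here is a tiny sketch (not the actual SBR model) showing that a model with no fixed-size linear layer accepts any spatial size:

```python
import torch
import torch.nn as nn

# Toy fully-convolutional net: no nn.Linear, so no fixed input size.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)

for size in (224, 256, 512):
    out = net(torch.randn(1, 3, size, size))
    print(size, tuple(out.shape))  # runs without error for every size
```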
Thank you for your time and knowledge, really appreciate it.
You are welcome~