Repeated inference of the model with the same input is not consistent
Hi
Thanks for this nice work. I am trying to reproduce it on my machine.
What I observed is that the model's output is not consistent when I run inference multiple times in a while loop with the same input:
```python
import torch

while True:
    image_cuda = torch.from_numpy(img).float().cuda()
    pred = 0  # reset so the print below confirms pred is overwritten each iteration
    print(pred)
    with torch.no_grad():
        pred = model(image_cuda)
        # np.save('pred.npy', pred.cpu())
    print(pred)
```
The output from the first iteration looks good, but after that, each iteration's output differs from the others even with the same input image (see the picture below).
If I kill the thread and execute the code again, the first iteration always gives the same output.
I printed the pred values and found that they do differ from the previous iteration even with the same input image and the same model.
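For reference, a quick way to quantify the drift I am seeing (a minimal sketch based on the snippet above; `first_pred` is a name I introduced just for this check):

```python
first_pred = None
for i in range(10):
    with torch.no_grad():
        pred = model(image_cuda)
    if first_pred is None:
        first_pred = pred.clone()
    else:
        # max absolute difference from the first iteration's output
        diff = (pred - first_pred).abs().max().item()
        print(f"iter {i}: allclose={torch.allclose(pred, first_pred)} max_diff={diff:.6g}")
```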
I did more tests and found that this inconsistency first appears at layer #6 of MobileNet in class `MobileNetSkipAdd(nn.Module)`; the outputs of layers #0~5 are always the same for the same input.
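In case it helps to reproduce the per-layer comparison, here is a minimal sketch using standard PyTorch forward hooks (it assumes `model` is the `MobileNetSkipAdd` instance and `image_cuda` is the input from above):

```python
def capture_layer_outputs(model, x):
    """Run one forward pass and record each child module's output."""
    outputs, hooks = {}, []
    for name, module in model.named_children():
        def hook(mod, inp, out, name=name):
            outputs[name] = out.detach().clone()
        hooks.append(module.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return outputs

run1 = capture_layer_outputs(model, image_cuda)
run2 = capture_layer_outputs(model, image_cuda)
for name in run1:
    status = "same" if torch.allclose(run1[name], run2[name]) else "DIFFERS"
    print(name, status)
```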
Is there anything I missed when using the model, or is this a bug?
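For completeness, these are the PyTorch settings I know of that can affect run-to-run consistency; I have not verified which of them apply here or whether the repo's example code already sets them:

```python
model.eval()                               # fix batchnorm/dropout to inference behavior
torch.manual_seed(0)                       # pin the RNG state (affects dropout etc.)
torch.backends.cudnn.benchmark = False     # disable cuDNN autotuning
torch.backends.cudnn.deterministic = True  # restrict cuDNN to deterministic kernels
```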