how to get inference runtime
JeongJaecheol opened this issue · 1 comment
Hi,
I measured the runtime using code like this:
```python
import time

import torch
import torch.nn as nn

def runtime_fe():
    net = feature_extraction()
    net.eval()
    test_in = torch.randn(1, 3, 544, 960)  # Variable is deprecated; plain tensors work
    if torch.cuda.is_available():
        net = nn.DataParallel(net)
        net.cuda()
        test_in = test_in.cuda()  # .cuda() is not in-place; the result must be reassigned
    with torch.no_grad():
        torch.cuda.synchronize()  # let pending kernels finish before starting the clock
        start_time = time.time()
        result = net(test_in)
        torch.cuda.synchronize()  # wait for the async forward pass to complete
        end_time = time.time()
    return end_time - start_time

checked_runtime = 0
for i in range(100):
    runtime = runtime_fe()
    if i != 0:  # discard the first iteration (warm-up / CUDA initialization)
        checked_runtime += runtime
print('feature_extraction module runtime: %.5f' % (checked_runtime / 99))
```
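For GPU timing, `torch.cuda.Event` is generally more reliable than `time.time()`, since it measures on the device and avoids host-side clock skew. Below is a hedged sketch of that approach (the helper name `time_forward` and the warm-up/iteration counts are my own choices, not from the repo); it falls back to wall-clock timing on CPU:

```python
import time

import torch

def time_forward(net, x, iters=100, warmup=10):
    """Return the average forward-pass time in milliseconds."""
    net.eval()
    with torch.no_grad():
        for _ in range(warmup):  # warm-up runs: CUDA init, cuDNN autotuning, caches
            net(x)
        if x.is_cuda:
            start = torch.cuda.Event(enable_timing=True)
            end = torch.cuda.Event(enable_timing=True)
            torch.cuda.synchronize()
            start.record()
            for _ in range(iters):
                net(x)
            end.record()
            torch.cuda.synchronize()           # wait until the end event is reached
            return start.elapsed_time(end) / iters  # elapsed_time is already in ms
        t0 = time.time()
        for _ in range(iters):
            net(x)
        return (time.time() - t0) / iters * 1000.0
```

Averaging over many iterations inside the timed region, rather than timing single calls, also reduces the per-call measurement overhead.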
With this I get 40 ms, but the value reported in your paper is 54 ms.
Can you tell me how you measured the inference runtime?
We ran the model on a single NVIDIA Titan Xp GPU to get the inference runtime. Inference time will depend on the type of GPU you use.
Our runtime measurement script was similar to the one shared in this post. You can also check the PyTorch profiler.
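As a sketch of the profiler route mentioned above, `torch.profiler` can break the forward pass down per operator; the tiny `nn.Sequential` stand-in below is my own placeholder, not the repo's `feature_extraction` module:

```python
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

# Placeholder model standing in for feature_extraction (assumption)
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
net.eval()
x = torch.randn(1, 3, 64, 64)

# Add ProfilerActivity.CUDA to the list when profiling on a GPU
with torch.no_grad(), profile(activities=[ProfilerActivity.CPU]) as prof:
    net(x)

# Per-operator table, sorted by total CPU time
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

The table attributes time to individual ops, which helps explain discrepancies between timing scripts that a single end-to-end number cannot.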
I will try to find the exact script we used and upload it here.
Thanks !!