Inference time while testing
cheolhwanyoo opened this issue · 5 comments
Hello.
Thank you for sharing your code and paper.
Can I ask how you measured the inference time of the other algorithms in your paper?
Thank you
Additional questions.
I am trying to reproduce the inference time of your model in your paper.
Compared to the 124 fps reported in your paper, I get close to 1000 fps in my environment.
I tested on Windows with a GTX Titan Xp + TensorFlow 1.13 + CUDA 10.
Can you explain how you measured the inference time, or share the code you used?
Thank you
Set the batch size to 1 and change the test loop as follows:
```python
import time

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    saver.restore(sess, '../../model/crossInfoNet_NYU.ckpt')

    loopv = test_num // batch_size
    other = test_data[loopv * batch_size:]  # leftover frames that do not fill a full batch
    pred_norm = []

    a = time.time()
    for i in xrange(loopv):
        start = i * batch_size
        end = (i + 1) * batch_size
        feed_dict = {inputs: test_data[start:end]}
        [pred_] = sess.run([pred_out], feed_dict=feed_dict)
        pred_norm.append(pred_)
    b = time.time()
    # total time for loopv batches; fps = loopv * batch_size / (b - a)
    print(b - a)
```
The 124 fps number is from an older version before the rebuttal. I have since reduced some parameters, so the inference speed should be faster. In practice, for a real-time application it works well as long as inference runs above 60 fps, because the input camera usually has a fixed frame rate such as 60 or 30 fps.
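Not part of the released code, but for anyone comparing fps numbers: a minimal sketch of per-frame timing with batch size 1, assuming a live `sess`, the placeholder `inputs`, the output tensor `pred_out`, and a preloaded `test_data` array (names taken from the snippet above; the `warmup` parameter is my own addition). A few warm-up runs are excluded so graph/CUDA initialization does not inflate the average.

```python
import time

def measure_fps(sess, inputs, pred_out, test_data, warmup=10):
    # Warm-up: the first few sess.run calls include one-time setup cost.
    for i in range(warmup):
        sess.run(pred_out, feed_dict={inputs: test_data[i:i + 1]})

    # Time every frame individually with batch size 1.
    n = len(test_data)
    start = time.time()
    for i in range(n):
        sess.run(pred_out, feed_dict={inputs: test_data[i:i + 1]})
    elapsed = time.time() - start

    return n / elapsed  # frames per second

# Example: print('%.1f fps' % measure_fps(sess, inputs, pred_out, test_data))
```

Differences in warm-up handling, batch size, and whether data loading is included in the timed loop can easily account for gaps like 124 vs 200 vs 1000 fps.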
Thanks for the reply!
With batch size 1, running test_nyu_cross.py in my environment, I measured about 200 fps.
Do you mean that the released code is a more up-to-date version than the one in the paper?
It may be that I forgot to update the value in the final version of the paper.
The released code corresponds to the paper.