Performance issues in training_api/research/ (by P3)
DLPerf opened this issue · 2 comments
Hello! I've found a performance issue in your program: tf.Session being defined repeatedly leads to incremental overhead. You can make your program more efficient by fixing this bug. Here is the Stack Overflow post to support it.
Below is a detailed description of the places where tf.Session is defined repeatedly (a minimal illustrative sketch follows the list):
- In object_detection/eval_util.py: sess = tf.Session(master, graph=tf.get_default_graph()) (line 273) is defined in the function _run_checkpoint_once (line 211), which is repeatedly called in the loop while True: (line 431).
- In slim/datasets/download_and_convert_cifar10.py: with tf.Session('') as sess: (line 91) is defined in the function _add_to_tfrecord (line 64), which is repeatedly called in the loop for i in range(_NUM_TRAIN_FILES): (line 184).
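To make the pattern concrete, here is a minimal, hypothetical sketch (illustrative names only, not the repository code) of a session being created inside a function that a loop calls repeatedly, written against the TF1-style API used in the files above:

```python
import tensorflow as tf  # TF1-style API, as in the files referenced above

# Build the graph once.
x = tf.placeholder(tf.float32)
y = x * 2.0

def _process_once(value):
    # Hypothetical helper mirroring the reported pattern:
    # a new tf.Session is opened on every call.
    with tf.Session('') as sess:
        return sess.run(y, feed_dict={x: value})

for i in range(5):  # stands in for the `while True:` / `range(_NUM_TRAIN_FILES)` loops
    _process_once(float(i))  # each iteration pays the full session start-up cost
```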
tf.Session being defined repeatedly could lead to incremental overhead. If you define tf.Session outside the loop and pass it into the loop as a parameter, your program would be much more efficient.
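As a hypothetical sketch of that refactor (again with illustrative names, not a patch for the actual files), the session is created once outside the loop and passed to the helper as a parameter:

```python
import tensorflow as tf  # TF1-style API, as in the files referenced above

# Build the graph once.
x = tf.placeholder(tf.float32)
y = x * 2.0

def _process_once(sess, value):
    # The session is received as a parameter instead of being created here.
    return sess.run(y, feed_dict={x: value})

with tf.Session('') as sess:  # created once, outside the loop
    for i in range(5):
        _process_once(sess, float(i))  # reuses the same session on every iteration
```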
Looking forward to your reply. Btw, I would be glad to create a PR to fix it if you are too busy.
Hello, I'm looking forward to your reply~
Hello, sorry for the late reply.
We've updated our GUI with functional and visual changes, but most importantly we are now using the TensorFlow Object Detection API v2. We base our training and evaluation workflow on the functionality provided by the TF2 Object Detection API, which contains various enhancements.