Efficiency issue
LLCF opened this issue · 4 comments
LLCF commented
x_batch, y_batch = next(train_generator)
feed_dict = {img: x_batch,
             label: y_batch}
Feeding data this way is very slow.
abin24 commented
It is slow because of a memory leak caused by this line:
global_step.assign(it).eval()
That line can be deleted.
LLCF commented
I don't think so. When I comment out "global_step.assign(it).eval()", GPU-Util is still 0% most of the time. I am sure the GPU is waiting for data.
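If the GPU really is starving for data, a common workaround is to prepare batches on a background thread so the session never waits on the generator. The sketch below is generic Python, not part of this repo; `prefetch` is a hypothetical helper name, and the commented usage assumes the `train_generator`, `sess`, `img`, and `label` names from the snippet above.

```python
import queue
import threading

def prefetch(generator, buffer_size=4):
    """Yield items from `generator`, filling a bounded queue on a
    background thread so the consumer never waits for data prep."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks the end of the generator

    def worker():
        for item in generator:
            q.put(item)
        q.put(sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

# Hypothetical usage: wrap the existing batch generator once,
# then consume batches exactly as before.
# for x_batch, y_batch in prefetch(train_generator):
#     sess.run(train_op, feed_dict={img: x_batch, label: y_batch})
```

The bounded queue keeps at most `buffer_size` batches in memory, so data preparation overlaps with the GPU step without unbounded buffering.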
abin24 commented
OK, I don't know what happened. I just added sess.graph.finalize() after the line sess.run(init_op) and commented out "global_step.assign(it).eval()". Training is now 45 times faster than the original version. BTW, the original version gets about 23 times slower after roughly 1-2 hours.
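For context on why deleting that line helps: in TF1, each call to global_step.assign(it) inside the training loop constructs a new assign op, so the graph grows every iteration and session runs gradually slow down, which matches the "23 times slower after 1-2 hours" symptom. sess.graph.finalize() turns any such accidental op creation into an immediate error. Below is a toy pure-Python model of that mechanism (ToyGraph is an illustrative stand-in, not TensorFlow code):

```python
class ToyGraph:
    """Stand-in for a TF1 graph: op construction appends to a list."""
    def __init__(self):
        self.ops = []
        self._finalized = False

    def add_op(self, name):
        if self._finalized:
            raise RuntimeError("graph is finalized, cannot add op %r" % name)
        self.ops.append(name)

    def finalize(self):
        self._finalized = True

g = ToyGraph()
# Anti-pattern: building the assign op inside the training loop,
# as global_step.assign(it) does in the original code.
for it in range(1000):
    g.add_op("assign_%d" % it)   # the graph grows on every iteration
print(len(g.ops))                # 1000 ops after only 1000 steps

# With finalize(), the same mistake fails fast instead of leaking.
g.finalize()
try:
    g.add_op("one_more")
except RuntimeError as e:
    print("caught:", e)
```

The real fix is the same shape: create any assign op once, outside the loop, and finalize the graph before training so regressions like this cannot creep back in.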
duyanfang123 commented
I want to ask where the training data and test data are. This is urgent for me.