model 'loses' weights after one prediction
baldellimtt opened this issue · 3 comments
I have trained the model on my custom dataset with Colab; performance is okay, and the test on new data looks fine too.
When I run the detection part:
yolo = Create_Yolo(input_size=YOLO_INPUT_SIZE, CLASSES=TRAIN_CLASSES)
yolo.load_weights(checkpoints)  # use Keras weights

for img in os.listdir(image_path):
    yolo.predict(img)
It gives me a strange behaviour: only the first image in the for loop gives me a detection; the following ones produce no detections at all.
This only happens when I'm using a conda env with GPU; when I'm using the CPU everything works fine.
any ideas?
thanks
I have experienced the same thing, and I figured out that the batch normalization layers in TensorFlow conflict with model.predict().
Solution: instead of using model.predict(), create the data as in the training process and call model(data, training=False).
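For reference, here is a minimal sketch of that workaround (the preprocess_image helper is hypothetical and must replicate your training-time preprocessing; yolo, image_path, and YOLO_INPUT_SIZE come from the code above):

import os
import cv2
import numpy as np

def preprocess_image(path, input_size):
    # Hypothetical preprocessing: must match exactly what was done at training time.
    image = cv2.imread(path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (input_size, input_size))
    image = image.astype(np.float32) / 255.0
    return image[np.newaxis, ...]  # add a batch dimension

for name in os.listdir(image_path):
    data = preprocess_image(os.path.join(image_path, name), YOLO_INPUT_SIZE)
    # Call the model directly; training=False keeps batch norm in inference mode
    # (moving mean/variance are used instead of per-batch statistics).
    pred = yolo(data, training=False)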
Thank you a lot!
I have experienced the same thing, and I found that the batch normalization layer in TensorFlow conflicts with model.predict(). Solution: instead of using model.predict(), create the data as in the training process and use model(data, training=False).
But inference gets slow. Do you have any solutions?
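One possible mitigation (my own assumption, not something confirmed in this thread) is to wrap the direct call in tf.function so it runs as a traced graph rather than eagerly, which usually recovers most of model.predict()'s speed:

import tensorflow as tf

@tf.function
def infer(data):
    # Traced once into a graph; batch norm still runs in inference mode.
    return yolo(data, training=False)

for name in os.listdir(image_path):
    # Same hypothetical preprocess_image helper as in the sketch above.
    data = preprocess_image(os.path.join(image_path, name), YOLO_INPUT_SIZE)
    pred = infer(tf.convert_to_tensor(data))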