Evaluation FID in trained model doesn't match the one from the pre-trained model
arturandre opened this issue · 2 comments
Hello, thank you for making the code available.
I executed the training script:
bash scripts/train_photosketch_horse_riders.sh
and then I compared the metrics of the trained model with those of the file:
weights/photosketch_horse_riders_aug.pth
using the evaluation script:
run_metrics.py
with the command:
python run_metrics.py --models_list weights/eval_list --output metric_results.csv
I added the trained weights into the weights folder and into the eval_list file so that I could check its FID too.
I got very different FIDs for photosketch_horse_riders_aug.pth (FID ~20.13) and the newly trained one (FID ~37.07). I assumed the training script would produce a model similar to the one stored at photosketch_horse_riders_aug.pth.
Is there some other procedure to obtain a trained model with the same FID as photosketch_horse_riders_aug.pth? (That file's FID is very close to, but not quite, the one reported in the paper, FID ~19.94.)
You would want to find the best iteration recorded in ./checkpoints/<exp_name>/best_iter.txt
and use that iteration's checkpoint to compute the FID.
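If it helps, here is a minimal sketch of reading that file to recover the best iteration. The checkpoint layout below is only an assumption based on the path mentioned above (I simulate it in a temp directory so the snippet runs standalone; substitute your real experiment name for the placeholder):

```python
from pathlib import Path
import tempfile

# Simulate the ./checkpoints/<exp_name>/ layout in a temp dir so this
# snippet is self-contained; in practice you would point at the real
# directory produced by the training script.
root = Path(tempfile.mkdtemp())
ckpt_dir = root / "checkpoints" / "photosketch_horse_riders"  # placeholder exp_name
ckpt_dir.mkdir(parents=True)
(ckpt_dir / "best_iter.txt").write_text("250000\n")  # normally written by training

# Recover the best iteration recorded during training.
best_iter = int((ckpt_dir / "best_iter.txt").read_text().strip())
print(best_iter)
```

You would then add the weights saved at that iteration to weights/eval_list and re-run run_metrics.py as before.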
I see, thank you!