Doubts in the evaluation and optimization section
12num opened this issue · 2 comments
Hello, sorry to interrupt with a question. I reproduced your results experimentally, using a train-then-evaluate approach, but I found that the results after running do_eval (FDE 1.0513439117392331, MR 0.09578942034860154) are far better than those reported during training (FDE: 3.1969465177600322, MR(2m, 4m, 6m): (0.49478493944897106, 0.23618785871750297, 0.1306969923570714)). I also noticed that do_eval runs with recover --model. Is this the optimized model? If a new model is generated, I don't see it being saved anywhere.
Also, the testing part doesn't seem to run after optimization. Could you clarify whether the optimization and testing steps are run together? (I only reproduced the results and haven't studied your code, so please forgive me if I've misunderstood.) I would be very grateful if you could answer my question.
The result produced during training is a single-trajectory prediction, while the result produced after evaluation is a 6-trajectory prediction (evaluated by the Argoverse toolkit), so the two sets of metrics are not directly comparable.
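To illustrate why the two numbers differ so much: with K candidate trajectories, the standard Argoverse-style metrics take the minimum final-displacement error (minFDE) over all K candidates, and a sample counts as a miss only if even the best candidate ends more than a threshold away from the ground truth. A minimal sketch (this is an illustrative re-implementation, not the repository's or the Argoverse toolkit's actual code; the function name, shapes, and 2.0 m threshold are assumptions):

```python
import numpy as np

def min_fde_and_miss(candidates, gt, miss_threshold=2.0):
    """minFDE and miss flag over K candidate trajectories.

    candidates: (K, T, 2) predicted trajectories
    gt:         (T, 2) ground-truth trajectory
    """
    # Final-displacement error of each candidate's endpoint vs. the GT endpoint
    fde_per_candidate = np.linalg.norm(candidates[:, -1] - gt[-1], axis=-1)
    min_fde = fde_per_candidate.min()
    missed = bool(min_fde > miss_threshold)  # miss only if ALL candidates are off
    return min_fde, missed

# Toy example: single-trajectory (K=1) vs. 6-trajectory (K=6) evaluation
gt = np.stack([np.linspace(0.0, 30.0, 30), np.zeros(30)], axis=1)
single = gt[None] + np.array([3.0, 0.0])            # one prediction, 3 m off
six = np.concatenate([single,                        # same bad guess ...
                      gt[None] + np.array([0.5, 0.0]),  # ... plus a good one
                      np.repeat(gt[None] + 5.0, 4, axis=0)])

print(min_fde_and_miss(single, gt))  # (3.0, True)  -> large FDE, a miss
print(min_fde_and_miss(six, gt))     # (0.5, False) -> small FDE, not a miss
```

Because only one of the six candidates needs to be close, minFDE and miss rate over K=6 are naturally much lower than the single-trajectory numbers logged during training.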
Thanks for your reply; it is very helpful to me.