huang-xx/STGAT

Not able to reproduce the numbers on the ETH dataset according to the params given in issue#1

Closed · 1 comment

Given the hyperparameter settings for the 8-timestep and 12-timestep models in this issue, I don't get the exact numbers reported in the paper. Specifically, these are the test-set results I get:

| Setting | ADE (mine / paper) | FDE (mine / paper) | Epoch (mine / paper) |
| --- | --- | --- | --- |
| pred len 8, ETH | 0.57 / 0.56 | 1.08 / 1.10 | 356 / 256 |
| pred len 12, ETH | 0.74 / 0.65 | 1.18 / 1.12 | 62 / 54 |

These numbers, especially for the 12-step forecaster, are noticeably off. I have tried training for longer, but the errors stop improving after a point. Is the evaluation protocol in the evaluation script correct? If so, can you check what the validation ADE and FDE values were during training at the epochs where the test-set numbers match the paper? In any case, it would be helpful if you could release the pretrained models for all five datasets reported in the paper.
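For reference, this is how I understand ADE/FDE to be computed in SGAN-style evaluation code, which STGAT follows. The function name and tensor layout below are my own assumptions rather than the repository's actual implementation, and the paper's numbers additionally take the minimum over 20 sampled trajectories per scene:

```python
import torch

def ade_fde(pred_traj, gt_traj):
    """ADE/FDE for one sampled prediction.

    Assumes pred_traj and gt_traj are tensors of shape
    (pred_len, num_peds, 2) holding absolute (x, y) coordinates.
    """
    # Per-timestep Euclidean distance between prediction and ground truth
    dist = torch.norm(pred_traj - gt_traj, dim=2)  # (pred_len, num_peds)
    ade = dist.mean()        # averaged over all timesteps and pedestrians
    fde = dist[-1].mean()    # error at the final predicted timestep only
    return ade.item(), fde.item()
```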

Thanks.

@tarashakhurana Here is the pre-trained model for the ETH dataset.

Please set the seed in evaluate_model.py to 86 when the prediction length is 12 time-steps.
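For anyone reproducing this, a minimal sketch of fixing the seed near the top of evaluate_model.py, assuming the script uses PyTorch and NumPy; the exact placement and any existing seeding code in the repository may differ:

```python
import random
import numpy as np
import torch

SEED = 86  # value suggested above for pred_len = 12 on ETH

random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)  # no-op if CUDA is unavailable
```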

ETH pre-trained model.zip

As far as I remember, only the ETH dataset requires separately tuned hyperparameters; the results on the other datasets should be reproducible with the default settings. If you still have problems, I will upload the other models after the quarantine.