HKUST-Aerial-Robotics/DSP

How can we get the test file?

wuhaoran111 opened this issue · 7 comments

I did not find the file to process the test file for argoverse. Do you have a schedule for publishing it?

@wuhaoran111

the file to process the test file for argoverse

Do you mean the code for generating the h5 file for submission? If so, I will check it and release it in the next few days.

Yeah, I found that the code for generating the h5 file is missing. I solved that part yesterday, but the performance I get in the Argoverse competition is lower than the result in your paper, and I want to know whether my code is wrong. I simply appended the following to the end of your file 'visualize.py':

    import torch
    from tqdm import tqdm
    from argoverse.evaluation.competition_util import generate_forecasting_h5

    preds = {}
    pred_probs = {}
    cities = {}
    for data in tqdm(dl_test):
        with torch.no_grad():
            out = net(data)
            post_out = net.post_process(out)
            # Rotate/translate predictions back to the global frame.
            # Note: only ROT[0] / ORIG[0] are used, so this assumes batch size 1.
            results = (torch.matmul(post_out["traj_pred"][:, :, :, :2].detach().cpu(),
                                    data["ROT"][0].T) + data["ORIG"][0]).numpy()
            probs = post_out["prob_pred"].detach().cpu().numpy()
        for i, (argo_idx, pred_traj, pred_prob) in enumerate(zip(data["SEQ_ID"], results, probs)):
            preds[argo_idx] = pred_traj.squeeze()
            pred_probs[argo_idx] = pred_prob.squeeze()
            cities[argo_idx] = data["CITY_NAME"][i]
    generate_forecasting_h5(preds, "submit.h5", probabilities=pred_probs)

@wuhaoran111 Yes, the released weight is not identical to the one we used on the benchmark. Reproducing that result takes more training epochs with a smaller learning rate. However, the performance gap is quite small (brier-minFDE (K=6): 1.864; difference ~0.3%).

Btw, what number did you get? The code for submission looks good.

I get:
brier-minFDE (K=6): 1.916309053116762
minFDE (K=6): 1.2634568230719045

I think that is a relatively big gap from your result.

@wuhaoran111 Thanks for the information. There must be some error (maybe the wrong model was uploaded); I will double-check it ASAP.

OK, thanks for your reply. I will wait for it.

@wuhaoran111 Sorry for the late reply.
There are indeed some bugs in the released code. We have updated the preprocessing parameters and the pre-trained model. You can pull the code and download the new model now (remember to run the preprocessing code).
The evaluation results of the updated model are
"brier-minFDE (K=6)": 1.8715802577965401, "minFDE (K=6)": 1.2237778395755803, "MR (K=6)": 0.13253906299988483

Moreover, you can get better results by modifying the preprocessing code:

  • In argo_preprocess.py L334, query the dilated 'roi' layer instead of 'driveable_area':
    idcs_da = self.argo_map.get_raster_layer_points_boolean(pts, city_name, 'driveable_area')
    -> idcs_da = self.argo_map.get_raster_layer_points_boolean(pts, city_name, 'roi')
  • In (argoverse-api) map_api.py L38, reduce the ROI dilation from 5 m to 1 m:
    ROI_ISOCONTOUR = 5.0 -> ROI_ISOCONTOUR = 1.0

The evaluation results should be:
"brier-minFDE (K=6)": 1.8649483657938417, "minFDE (K=6)": 1.2174491957001548, "MR (K=6)": 0.13220633965934248

Since the original drivable area provided by argoverse-api is not sufficient (many GT trajectories fall outside the drivable area), we dilate the DA region by 1 meter using the Argoverse API's ROI functionality.
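The dilation idea itself can be illustrated on a boolean raster. A minimal sketch (not the argoverse-api implementation; the `resolution_m` parameter and the toy mask are assumptions), using scipy's binary dilation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_mask(da_mask, meters=1.0, resolution_m=1.0):
    """Dilate a boolean drivable-area raster by roughly `meters`.
    Each dilation iteration grows the region by one pixel (= resolution_m)."""
    iterations = max(1, round(meters / resolution_m))
    return binary_dilation(da_mask, iterations=iterations)

# Toy 5x5 raster with a single drivable cell in the center
da = np.zeros((5, 5), dtype=bool)
da[2, 2] = True
roi = dilate_mask(da, meters=1.0)
print(int(roi.sum()))  # 5: the center cell plus its 4-connected neighbors
```

Ground-truth endpoints that sit just off the original mask would then fall inside the dilated region, which is the failure mode the thread describes.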