HarshayuGirase/Human-Path-Prediction

SDD dataset

Opened this issue · 8 comments

Thank you very much for your work. I recently ran into some problems reproducing the code for PECNet. Converting the original SDD dataset with the dataset-transformation code provided by the authors yields worse results (ADE/FDE = 12.03/21.94) than the preprocessed data shipped with the source code. Looking forward to your response!

Hi, I reproduced it and got:
Test Best ADE Loss So Far (N = 20) 0.10219505759838646
Test Best Min FDE (N = 20) 0.16761741407297298
Is it because results on the SDD dataset need to be multiplied by 100?

I got:
Test Best ADE Loss So Far (N = 20) 12.607165760561172
Test Best Min FDE (N = 20) 20.341844265566213
After I reprocessed the data, my results were almost the same as yours. Did you solve this problem?

Hi. Where can I find the data needed to reproduce this result?

```python
'''Please specify the parent directory of the dataset. In our case data was stored in:
root_path/trajnet_image/train/scene_name.txt
root_path/trajnet_image/test/scene_name.txt
'''
```
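For anyone setting this up, here is a minimal sketch of pointing at that layout; root_path is a hypothetical location you choose yourself, and only the directory pattern comes from the docstring above:

```python
import glob

root_path = '/data/sdd'  # hypothetical; set to your own parent directory
train_scenes = glob.glob(f'{root_path}/trajnet_image/train/*.txt')
test_scenes = glob.glob(f'{root_path}/trajnet_image/test/*.txt')
print(len(train_scenes), len(test_scenes))  # sanity check: both should be non-zero
```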

1: I also regenerated a new pkl file from the original SDD data. Considering only pedestrians, the ADE/FDE is 9.01/14.508. Considering all agent types in the scene (pedestrians, bicyclists, etc.), the ADE/FDE is 12.60/20.65.

2: Using the provided train_trajnet.pkl file together with the author's data-processing code, I found two issues. The first is in the downsample function: the author's code groups trajectories by metaId and then samples each group separately. Since different metaIds start at different frames, this scrambles the frame alignment, producing far more distinct frames than the pkl file contains (1212 -> 14514) and leaving very few agents per frame, which prevents spatial relationships from being exploited effectively. I therefore rewrote the downsample function to start from frame 0 and sample every 12 frames to the end, which matches the pkl file; see the sketch below. Second, note that the stride in the sliding_window function should be set to 20 to avoid overlapping samples.
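A minimal sketch of that rewrite, assuming the trajectories are in a pandas DataFrame with frame and metaId columns as in the repo's preprocessing (the column name and both function names are my assumptions, not the author's code):

```python
import pandas as pd

def downsample_global(df: pd.DataFrame, step: int = 12) -> pd.DataFrame:
    """Keep every `step`-th frame on one global grid starting at frame 0,
    so all metaIds stay frame-aligned (instead of sampling per metaId)."""
    return df[df['frame'] % step == 0]

def sliding_window(traj, window: int = 20, stride: int = 20):
    """Cut one agent's trajectory into fixed-length windows;
    stride == window means consecutive windows never overlap."""
    return [traj[i:i + window] for i in range(0, len(traj) - window + 1, stride)]
```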

3: The author claims to consider all agent types. However, after a thorough comparative analysis of the pkl file against our own generated data, we found that it contains only pedestrian data.
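A quick way to check this yourself, assuming the pkl loads as a DataFrame with an agent-type column (label is a guess at the schema; adjust it to whatever your pkl actually uses):

```python
import pandas as pd

df = pd.read_pickle('train_trajnet.pkl')
print(df['label'].unique())  # if only 'Pedestrian' appears, the pkl is pedestrian-only
```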

@Dragonbetter I got similar results to yours during training: 9.04/14.58 with only pedestrians, 12.22/20.37 with all agents, reached at around 200 epochs.

However, during testing, because the TTST trick (test-time sampling trick) is enabled, I got 7.94/11.93 with only pedestrians, which is close enough to the reported result. With all agents I got 10.68/16.59.

Without TTST the result is only average, but with it Y-net can get close to SOTA. From my reading of the code, TTST draws a large number of goal samples and then selects the final 20 via k-means. The most significant performance boost comes from this trick, and it would be great if the author could explain it in the main paper (I cannot find the supplementary material anywhere).
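For concreteness, a minimal sketch of that mechanism as I read it, not the repo's actual implementation; goal_sampler is a hypothetical callable standing in for Y-net's goal decoder:

```python
import numpy as np
from sklearn.cluster import KMeans

def ttst_select_goals(goal_sampler, n_raw=10000, n_final=20):
    """Test-time sampling trick (TTST): oversample goal candidates,
    then keep the k-means cluster centers as the final goal set."""
    raw = np.asarray(goal_sampler(n_raw))     # (n_raw, 2) candidate goal points
    km = KMeans(n_clusters=n_final, n_init=10).fit(raw)
    return km.cluster_centers_                # (n_final, 2) selected goals
```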

@centiLinda Hello, I would like to know how to train and test on the ETH/UCY datasets. They only provided Jupyter Notebook files for training and testing on the SDD dataset, without corresponding files for ETH/UCY. What should I do?
I would greatly appreciate your assistance. Thank you in advance.

You can find the ETH/UCY files in the PECNet readme: https://github.com/HarshayuGirase/Human-Path-Prediction/tree/master/PECNet

Please refrain from posting issues in my personal repo (I've deleted your issue).

Hello, may I ask how you solved the problem of ETH/UCY not having corresponding training and testing files?