Open3DA/LL3DA

Discrepancy Between Reproduced Model Results and Paper Findings

heyucchen opened this issue · 3 comments

I attempted to reproduce the results of LL3DA as described in the README. However, after running the experiments, I noticed a slight deviation between my results and those reported in the paper. I followed the instructions meticulously and ensured that all parameters and settings were consistent with the documentation. Despite this, the performance of the reproduced model falls short of the reported metrics. Were any hyperparameters changed that are not mentioned in the paper or README? I would be grateful if your group could provide the exact hyperparameters or share the pre-trained weights.

This is the result on the ScanRefer dataset reported in the paper after fine-tuning:
[Screenshot 2024-04-16 015702]
This is what I got:
[Screenshot 2024-04-16 015637]

We have already uploaded the pre-trained weights of the 3D generalist checkpoint at https://huggingface.co/CH3COOK/LL3DA-weight-release/tree/main.

Because of the randomness in point cloud down-sampling (data pre-processing), the overall performance may vary slightly even with the same pre-trained checkpoint. This is also discussed in ch3cook-fdu/Vote2Cap-DETR#12 and https://github.com/facebookresearch/3detr?tab=readme-ov-file#training.
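To illustrate the point, here is a minimal sketch of random point cloud down-sampling. The function name and parameters are illustrative, not LL3DA's actual pre-processing code; the idea is simply that un-seeded sub-sampling selects a different subset of points on every run, so downstream metrics can drift slightly between evaluations even with identical weights.

```python
import numpy as np

def downsample_point_cloud(points, num_samples, seed=None):
    """Randomly sub-sample a point cloud to `num_samples` points.

    Illustrative sketch only, not the repository's actual pipeline.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=num_samples, replace=False)
    return points[idx]

# A toy cloud of 10,000 xyz points; real ScanNet scenes are much larger.
cloud = np.random.default_rng(0).random((10000, 3))

# Two un-seeded calls almost certainly pick different subsets, which is
# why evaluation numbers can vary a bit from run to run.
a = downsample_point_cloud(cloud, 4096)
b = downsample_point_cloud(cloud, 4096)

# Fixing the seed makes the pre-processing step deterministic.
c = downsample_point_cloud(cloud, 4096, seed=42)
d = downsample_point_cloud(cloud, 4096, seed=42)
```

Seeding every source of randomness in the pre-processing scripts would make runs reproducible, at the cost of evaluating on one fixed sub-sampling rather than averaging over several.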

I see. Thank you for your explanation!