jiawen-zhu/ViPT

Query about DepthTrack Evaluation

Closed this issue · 12 comments

Thanks for your excellent work. To evaluate on DepthTrack, I followed your instructions and put the DepthTrack test set into './Depthtrack_workspace/', naming the folder 'sequences'. However, when I run the bash file, vot-toolkit starts downloading the VOT-RGBD 2019 dataset for me. The DepthTrack test set has only 50 videos, while VOT-RGBD 2019 has 80. Is it correct for vot-toolkit to download VOT-RGBD 2019 automatically, or what is the expected behavior for a correct evaluation? Also, I can run VOT-RGBD 2022 successfully, but it takes more than 24 hours on one V100. Is that expected? Thanks a lot.

Hi! I don't think it's right for the toolkit to download VOT-RGBD 2019 automatically while evaluating on DepthTrack. Is the bash file you ran eval_rgbd.sh?
By the way, I'm having trouble getting the RGBD2022 dataset. Could you tell me the correct stack_name to use when running 'vot initialize <stack_name>'?

I just ran eval_rgbd.sh exactly as the author provided, without any modification, and I can evaluate on VOT-RGBD 2022. Perhaps you could share how you evaluate on DepthTrack.
For the VOT-RGBD 2022 evaluation, I also just followed the author's instructions. I don't think the author runs 'vot initialize <stack_name>' in his bash file, and the stack name is vot2022/rgbd. I hope this helps.
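In case it helps, this is roughly how a VOT2022-RGBD workspace can be set up by hand. The stack name is the one mentioned above; the folder name is just an example, and the exact CLI options may differ slightly between vot-toolkit versions:

```bash
# Sketch: create a fresh vot-toolkit workspace for the vot2022/rgbd stack.
mkdir -p ./vot2022_rgbd_workspace
cd ./vot2022_rgbd_workspace
vot initialize vot2022/rgbd --workspace ./   # sets up the workspace and fetches the dataset
```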

Thanks a lot. I'm downloading the RGBD2022 dataset now! Is RGBD2022 the same as RGBD2019 (CDTB)?
For evaluating DepthTrack, I also followed the author's instructions, but I ran 'vot evaluate vipt_deep' only after the DepthTrack test dataset was already downloaded and renamed to 'sequences'. Maybe you can try downloading the test dataset before evaluating, so the download step is skipped.
I hope this helps!
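Roughly, my DepthTrack run looks like this. This is just a sketch of the flow described above; paths are examples, and the tracker entry name vipt_deep is the one used in the commands elsewhere in this thread:

```bash
# Sketch of the DepthTrack evaluation flow with pre-downloaded data.
cd ./Depthtrack_workspace
ls sequences | head                      # the 50 DepthTrack test sequences should already be here
vot evaluate --workspace ./ vipt_deep    # with the data in place, the download step is skipped
```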

I don't know whether RGBD2022 is the same as RGBD2019, but the description.json files of the two test sets are different. For DepthTrack, I also copied the downloaded DepthTrack test set (just 50 videos, with names similar to the videos in VOT-RGBD 2019) and renamed the folder to 'sequences' before evaluating. Also, when you run the RGBD evaluation, I would appreciate it very much if you could share the running time.

I just evaluated DepthTrack on my laptop (3060 laptop); it took about 1.5 h. Training time is about 13 h on a Titan Xp (batch size 32, 30 epochs). I hope this helps.
Running the command 'vot evaluate --workspace ./ vipt_deep' automatically downloads the VOT-RGBD 2019 dataset. I've also realized that VOT-RGBD 2022 (127 sequences) is derived from the DepthTrack test set (50 sequences) and VOT-RGBD 2019 (80 sequences). Does the dataset you evaluated for RGBD2022 have 127 sequences? If so, would you mind sharing the RGBD2022 dataset with me? I'm stuck downloading it because of a low download speed.

Yeah, thanks for your reply. You mentioned that running 'vot evaluate --workspace ./ vipt_deep' downloads VOT-RGBD 2019, so how do you evaluate DepthTrack? Do you just let it download the whole VOT-RGBD 2019, or are there other steps? If you could share all the commands you use to evaluate DepthTrack, I would appreciate it very much.
As for the RGBD2022 dataset, I have the whole thing, but I'm not sure how to send it to you. We could communicate through e-mail; my address is honglyhly@gmail.com. Feel free to contact me, and if you have a WeChat account, you can email me with it.

Hi, "vot toolkit starts to download the VOT-RGBD 2019 dataset for me"=>This is because the vot-toolkit cann't find the sequences list in your workspace, you can create a list.txt in the sequences dir. It looks like you are not familiar enough with the use of vot-toolkit, we recommend to refer to the official tutorial of VOT.
“I can run VOT RGBD2022 successfully, but it seems to take more than 24 hours on one V100 for me. ”=>The VOT22 adopts multi-start testing, so the test is a bit time consuming. We have not tested it on V100, we tested VOT22-RGBD on a single 2080ti which takes about 10 hours, hope it can give you a reference.
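For example, a minimal way to create that list (assuming the 50 DepthTrack test sequences are already unpacked under sequences/, one folder per sequence) is something like:

```bash
# Sketch: write sequences/list.txt so the toolkit uses the local DepthTrack
# test set instead of downloading a dataset (one sequence name per line).
cd ./Depthtrack_workspace/sequences
ls -d */ | sed 's#/$##' > list.txt
wc -l list.txt   # should report 50 for the DepthTrack test set
```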

Thanks a lot for your response. That works for me.

Hi, when I run 'vot evaluate --workspace ./ vipt_deep', I get the error 'vot.workspace.WorkspaceException: Experiment stack does not exist', even though I have already installed vot-toolkit. Could you please offer some help? Thanks a lot.
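In case it helps narrow this down, these are the generic checks I tried (just a sketch; outputs will differ per machine):

```bash
# Sanity checks for the 'Experiment stack does not exist' error.
cat ./config.yaml      # the workspace config should name a stack that the installed vot-toolkit actually provides
pip show vot-toolkit   # confirm which vot-toolkit version is installed (older versions may not ship the newer stacks)
```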

Could you please tell me the [stack_name] to use when evaluating on the DepthTrack dataset? Thank you so much!!

I did not modify the stack file and just followed the author's settings.

Hi, have you solved the problem? I also ran into the error 'vot.workspace.WorkspaceException: Experiment stack does not exist'.