Use other datasets to train PENet
yuyu19970716 opened this issue · 11 comments
Hello,
I want to use the PENet network architecture for depth completion. Since I can only work with the data I have on hand, I found the HandNet dataset, which contains depth and image data collected with RealSense series depth cameras. I would especially like to know where I need to modify the code. Thank you in advance!
Looking forward to your reply!
Thanks for your interest, yuyu! I guess you could simply write your own dataloader for the HandNet dataset.
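For reference, a dataloader here mainly needs to return aligned RGB, sparse depth, and ground-truth depth tensors. Below is a minimal sketch of what a HandNet loader might look like; the directory layout, file naming, the `HandNetDataset` class, and the output keys are all hypothetical, so adapt them to how HandNet is actually stored and to the keys PENet's training code expects.

```python
# Minimal sketch of a custom dataset, assuming HandNet stores aligned
# RGB/depth pairs as image files. Paths, filenames, and keys are hypothetical.
import os
import glob
import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset

class HandNetDataset(Dataset):
    def __init__(self, root, split='train'):
        # Hypothetical layout: root/<split>/rgb/*.png and root/<split>/depth/*.png
        self.rgb_paths = sorted(glob.glob(os.path.join(root, split, 'rgb', '*.png')))
        self.depth_paths = sorted(glob.glob(os.path.join(root, split, 'depth', '*.png')))
        assert len(self.rgb_paths) == len(self.depth_paths)

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, idx):
        rgb = np.asarray(Image.open(self.rgb_paths[idx]), dtype=np.float32)
        # RealSense depth is typically stored as 16-bit millimeters; convert to meters.
        depth = np.asarray(Image.open(self.depth_paths[idx]), dtype=np.float32) / 1000.0
        # Randomly sparsify the dense depth to imitate a depth-completion input.
        mask = (np.random.rand(*depth.shape) < 0.05) & (depth > 0)
        sparse = depth * mask
        rgb = torch.from_numpy(rgb).permute(2, 0, 1)    # (3, H, W)
        sparse = torch.from_numpy(sparse).unsqueeze(0)  # (1, H, W)
        gt = torch.from_numpy(depth).unsqueeze(0)       # (1, H, W)
        return {'rgb': rgb, 'd': sparse, 'gt': gt}
```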
Thanks for your quick reply! If I write my own dataloader for HandNet, and its depth maps and RGB images are already aligned, do I still need the load_calib() function from kitti_loader? I don't think I should need the value of K. Very much looking forward to your reply!
You might need it if you want to use the proposed geometric convolutional layer; otherwise not.
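For context, the geometric convolutional layer augments features with a per-pixel 3D position map that is back-projected from depth using K, which is why the intrinsics still matter even when RGB and depth are already aligned. A rough sketch of that back-projection (not the repo's exact implementation) is:

```python
# Sketch of back-projecting a depth map to per-pixel 3D coordinates using the
# intrinsic matrix K; this is the kind of position map a geometric
# convolutional layer consumes. Not PENet's exact code.
import torch

def depth_to_xyz(depth, K):
    # depth: (1, H, W) tensor in meters; K: (3, 3) intrinsic matrix.
    _, h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = torch.meshgrid(torch.arange(h, dtype=depth.dtype),
                          torch.arange(w, dtype=depth.dtype), indexing='ij')
    z = depth[0]
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return torch.stack([x, y, z], dim=0)  # (3, H, W) position map
```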
Thank you very much for your reply!
I need to bother you with one more question: is K in the code the camera's intrinsic (internal parameter) matrix?
Looking forward to your reply, and thank you in advance!
Exactly.
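For anyone else reading: K is the standard 3×3 pinhole intrinsic matrix, so for a new dataset you just substitute that camera's focal lengths and principal point. The values below are placeholders, not NYU's actual calibration; take the real numbers from your camera or the dataset's toolbox.

```python
import numpy as np

# Placeholder focal lengths and principal point -- replace with the values
# from your own camera calibration (e.g., the NYU Depth V2 toolbox).
fx, fy = 500.0, 500.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]], dtype=np.float32)
```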
Alright, thank you very much! I trained the network today on the NYU-Depth V2 dataset, after changing K to that dataset's camera intrinsic matrix, but after 100 epochs the RMSE is still very large. My training set has 1000 samples, the validation set 200, and the test set 200. Could the RMSE reaching a few thousand be because the training set is too small?
You could refer to other works (e.g., NLSPN) for the training settings on the NYU Depth V2 dataset. After a full training procedure, ENet can reach an RMSE of around 104 mm.
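One thing worth double-checking first is whether your depth values are in meters or millimeters; a unit mismatch alone can inflate the reported RMSE into the thousands. Beyond that, the settings below are the ones commonly used for NYU Depth V2 in the depth-completion literature (the sparse-to-dense / NLSPN line of work); treat them as a starting point and verify the exact values against the NLSPN repository.

```python
# Commonly used NYU Depth V2 settings for depth completion; verify against
# the NLSPN repository before relying on them.
nyu_config = {
    'resize': (320, 240),   # downsample the original 640x480 frames
    'crop': (304, 228),     # center crop fed to the network
    'num_samples': 500,     # sparse depth points sampled from the dense GT
    'max_depth': 10.0,      # meters; indoor depth range
}
```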
Good! Thank you so much! I'm going to try what you said right away!
Hi, I'm sorry to bother you again.
I have now trained PENet on KITTI, but I still have a question: can this network also be trained on NYU Depth V2? If not, I'll change my approach. Especially looking forward to your reply.
I also converted the NYU Depth V2 dataset to the image format required by PENet. Is that OK? I ask because I ran into some other problems.
Very much looking forward to your reply!
I didn't change the format of NYU Depth V2; I used the NYU Depth V2 dataloader directly from Park's (NLSPN) repository. It works normally with a common training setting for the indoor dataset. I hope these files help.
nyu.json.txt
nyu_loader.py.txt
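As a rough sketch of how the attached files might be wired into a PENet-style training script: the class name `NYU`, the `args` fields, and the batch keys below are assumptions based on NLSPN-style loaders, so check them against the attached nyu_loader.py.

```python
# Hypothetical glue code: swapping the KITTI dataloader for the attached NYU
# loader. Class name, argument fields, and batch keys are assumptions.
from argparse import Namespace
import torch
from nyu_loader import NYU  # the attached nyu_loader.py

# The exact fields 'args' must carry are defined inside nyu_loader.py;
# these are illustrative placeholders.
args = Namespace(dir_data='nyudepthv2', split_json='nyu.json', num_sample=500)

train_set = NYU(args, 'train')
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=8, shuffle=True, num_workers=4, drop_last=True)

for batch in train_loader:
    rgb, dep, gt = batch['rgb'], batch['dep'], batch['gt']  # key names assumed
    # forward through PENet / ENet and compute the depth loss here
```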
OK! I will take a careful look at the files you've provided!
thank you very much!