About reproducing the paper
YihaoX opened this issue · 2 comments
YihaoX commented
Hi DBNet team, thanks for the great paper. I have a couple of questions.
- The paper states that the NVIDIA network (3×3 convolutions in the last two layers) was used, but the network you provide in https://github.com/driving-behavior/DBNet/blob/master/models/nvidia_io.py is not the same (5×5 convolutions in the last two layers). Could you explain the reason for this difference?
- May I confirm whether you used all of the raw data from http://www.dbehavior.net/download.aspx to train the network in the paper?
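For context, the published NVIDIA self-driving architecture ends with two 3×3 convolutions after three 5×5 ones. A minimal sketch of that canonical layer schedule (the layer table below is taken from the original NVIDIA paper's description, not from `nvidia_io.py`, and the names are illustrative):

```python
# Canonical NVIDIA conv stack: three 5x5 stride-2 layers, then two 3x3 layers.
# This is the schedule the question above compares nvidia_io.py against.
NVIDIA_CONV_LAYERS = [
    # (filters, kernel_size, stride)
    (24, (5, 5), 2),
    (36, (5, 5), 2),
    (48, (5, 5), 2),
    (64, (3, 3), 1),  # per the paper, the last two layers use 3x3 kernels
    (64, (3, 3), 1),
]

def last_two_kernels(layers):
    """Return the kernel sizes of the final two conv layers."""
    return [kernel for _, kernel, _ in layers[-2:]]

print(last_two_kernels(NVIDIA_CONV_LAYERS))  # [(3, 3), (3, 3)]
```

The discrepancy raised here is that the repository's implementation appears to use (5, 5) kernels where this table has (3, 3).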
Cheers!
wangjksjtu commented
Hi, thanks for the interest in our work!
- I checked the code in `nvidia_io.py`. It is indeed inconsistent with the NVIDIA network; I suspect this happened during a re-organization of the code. I will adjust the network architecture if time permits, and any pull requests are welcome!
- The paper was trained on only part of the data; we continued collecting after the submission, and the dataset is now 10 times larger than KITTI. For the baseline performance, please refer to the leaderboard in the README. Thanks :)
YihaoX commented
Thank you :D!