uzh-rpg/rpg_e2depth

About MVSEC sequence

wl082013 opened this issue · 1 comment

Hi, nice work!

Could you also share the MVSEC sequences you used to train and test the network?
I mean the splits mentioned in the paper: roughly 8523 training samples and 1826 test samples.
Having the exact sequences would be a great help for testing on real-world data.

Another question: did you use the left or the right camera of the MVSEC dataset?

Thanks a lot.

I just added the link to the MVSEC dataset we used; we used the left DAVIS camera.
Please have a look at the website: rpg.ifi.uzh.ch/e2depth