zhyever/Monocular-Depth-Estimation-Toolbox

Preparation of NYU Depth V2

seb-le opened this issue · 7 comments

Hi. First of all, thank you for sharing your nice work.

I ran into an issue while organizing the NYU Depth V2 dataset.

As described in dataset_prepare.md, I tried running the following:

$ git clone https://github.com/cleinc/bts.git
$ cd bts
$ python utils/download_from_gdrive.py 1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP sync.zip
$ unzip sync.zip
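For reference, after unzipping, a quick way to count the training pairs (the per-scene rgb_*.jpg / sync_depth_*.png naming here is an assumption based on the BTS layout):

# Hypothetical count of the unzipped training pairs; the sync/ root and
# the rgb_*.jpg / sync_depth_*.png names are assumed from the BTS layout.
from pathlib import Path

root = Path('sync')
rgb = sorted(root.glob('*/rgb_*.jpg'))
depth = sorted(root.glob('*/sync_depth_*.png'))
print(len(rgb), len(depth))  # the two counts should match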

Next, which file should I download from the link to get the standard test set you mentioned?


Also, where can I get nyu_train.txt and nyu_test.txt?

Thanks.

Thanks for your attention to my work. Please check the splits folder in Monocular-Depth-Estimation-Toolbox. I provide the split files there, following previous depth estimation work.
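For reference, each line in these split files follows the BTS-style layout of RGB path, depth path, and focal length; a minimal parsing sketch (the exact column layout is an assumption, so check a line of nyu_train.txt first):

# Minimal sketch for reading a split file; the three-column
# "rgb_path depth_path focal" layout is assumed from BTS.
def read_split(path):
    samples = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                rgb, depth, focal = parts[0], parts[1], float(parts[2])
                samples.append((rgb, depth, focal))
    return samples

print(len(read_split('splits/nyu_train.txt')))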

I remember the test set is already included in sync.zip.

Thank you for your reply; I found the split txt files.

However, I think the test set is not included in sync.zip; instead, it comes from the link and is extracted by the BTS source code as follows:

$ cd ~/workspace/bts/utils
### Get official NYU Depth V2 split file
$ wget http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat
### Convert mat file to image files
$ python extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ../../dataset/nyu_depth_v2/official_splits/
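Conceptually, the conversion unpacks the labeled .mat (a MATLAB v7.3 / HDF5 file) into per-frame images. A minimal sketch of the idea, not the BTS script itself; the output names and the 16-bit millimeter PNG convention are assumptions borrowed from BTS:

# Sketch: unpack nyu_depth_v2_labeled.mat with h5py.
# 'images' and 'depths' are the dataset keys in the official release;
# the output file names are hypothetical.
import h5py
import numpy as np
from PIL import Image

with h5py.File('nyu_depth_v2_labeled.mat', 'r') as f:
    images, depths = f['images'], f['depths']
    for i in range(images.shape[0]):
        rgb = np.transpose(images[i], (2, 1, 0))       # -> (480, 640, 3) uint8
        depth_m = np.transpose(depths[i], (1, 0))      # -> (480, 640), meters
        depth_mm = (depth_m * 1000).astype(np.uint16)  # assumed BTS convention: mm as 16-bit PNG
        Image.fromarray(rgb).save(f'rgb_{i:05d}.jpg')
        Image.fromarray(depth_mm).save(f'sync_depth_{i:05d}.png')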

Then, we can move the extracted test set into the dataset structure this repo expects.
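Something like the following, where the destination path is only my assumption about the layout in dataset_prepare.md:

# Hypothetical move of the extracted official test split into the
# toolbox's dataset tree; the dst path is an assumption.
import shutil
from pathlib import Path

src = Path('../../dataset/nyu_depth_v2/official_splits/test')
dst = Path('data/nyu/official_splits/test')
dst.parent.mkdir(parents=True, exist_ok=True)
shutil.move(str(src), str(dst))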

Meanwhile, why is the number of NYU Depth V2 training pairs written as 50k in this repo and in the papers?

In nyu_train.txt, there are only 24,225 pairs for training.

Is nyu_train.txt correct?
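A quick check of the count (the path relative to the repo root is an assumption):

# Count non-empty lines in the training split; prints 24225
# for the provided nyu_train.txt.
with open('splits/nyu_train.txt') as f:
    print(sum(1 for line in f if line.strip()))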

Thanks,

Thanks a lot for your report. That's a typo in the paper. We use the provided split file to train our models, following previous work. I will zip my NYU dataset and upload it to a drive in the future, so that no extra effort is needed to download or process the dataset with code from other repos.

Thank you for your comments.

I have learned a lot from your work and the paper.

Thanks.

It's nice to hear that. I hope they are helpful to you. Feel free to re-open this issue for discussions.

I am sorry, but is the test dataset available now?