DLR-RM/SingleViewReconstruction

Which data to use?

trilokpadhi opened this issue · 4 comments

Hey @themasterlink
Thanks for the nice repo, and sorry for not being more detailed.
I downloaded the models by running python download_models.py, and all the models are in the required folder. When I then run the predict_datapoint.py file with python predict_datapoint.py --output OUTPUT --use_pretrained_weights /home/trilok/SVR-final/SingleViewReconstruction/SingleViewReconstruction/model, I get an error; the complete traceback is below. Can you please help me out?
Thanks
Trilok

/home/trilok/SVR-final/SingleViewReconstruction/SingleViewReconstruction/src/SettingsReader.py:16: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  settings = yaml.load(stream)
Traceback (most recent call last):
  File "predict_datapoint.py", line 200, in <module>
    predict_some_sample_points(hdf5_paths, model_path, args.output, args.use_pretrained_weights, args.use_gen_normal)
  File "predict_datapoint.py", line 90, in predict_some_sample_points
    settings = SettingsReader(settings_file_path, data_folder)
  File "/home/trilok/SVR-final/SingleViewReconstruction/SingleViewReconstruction/src/SettingsReader.py", line 85, in __init__
    raise Exception("No paths were found, please generate data before using this script, check this path: {}".format(self.folder_path))
Exception: No paths were found, please generate data before using this script, check this path: data

P.S.: The color_normal_mean.hdf5 file is also there in the data folder.
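
By the way, the YAMLLoadWarning at the top looks like it just needs an explicit loader in SettingsReader.py; a minimal sketch, assuming the settings file contains only plain YAML types:

```python
import yaml

with open(settings_file_path, "r") as stream:
    # SafeLoader avoids the deprecated default loader and only constructs
    # basic Python types, which is enough for a plain settings file
    settings = yaml.load(stream, Loader=yaml.SafeLoader)
```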

Hey,

could you be a bit more specific?

Best,
Max

Hey,

as the exception says, you have to generate the data yourself beforehand. We used SUNCG to generate the training data. However, SUNCG is no longer publicly available, so we cannot share the generated data ourselves. If you already have the SUNCG dataset, you can generate the data yourself.

Sorry for the inconvenience. If you don't have access to SUNCG, I would recommend looking into:

https://tianchi.aliyun.com/specials/promotion/alibaba-3d-scene-dataset

They also offer 3D indoor scenes. You might have to adjust the loaders for the SDFGen, but BlenderProc already fully supports 3D Front; see the sketch below.
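
If it helps, here is a minimal BlenderProc 2 sketch for rendering color and normal images from a 3D-FRONT scene. The paths are placeholders and the camera pose is just an example; see the front_3d example in the BlenderProc repo for the full pipeline:

```python
# run with: blenderproc run render_front3d.py
import os
import numpy as np
import blenderproc as bproc

bproc.init()

# map 3D-FRONT category names to ids via the mapping shipped with BlenderProc
mapping = bproc.utility.LabelIdMapping.from_csv(
    bproc.utility.resolve_resource(os.path.join("front_3D", "3D_front_mapping.csv")))

# placeholder paths for your local 3D-FRONT / 3D-FUTURE downloads
objs = bproc.loader.load_front3d(
    json_path="3D-FRONT/scene.json",
    future_model_path="3D-FUTURE-model",
    front_3D_texture_path="3D-FRONT-texture",
    label_mapping=mapping)

# one example camera pose; sample more poses for a real dataset
bproc.camera.add_camera_pose(np.eye(4))

# render color + normal images and write them as hdf5 containers
bproc.renderer.enable_normals_output()
data = bproc.renderer.render()
bproc.writer.write_hdf5("output", data)
```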

Best,
Max

Hey @themasterlink
Thank you so much for your detailed reply. However, we just wanted to test the results on a single data sample of yours, i.e. color_normal_mean.hdf5. Do we need train.tfrecord for it?
Thanks
Trilok

Hey,

for just one test image, not necessarily, but you need to change DataSetLoader.py:

def _deserialize_tfrecord(self, example_proto):

This function returns the current color image, normal image, and voxel information. You can change it to always return the same test sample here to get a certain result; see the sketch below. But please make sure to use a camera with the same intrinsics as used during training, as I do not expect the approach to work well with a shifted camera intrinsic.
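
A minimal sketch of such an override, assuming the test sample is stored in an hdf5 file; the file path and the dataset keys "colors", "normals", and "voxelgrid" are assumptions, so check your file with h5py first:

```python
import h5py
import numpy as np
import tensorflow as tf

def _deserialize_tfrecord(self, example_proto):
    # ignore the serialized example and always return the same test sample
    # (file path and dataset keys are assumptions -- adapt them to your file)
    with h5py.File("data/test_sample.hdf5", "r") as f:
        color = np.array(f["colors"], dtype=np.float32)
        normal = np.array(f["normals"], dtype=np.float32)
        voxel = np.array(f["voxelgrid"], dtype=np.float32)
    # tf.constant embeds the arrays, so the rest of the pipeline is unchanged
    return tf.constant(color), tf.constant(normal), tf.constant(voxel)
```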

You can generate a normal image with the UNetNormalGen.

The color_normal_mean.hdf5 is used in two places in the code.

The network was trained on images that were mean-shifted before usage; this has to be done on the test images as well, as sketched below.
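
A minimal sketch of that mean shift, assuming color_normal_mean.hdf5 holds a single dataset with the mean (inspect the file with h5py if the key or shape differs):

```python
import h5py
import numpy as np

def mean_shift(image, mean_path="data/color_normal_mean.hdf5"):
    """Subtract the stored training mean from a test image."""
    # assumes the file holds exactly one dataset containing the mean
    with h5py.File(mean_path, "r") as f:
        mean = np.array(f[list(f.keys())[0]], dtype=np.float32)
    return image.astype(np.float32) - mean
```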

Best,
Max