hzxie/Pix2Vox

About URNet


Great work! I'd like to reproduce it, but I ran into some confusion.

  1. Regarding the SUN3D dataset: I don't know how to download it. The official website https://sun3d.cs.princeton.edu/ lists many folders; which one should I download?

  2. Which dataset was the Pix2Vox++ used in URNet trained on (both for the SUN3D experiments and for the final demonstration)?

Also, I don't know whether you have any plans to open-source it. If you could, it would be greatly appreciated.

What is URNet?

SUNCG is no longer available due to the lawsuit between Planner 5D and Princeton.

> What is URNet?

URNet is the scene reconstruction network described in Chapter 5 of your thesis (3D Scene and Object Reconstruction from Multiple Sources and Viewpoints).
I'm trying to reproduce it, but I don't know which dataset the Pix2Vox++ you used was trained on.
I'm guessing Things3D-Chairs-RFC? Or should I just use ShapeNet-Chairs directly?

I meant SUN3D, not SUNCG or Things3D, because your thesis mentions that this dataset facilitates comparisons of multi-view 3D scene reconstruction. So I tried to download it, but there were too many folders, and I didn't know which one to choose.
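For anyone stuck at the same step: below is a minimal sketch of how one might pull a single SUN3D sequence programmatically. The sequence-list file (`SUN3Dv1.txt`), the `data/` base URL, and the per-sequence `image/` and `depth/` folders served as plain directory indexes are all assumptions about the site layout, not something confirmed in this thread; inspect the website and adjust the URLs accordingly.

```python
# Minimal sketch for downloading one SUN3D sequence.
# ASSUMPTIONS (not confirmed in this thread): a plain-text sequence list
# at SUN3Dv1.txt, and per-sequence image/ and depth/ folders exposed as
# HTML directory indexes under http://sun3d.cs.princeton.edu/data/.
import os
import re
import urllib.request

BASE_URL = "http://sun3d.cs.princeton.edu/data/"          # assumed data root
LIST_URL = "http://sun3d.cs.princeton.edu/SUN3Dv1.txt"    # assumed sequence list

def list_sequences():
    """Fetch sequence paths such as 'harvard_c5/hv_c5_1/'."""
    with urllib.request.urlopen(LIST_URL) as f:
        return [line.strip() for line in f.read().decode().splitlines() if line.strip()]

def list_files(index_url):
    """Scrape image file names from an Apache-style directory index (assumption)."""
    with urllib.request.urlopen(index_url) as f:
        html = f.read().decode()
    # Keep hrefs that look like frame files (.jpg/.png), skipping links to
    # parent directories and sort-order query strings.
    return re.findall(r'href="([^"?/][^"]*\.(?:jpg|png))"', html)

def download_sequence(seq, out_dir="sun3d", subdirs=("image/", "depth/")):
    """Download the image and depth frames of one sequence."""
    for sub in subdirs:
        url = BASE_URL + seq + sub
        dst = os.path.join(out_dir, seq, sub)
        os.makedirs(dst, exist_ok=True)
        for name in list_files(url):
            urllib.request.urlretrieve(url + name, os.path.join(dst, name))

if __name__ == "__main__":
    seqs = list_sequences()
    print(f"{len(seqs)} sequences available; downloading the first one")
    download_sequence(seqs[0])
```

If the actual site layout differs, only `BASE_URL`, `LIST_URL`, and the `list_files` scraper should need changing.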

In which paper did I mention SUN3D?
Sorry, the work was done more than half a decade ago; I cannot remember the details.

Haha, it is mentioned on page 85 of the thesis (3D Scene and Object Reconstruction from Multiple Sources and Viewpoints). Click the link to jump to it.

What should I do to reproduce these results?