3DShapeGen

Code for the paper "3D Reconstruction of Novel Object Shapes from Single Images"

Primary language: Python · License: MIT

3D Reconstruction of Novel Object Shapes from Single Images


In this work, we present a comprehensive exploration of generalization to unseen shapes in single-view 3D reconstruction. We introduce SDFNet, an architecture combining 2.5D sketch estimation with a continuous shape regressor for signed distance functions of objects. We present new findings on how rendering variability and a 3-DOF VC (3 Degree-of-Freedom Viewer-Centered) coordinate representation affect generalization to object shapes not seen during training. Our model generalizes to objects of unseen categories, and to objects from a significantly different shape dataset. See our paper and our project webpage for details.
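To make the "continuous shape regressor" idea concrete: a conditioned network maps arbitrary 3D query points, together with a feature code extracted from the input image (via the 2.5D sketches), to signed distance values, so the surface is recovered as the zero level set. The following is a minimal NumPy sketch of that pattern; the function name, layer sizes, and feature dimension are illustrative assumptions, not the actual SDFNet architecture.

```python
import numpy as np

def sdf_regressor(points, feat, params):
    # Illustrative continuous regressor: concatenate each 3D query point
    # with the (shared) image feature code, then run a small MLP that
    # outputs one signed distance per point.
    feat = np.broadcast_to(feat, (points.shape[0], feat.shape[-1]))
    x = np.concatenate([points, feat], axis=-1)
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)   # ReLU hidden layers
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)       # shape (N,): distance to surface

# Hypothetical setup: random weights, 64-dim image code, 1024 query points
rng = np.random.default_rng(0)
dims = [3 + 64, 128, 128, 1]
params = [(rng.normal(0, 0.1, (a, b)), np.zeros(b))
          for a, b in zip(dims, dims[1:])]
points = rng.uniform(-1.0, 1.0, (1024, 3))  # viewer-centered queries in [-1, 1]^3
feat = rng.normal(size=64)                  # e.g. an encoding of the 2.5D sketches
sdf = sdf_regressor(points, feat, params)   # (1024,) signed distances
```

Because the regressor is defined on continuous coordinates rather than a fixed voxel grid, the surface can be extracted at any resolution (e.g. by evaluating the SDF on a dense grid and running Marching Cubes on the zero crossing).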

This repository contains the code for rendering, training, and evaluating SDFNet as well as the baseline method Occupancy Networks. Code to reproduce results for the baseline method GenRe can be found here.

Training and evaluating SDFNet and OccNet

Follow the instructions in the SDFNet README.

Training and evaluating GenRe

Follow the instructions in the GenRe README.

Rendering

Follow the instructions in the Rendering README.