
Neural 3D Mesh Renderer

This is the code for the paper Neural 3D Mesh Renderer by Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada.

This repository only contains the core component and simple examples. Other examples are on the project page.

Example 1: Drawing an object from multiple viewpoints
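
Below is a minimal sketch of this example, assuming an interface along the lines of `neural_renderer.load_obj`, `neural_renderer.Renderer`, and a `get_points_from_angles` helper for placing the camera. The file path, angles, and the silhouette-only output are simplifications; the example script in this repository is the authoritative reference, and the rasterizer may require a GPU.

```python
import numpy as np
import neural_renderer

# Placeholder mesh path; load_obj is assumed to return a [num_vertices, 3]
# vertex array and a [num_faces, 3] face array.
vertices, faces = neural_renderer.load_obj('./examples/data/teapot.obj')
vertices = vertices[None, :, :].astype(np.float32)  # add a batch dimension
faces = faces[None, :, :]

renderer = neural_renderer.Renderer()

# Sweep the camera around the object and render one image per viewpoint.
images = []
for azimuth in range(0, 360, 30):
    # get_points_from_angles(distance, elevation, azimuth) is an assumed helper.
    renderer.eye = neural_renderer.get_points_from_angles(2.732, 30, azimuth)
    images.append(renderer.render_silhouettes(vertices, faces).data)
```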

Example 2: Optimizing vertices

Transforming the silhouette of a teapot into a rectangle. The loss function is the difference between the rendered image and the reference image.

Reference image, optimization, and the result.
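
A rough sketch of the optimization loop, assuming the Chainer-based interface (`neural_renderer.Renderer`, `render_silhouettes`, `load_obj`); the file paths, step size, and iteration count are placeholders, and the actual example script should be consulted for the real setup.

```python
import chainer
import chainer.functions as cf
import imageio
import numpy as np
import neural_renderer

# Placeholder input files.
vertices_np, faces = neural_renderer.load_obj('./examples/data/teapot.obj')
vertices = chainer.Variable(vertices_np[None, :, :].astype(np.float32))
faces = faces[None, :, :]

# Reference silhouette, assumed to be an RGB image; averaged to one channel in [0, 1].
image_ref = imageio.imread('./examples/data/example2_ref.png')
image_ref = image_ref.astype(np.float32).mean(-1) / 255.

renderer = neural_renderer.Renderer()

for i in range(300):
    images = renderer.render_silhouettes(vertices, faces)
    # Loss: squared difference between rendered and reference silhouettes.
    loss = cf.sum(cf.square(images - image_ref[None, :, :]))
    vertices.cleargrad()
    loss.backward()
    # Plain gradient descent on the vertex positions (step size is a guess).
    vertices.data -= 0.01 * vertices.grad
```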

Example 3: Optimizing textures

Matching the color of a teapot with a reference image.

Reference image and the result.
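
The same pattern with the texture tensor as the optimized variable. The `renderer.render(vertices, faces, textures)` call and the `[batch, num_faces, size, size, size, 3]` texture layout are assumptions, and all paths and hyperparameters are placeholders.

```python
import chainer
import chainer.functions as cf
import imageio
import numpy as np
import neural_renderer

vertices, faces = neural_renderer.load_obj('./examples/data/teapot.obj')  # placeholder path
vertices = vertices[None, :, :].astype(np.float32)
faces = faces[None, :, :]

# Texture tensor; the [batch, num_faces, size, size, size, 3] layout is an assumption.
textures = chainer.Variable(np.zeros((1, faces.shape[1], 4, 4, 4, 3), dtype=np.float32))

# Reference image in [0, 1], channel-first (assumed output layout of the renderer).
image_ref = imageio.imread('./examples/data/example3_ref.png')
image_ref = image_ref.astype(np.float32)[:, :, :3].transpose(2, 0, 1) / 255.

renderer = neural_renderer.Renderer()

for i in range(300):
    # Sigmoid keeps the optimized colors in [0, 1].
    images = renderer.render(vertices, faces, cf.sigmoid(textures))
    loss = cf.sum(cf.square(images - image_ref[None]))
    textures.cleargrad()
    loss.backward()
    # Gradient descent on the texture values (step size is a guess).
    textures.data -= 0.1 * textures.grad
```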

Example 4: Finding camera parameters

The derivative of images with respect to the camera pose can be computed through this renderer. In this example, the position of the camera is optimized by gradient descent.

From left to right: reference image, initial state, and optimization process.
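
A sketch of that loop, assuming the camera position can be assigned through an attribute such as `renderer.eye` and that gradients flow back to it; the initial position, reference file, and step size are placeholders.

```python
import chainer
import chainer.functions as cf
import imageio
import numpy as np
import neural_renderer

vertices, faces = neural_renderer.load_obj('./examples/data/teapot.obj')  # placeholder path
vertices = vertices[None, :, :].astype(np.float32)
faces = faces[None, :, :]

# Reference silhouette seen from the (unknown) target camera position;
# assumed to be an RGB image, averaged to one channel in [0, 1].
image_ref = imageio.imread('./examples/data/example4_ref.png')
image_ref = image_ref.astype(np.float32).mean(-1) / 255.

renderer = neural_renderer.Renderer()

# Initial camera position (a guess, away from the target).
camera_position = chainer.Variable(np.array([6.0, 10.0, -14.0], dtype=np.float32))

for i in range(1000):
    renderer.eye = camera_position          # assumed way of setting the camera
    images = renderer.render_silhouettes(vertices, faces)
    loss = cf.sum(cf.square(images - image_ref[None, :, :]))
    camera_position.cleargrad()
    loss.backward()
    # Gradient descent directly on the camera position.
    camera_position.data -= 0.05 * camera_position.grad
```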

Citation

@article{kato2017renderer,
  title={Neural 3D Mesh Renderer},
  author={Kato, Hiroharu and Ushiku, Yoshitaka and Harada, Tatsuya},
  journal={arXiv:1711.07566},
  year={2017}
}