
Pytorch3D-Me

A PyTorch3D 0.6.1 extension with the features introduced in FitMe (CVPR 2023) and AvatarMe++ (TPAMI 2021), adding functionality for texturing and shading. In detail, we add:

  • A renderer object for rendering directly in UV space,
  • A Blinn-Phong-based shader,
  • The option to use multiple reflectance textures with a single mesh, including diffuse albedo, specular albedo, diffuse normals, specular normals, and occlusion shadow,
  • Spatially-varying specular shininess,
  • A subsurface-scattering approximation with spatially-varying translucency,
  • Multiple point and directional lights per rendered batch item.
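As a rough illustration of the shading terms above, here is a pure-Python sketch (our own simplification for a single point, not the library's implementation, which operates on batched texture tensors): a Blinn-Phong specular highlight with a per-texel shininess, and a wrap-lighting term, a common cheap stand-in for subsurface scattering with a per-texel translucency:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong_specular(normal, light_dir, view_dir, shininess):
    """Blinn-Phong highlight: (N . H)^s, where H is the half vector
    between the light and view directions. The shininess exponent s
    can vary per texel (spatially-varying specular shininess)."""
    half = _normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(0.0, _dot(_normalize(normal), half)) ** shininess

def wrap_diffuse(normal, light_dir, translucency):
    """Wrap-lighting diffuse term: light 'wraps around' the terminator,
    a cheap subsurface-scattering approximation. translucency in [0, 1]
    can also vary per texel."""
    n_dot_l = _dot(_normalize(normal), _normalize(light_dir))
    return max(0.0, (n_dot_l + translucency) / (1.0 + translucency))

# Light and camera head-on: maximal highlight and full diffuse.
spec = blinn_phong_specular((0, 0, 1), (0, 0, 1), (0, 0, 1), 32.0)
diff = wrap_diffuse((0, 0, 1), (0, 0, 1), 0.5)
```

With translucency > 0, a light grazing the surface (perpendicular to the normal) still contributes some diffuse light, which is what softens the hard terminator on skin.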



Below we show a skin shading comparison between: a) the PyTorch3D TexturedSoftPhongShader with the albedo texture and shape normals, b) our PyTorch3D-Me Blinn-Phong shader with separate textures for diffuse and specular albedo and normals, c) the previous with an additional subsurface scattering approximation, and d) the previous with an additional occlusion shadow. Additional discussion is included in the AvatarMe++ paper, and the qualitative comparison is shown below:

AvatarMe Rendering Comparisons

Rendering with all added features is about 15% slower than the standard PyTorch3D SoftPhongShader.

Installation

To install PyTorch3D-Me, you need to build this repo from source, following the standard installation instructions in INSTALL.md. In short, first install the prerequisites:

conda create -n pytorch3d python=3.9
conda activate pytorch3d
conda install -c pytorch pytorch=1.9.1 torchvision cudatoolkit=10.2
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub

# Demos and examples
conda install jupyter
pip install scikit-image matplotlib imageio plotly opencv-python

And then build and install the project:

cd pytorch3d-me
pip install -e .

Getting Started

You can use pytorch3d-me in the same manner as pytorch3d, along with our expanded Textures and Shaders classes and io functions.

To load a set of reflectance textures, you can use:

from pytorch3d.io import load_objs_and_textures

meshes = load_objs_and_textures(mesh_dir,
                    diffAlbedos=da_dir, specAlbedos=sa_dir,
                    diffNormals=dn_dir, specNormals=sn_dir,
                    shininess=sh_dir, translucency=tr_dir,
                    device=device)

where each _dir path points to a list of image files of the same dimensions.

To use our Blinn-Phong shader with spatially-varying reflectance, pass the MultiTexturedSoftPhongShader to the MeshRenderer constructor, with the optional highlight='blinn_phong' argument for Blinn-Phong shading, and normal_space='tangent' for tangent-space specular normals instead of object-space:

from pytorch3d.renderer import MeshRenderer, MeshRasterizer, MultiTexturedSoftPhongShader

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras, raster_settings=raster_settings
    ),
    shader=MultiTexturedSoftPhongShader(
        device=device, cameras=cameras, lights=lights,
        highlight='blinn_phong', normal_space='tangent'
    )
)

A detailed example is included in demo/demo.ipynb. For any further questions, please raise an Issue or contact us.

Citations

If you find this extension useful in your research, consider citing the works below:

@inproceedings{lattas2023fitme,
  title={FitMe: Deep Photorealistic 3D Morphable Model Avatars},
  author={Lattas, Alexandros and Moschoglou, Stylianos and Ploumpis, Stylianos
          and Gecer, Baris and Deng, Jiankang and Zafeiriou, Stefanos},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={8629--8640},
  year={2023}
}

@article{lattas2021avatarme++,
  title={AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs},
  author={Lattas, Alexandros and Moschoglou, Stylianos and Ploumpis, Stylianos
          and Gecer, Baris and Ghosh, Abhijeet and Zafeiriou, Stefanos},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={44},
  number={12},
  pages={9269--9284},
  year={2021},
  publisher={IEEE}
}

as well as the main Pytorch3D project:

@article{ravi2020pytorch3d,
    author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
                  and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
    title = {Accelerating 3D Deep Learning with PyTorch3D},
    journal = {arXiv:2007.08501},
    year = {2020},
}

For completeness, we copy below the official README of PyTorch3D:

Introduction

PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.

Key features include:

  • Data structure for storing and manipulating triangle meshes
  • Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions)
  • A differentiable mesh renderer
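To give an intuition for the differentiable renderer, here is a minimal single-pixel sketch (our own simplification, not PyTorch3D's actual soft-rasterization implementation): instead of a hard nearest-face choice, each face contributes to a pixel with a smooth sigmoid coverage weight, so gradients can flow through face positions:

```python
import math

def soft_blend(colors, signed_dists, sigma=1e-2):
    """Blend per-face colors at one pixel with smooth coverage weights.
    signed_dists are signed distances from the pixel to each face
    (positive = covered); the sigmoid makes coverage differentiable
    instead of a hard in/out test."""
    weights = [1.0 / (1.0 + math.exp(-d / sigma)) for d in signed_dists]
    total = sum(weights)
    if total == 0.0:
        return 0.0  # no face contributes: background
    return sum(c * w for c, w in zip(colors, weights)) / total

# One face well inside the pixel: its color dominates the blend.
pixel = soft_blend([0.8], [1.0])
```

Shrinking sigma sharpens the sigmoid toward a hard rasterizer; a larger sigma trades image sharpness for smoother, better-behaved gradients.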

PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. For this reason, all operators in PyTorch3D:

  • Are implemented using PyTorch tensors
  • Can handle minibatches of heterogeneous data
  • Can be differentiated
  • Can utilize GPUs for acceleration
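As a toy illustration of the minibatch point (our own sketch, not PyTorch3D's actual Meshes API, which provides list, packed, and padded tensor views), heterogeneous batches are typically handled by padding variable-length elements to a rectangular batch while tracking each element's true length:

```python
def pad_batch(vertex_lists, pad_value=0.0):
    """Pad variable-length per-mesh lists to a rectangular batch and
    record the true number of entries per mesh, so downstream ops can
    mask out the padding."""
    max_len = max(len(v) for v in vertex_lists)
    padded = [list(v) + [pad_value] * (max_len - len(v)) for v in vertex_lists]
    lengths = [len(v) for v in vertex_lists]
    return padded, lengths

# Two meshes with different vertex counts in one batch.
padded, lengths = pad_batch([[1.0, 2.0], [3.0, 4.0, 5.0]])
```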

Within FAIR, PyTorch3D has been used to power research projects such as Mesh R-CNN.

Installation

For detailed instructions refer to INSTALL.md.

License

PyTorch3D is released under the BSD License.

Tutorials

Get started with PyTorch3D by trying one of the tutorial notebooks.

  • Deform a sphere mesh to dolphin
  • Bundle adjustment
  • Render textured meshes
  • Camera position optimization
  • Render textured pointclouds
  • Fit a mesh with texture
  • Render DensePose data
  • Load & Render ShapeNet data
  • Fit Textured Volume
  • Fit A Simple Neural Radiance Field

Documentation

Learn more about the API by reading the PyTorch3D documentation.

We also have deep dive notes on several API components.

Overview Video

We have created a short (~14 min) video tutorial providing an overview of the PyTorch3D codebase, including several code examples; the video is available on YouTube.

Development

We welcome new contributions to PyTorch3D and we will be actively maintaining this library! Please refer to CONTRIBUTING.md for full instructions on how to run the code, tests and linter, and submit your pull requests.

Development and Compatibility

  • main branch: actively developed, without any guarantee; anything can be broken at any time
    • REMARK: this includes nightly builds which are built from main
    • HINT: the commit history can help locate regressions or changes
  • backward compatibility between releases: no guarantee; best efforts to communicate breaking changes and facilitate migration of code or data (incl. models).

Contributors

PyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.

In alphabetical order:

  • Amitav Baruah
  • Steve Branson
  • Luya Gao
  • Georgia Gkioxari
  • Taylor Gordon
  • Justin Johnson
  • Patrick Labatut
  • Christoph Lassner
  • Wan-Yen Lo
  • David Novotny
  • Nikhila Ravi
  • Jeremy Reizenstein
  • Dave Schnizlein
  • Roman Shapovalov
  • Olivia Wiles

Citation

If you find PyTorch3D useful in your research, please cite our tech report:

@article{ravi2020pytorch3d,
    author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
                  and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
    title = {Accelerating 3D Deep Learning with PyTorch3D},
    journal = {arXiv:2007.08501},
    year = {2020},
}

If you are using the pulsar backend for sphere-rendering (the PulsarPointRenderer or pytorch3d.renderer.points.pulsar.Renderer), please cite the tech report:

@article{lassner2020pulsar,
    author = {Christoph Lassner and Michael Zollh\"ofer},
    title = {Pulsar: Efficient Sphere-based Neural Rendering},
    journal = {arXiv:2004.07484},
    year = {2020},
}

News

Please see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. The changelogs for the releases are available under Releases, and the builds can be installed using conda as per the instructions in INSTALL.md.

[Oct 6th 2021]: PyTorch3D v0.6.0 released

[Aug 5th 2021]: PyTorch3D v0.5.0 released

[Feb 9th 2021]: PyTorch3D v0.4.0 released with support for implicit functions, volume rendering and a reimplementation of NeRF.

[November 2nd 2020]: PyTorch3D v0.3.0 released, integrating the pulsar backend.

[Aug 28th 2020]: PyTorch3D v0.2.5 released

[July 17th 2020]: PyTorch3D tech report published on ArXiv: https://arxiv.org/abs/2007.08501

[April 24th 2020]: PyTorch3D v0.2.0 released

[March 25th 2020]: SynSin codebase released using PyTorch3D: https://github.com/facebookresearch/synsin

[March 8th 2020]: PyTorch3D v0.1.1 bug fix release

[Jan 23rd 2020]: PyTorch3D v0.1.0 released. Mesh R-CNN codebase released: https://github.com/facebookresearch/meshrcnn