dariopavllo/convmesh

Which 3D Visualization Tool/Code Do You Use?

meijie0401 opened this issue · 3 comments

I'm wondering which visualization tool/code you use to show the 3D textured meshes in this repo and in the paper. Could you please provide some sources/code/links? Thanks!

Most of the visualizations in the paper/repo were rendered using the script itself (e.g. you can modify the code that exports results to Tensorboard). In the repo you can also find the wireframe texture that was used to render the wireframe meshes: https://github.com/dariopavllo/convmesh/blob/master/mesh_templates/wireframe_16rings.png
Tips: render on a white background and at a higher resolution (e.g. 1024x1024 instead of 256x256), then resize the image to 256x256 for anti-aliasing.
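The supersample-then-resize trick above can be sketched like this (a minimal sketch with NumPy only; block averaging stands in for whatever resize filter you prefer, e.g. Lanczos in Pillow):

```python
import numpy as np

def antialias(img_hi, factor=4):
    """Downsample a (H, W, 3) uint8 render by averaging factor x factor
    pixel blocks. Rendering at 4x resolution (1024x1024) and averaging
    down to 256x256 smooths out jagged mesh edges."""
    h, w, c = img_hi.shape
    assert h % factor == 0 and w % factor == 0
    blocks = img_hi.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

# Example with a dummy 1024x1024 "render" (stand-in for the real renderer output)
hi = (np.random.rand(1024, 1024, 3) * 255).astype(np.uint8)
lo = antialias(hi)
print(lo.shape)  # (256, 256, 3)
```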

Alternatively, you can export the generated meshes as .obj files and open/render them with Blender. The instructions are in the readme.

Thanks for your answer! I see your code can export the rendered 2D images to TensorBoard. But how do I export the predicted textured 3D mesh to TensorBoard using the writer.add_mesh function, which requires a color input for each vertex? I already exported the mesh without texture to TensorBoard, so the remaining question is how to find the mapping between the UV texture and the vertices so that writer.add_mesh can get the required color input.

Or how do you visualize the predicted 3D textured mesh during training? Do you only use Blender to visualize the generated textured mesh after training?

I was not aware of that feature of Tensorboard. For our visualizations during training, we simply render the meshes from random viewpoints using the renderer. This is done in the function that computes the FID.

It looks like that TensorBoard function doesn't support textures, only vertex colors, so the visualization might come out blurry.
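If you still want to try it, one way to bake a UV texture into per-vertex colors is to look up each vertex's UV coordinate in the texture image. This is only a rough sketch: the names `uvs` (a (V, 2) array in [0, 1]) and `texture` (an (H, W, 3) uint8 image) are assumptions, and you would need to adapt the lookup to however this repo actually stores UVs.

```python
import numpy as np

def vertex_colors_from_uv(uvs, texture):
    """Nearest-neighbor lookup of an RGB texture at each vertex's UV.
    uvs: (V, 2) array in [0, 1]; texture: (H, W, 3) uint8 image.
    Returns a (V, 3) uint8 array of per-vertex colors."""
    h, w, _ = texture.shape
    # UV origin is conventionally bottom-left, image rows go top-down,
    # hence the (1 - v) flip on the vertical axis.
    x = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    y = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return texture[y, x]

# Usage with TensorBoard (all tensors need a leading batch dimension):
# colors = vertex_colors_from_uv(uvs, texture)
# writer.add_mesh('mesh', vertices=verts[None], colors=colors[None], faces=faces[None])
```

Since this keeps only one color per vertex, the result is as coarse as the mesh itself, which is why it looks blurry compared to rendering with the full texture.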