panmari/stanford-shapenet-renderer

cam.location?

Closed this issue · 2 comments

Hello

Thanks for providing this wonderful repo. Probably it's not a good idea to ask here, but I am wondering how cam.location works in Blender?

When cam.location = [0, 0, 0], I think the camera is located at the origin and the render result is empty, because the object is also at the origin. When cam.location = [1, 1, 1], I believe the angle is 45 degrees from each axis and the resulting rendered image covers the whole object.

I am a bit confused about what [1, 1, 1] means. Clearly the values are not pixels, nor any obvious axis unit.
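(Treating the value as a plain 3-vector, its distance from the origin and its angle to each axis are just ordinary geometry, nothing Blender-specific:)

```python
import math

# Treat cam.location = [1, 1, 1] as a plain 3D vector and check its
# geometry: distance from the origin and angle to each coordinate axis.
loc = (1.0, 1.0, 1.0)
dist = math.sqrt(sum(c * c for c in loc))       # Euclidean distance
angle = math.degrees(math.acos(loc[0] / dist))  # angle to the x-axis

print(dist)   # ~1.732, i.e. sqrt(3)
print(angle)  # ~54.74 degrees; by symmetry the same for each axis
```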

Thanks
Joe

Glad you find it useful!

Coordinates are to be understood in world space. The script does not set the position of the object; it simply relies on the fact that bpy.ops.import_scene.obj initializes objects centered at the origin.
The camera, however, needs to be set relative to the size of the imported object (which might be in ANY coordinate range). Thus you might want to either

  1. Change the position of the camera appropriately.
  2. Normalize the object to fit into the scene.
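One rough way to do option 1 is to back the camera off by a distance derived from the object's bounding-sphere radius and the camera's field of view. This is only a sketch; the fov_deg and margin defaults are illustrative assumptions, not values from the script:

```python
import math

def camera_distance(bound_radius, fov_deg=90.0, margin=1.0):
    # Move the camera back until the object's bounding sphere fits
    # inside the view cone. fov_deg and margin are assumed defaults.
    return margin * bound_radius / math.sin(math.radians(fov_deg) / 2.0)

# For a unit-radius object and a 90-degree FOV, the camera should sit
# about sqrt(2) world units from the object's center.
print(camera_distance(1.0))
```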

For my use case this was never a problem, as my models from the shapenet dataset were properly normalized.
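For models that are not pre-normalized, option 2 can be sketched like this (pure Python on a hypothetical vertex list; in Blender you would read the vertices from the imported object's mesh data instead):

```python
def normalize_to_unit_cube(vertices):
    # Center the vertices at the origin and scale them uniformly so the
    # largest bounding-box side becomes 1, preserving the aspect ratio.
    xs, ys, zs = zip(*vertices)
    center = tuple((min(a) + max(a)) / 2.0 for a in (xs, ys, zs))
    extent = max(max(a) - min(a) for a in (xs, ys, zs))
    scale = 1.0 / extent if extent else 1.0
    return [tuple((c - m) * scale for c, m in zip(v, center))
            for v in vertices]

verts = [(10.0, 10.0, 10.0), (14.0, 12.0, 10.0)]
print(normalize_to_unit_cube(verts))
```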

I added another parameter for controlling the scaling of the model. If you want to make it more generic, I'm happy to accept a CL :)
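The effect of that flag is just a uniform multiply of the model's coordinates; a toy equivalent (the real script applies the scale through Blender, this is only the arithmetic):

```python
def apply_scale(vertices, scale):
    # Multiply every coordinate by the same factor, as a uniform
    # scale parameter would.
    return [tuple(c * scale for c in v) for v in vertices]

print(apply_scale([(25.0, 50.0, 100.0)], 0.04))
```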

Unscaled: (screenshot)

With --scale=0.04: (screenshot)