maximeraafat/BlenderNeRF

How to start? Please provide more detailed documentation.


  1. What is the input?
  2. What is the output?
  3. Could I add a virtual camera through Blender, rather than the Instant NGP GUI?
  4. Or could I just replace COLMAP to produce transforms.json?
  5. ....

Thank you very much!

Hi @ihorizons2022, thanks for your interest! Here are some details for each method.

  1. Subset of Frames (SOF) method
    The input is the selected camera (ideally animated) in a 3D scene. The method renders every n-th frame of that camera. The add-on saves the rendered frames in a train folder, and the camera poses corresponding to the rendered frames in a transforms_train.json file. Another transforms_test.json file is created, containing the camera poses for all frames of the selected camera (not just every n-th frame). See the sketch after this list for the rough layout of these transforms files.

  2. Train and Test Cameras (TTC) method
    The input consists of two chosen cameras, one used for training and one for testing. This method operates like the SOF method, except that all frames of the training camera are rendered (and stored in a train folder and a transforms_train.json file), while the camera path of the testing camera is stored in the transforms_test.json file.

  3. Camera on Sphere (COS) method
    This method needs no explicit input; instead, you adjust parameters that randomly place a camera on a controllable sphere. The train images and the transforms_train.json camera path are then rendered and extracted from the random views sampled on the sphere, and the transforms_test.json file is created from the default selected scene camera.
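For reference, the transforms files mentioned above follow roughly the NeRF / Instant NGP convention sketched below. The exact set of keys can vary between add-on versions, so treat the field names and values here as illustrative assumptions rather than the exact output:

{
    "camera_angle_x": 0.6911112070083618,
    "frames": [
        {
            "file_path": "train/0001",
            "transform_matrix": [
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 2.0],
                [0.0, 0.0, 0.0, 1.0]
            ]
        }
    ]
}

Each transform_matrix is a 4x4 camera-to-world pose, and camera_angle_x is the horizontal field of view in radians.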

Each method therefore relies only on a 3D scene in Blender, which is then rendered from a specific camera (using either Cycles or Eevee, depending on the selected rendering engine). The output of each method is the data needed to train and evaluate a NeRF (using for example Instant NGP).
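Concretely, a run should leave you with a folder layout along these lines (the image file names are illustrative):

dataset/
├── train/                   # rendered training images
│   ├── 0001.png
│   └── ...
├── transforms_train.json    # camera poses for the rendered frames
└── transforms_test.json     # camera poses used for evaluation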

Regarding your question on whether it is possible to import a camera from Blender into NGP: for now this is only doable via the command line interface, by evaluating your NeRF with the transforms_test.json file. I recently realised that it is possible to import a camera path into the NGP GUI, and will hopefully soon add functionality to the add-on to support this.
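As a sketch of that command line workflow with Instant NGP's scripts/run.py (the flag names below may differ between versions, so double-check them against your checkout):

python scripts/run.py --scene path/to/dataset \
    --screenshot_transforms path/to/transforms_test.json \
    --screenshot_dir renders/

This renders one image per camera pose listed in transforms_test.json into the renders/ directory.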

I'll keep you posted on any updates, but feel free to ask more questions in the meantime!

Thank you for your reply.
Suppose I take 20 images of one statue: if I want to create the camera trajectory in Blender instead of the Instant NGP GUI, and use it as the base_cam.json that Instant NGP needs to render a video, could I use the add-on to achieve that?

Yes, exactly, that's the idea. You will also find more information on your request here :)
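For context, once you have a base_cam.json (for instance from the script further down this thread), the video render step in Instant NGP looks roughly like the following. The flag names come from scripts/run.py and a trained snapshot is assumed, so treat this as a sketch rather than an exact recipe:

python scripts/run.py --scene path/to/dataset \
    --load_snapshot snapshot.msgpack \
    --video_camera_path base_cam.json \
    --video_n_seconds 5 --video_fps 30 \
    --video_output render.mp4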

Hi, sorry to possibly ask something similar again. I'm trying to take a camera path animation from CamTrackAR and convert it in Blender for use in the NVIDIA Instant NGP. I've gotten somewhat close, and found your BlenderNeRF add-on here, but I'm having trouble figuring out how to use it properly. It doesn't provide a camera path file (like base_cam.json), and the transforms_test.json messes with my already trained NeRF. Is there a way to import only the camera path data into an already trained NeRF?

Hi @nebulancex, thanks for your message. Unfortunately, I never managed to get the exact camera pose from Blender into the base_cam.json format, since it follows a somewhat different convention than the transforms.json files. The best I achieved is the code below, directly copied from this issue (more details there), which yields NeRF renders rotated by 180°. You can copy-paste this code into the Blender text editor and run the script; it will create a base_cam.json from the active camera at the specified OUTPUT_PATH location. I hope this helps!

# Copyright (c) 2023 Maxime Raafat

import os
import math
import json
import numpy as np
import mathutils
import bpy


# CHANGE OUTPUT PATH
OUTPUT_PATH = os.path.dirname(bpy.data.filepath) # save to where .blend file is located (cannot be /tmp/)

# save dictionary
def save_json(directory, filename, data, indent=4):
    filepath = os.path.join(directory, filename)
    with open(filepath, 'w') as file:
        json.dump(data, file, indent=indent)

# build the per-frame camera dictionary expected by NGP's base_cam.json
def get_ngp_camera_path(camera):
    cam_matrix = np.array(camera.matrix_world)

    cam_matrix[:,1] *= -1 # flip y axis
    cam_matrix[:,2] *= -1 # flip z axis
    cam_matrix[:3,3] *= 0.33 # scale translation into the NGP unit cube
    cam_matrix[:3,3] += [0.5, 0.5, 0.5] # offset towards the cube center

    cam_matrix = cam_matrix[[1, 2, 0, 3], :] # cycle the axes: (x, y, z) -> (y, z, x)

    translation, rotation, _ = mathutils.Matrix(cam_matrix).decompose()

    frame_data = {
        'R': list(rotation), # rotation as a quaternion (w, x, y, z)
        'T': list(translation), # camera position as a 3-vector
        'aperture_size': 0.0,
        'fov': math.degrees(camera.data.angle_x), # horizontal field of view in degrees
        'glow_mode': 0.0,
        'glow_y_cutoff': 0.0,
        'scale': 1.0,
        'slice': 0.0
    }

    return frame_data


# scene and camera
scene = bpy.context.scene
camera = scene.camera

# instantiate ngp camera dictionary
ngp_cam_data = {'loop': False, 'time': 1.0}
ngp_cam_path = []

initFrame = scene.frame_current

# iterate over frames and append to the ngp camera path (frame_end is inclusive in Blender)
for frame in range(scene.frame_start, scene.frame_end + 1, scene.frame_step):
    scene.frame_set(frame)
    ngp_cam_path.append(get_ngp_camera_path(camera))

scene.frame_set(initFrame) # set back to initial frame

ngp_cam_data['path'] = ngp_cam_path

# save ngp camera
save_json(OUTPUT_PATH, 'base_cam.json', ngp_cam_data)
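
If you prefer not to open the Blender UI, the same script can also be run headless on a saved .blend file (the script file name below is hypothetical):

blender --background scene.blend --python export_base_cam.py

The resulting base_cam.json is then written next to the .blend file, as set by OUTPUT_PATH above.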