NVlabs/instant-ngp

How can I import a camera animation/path from other software?


Would it be possible to import a camera path from After Effects, or from 3D software like Blender?

Yes! Find more info in discussion 153.

Thank you for the quick response. I looked through discussion 153 but couldn't find an answer, so let me try to be more clear. Let's say I have a 3D camera track that I did inside of After Effects; how can I import the path/keyframes into instant-ngp so that I can play back the exact motion from the tracked camera? Would this be possible with no coding/scripting knowledge?
I didn't really understand the discussion you sent me to, but from what I can see they are talking about camera orientation and position, not paths or keyframes.

I don't have much knowledge about the coordinate conventions used in AE. I think you will need to write some code to convert the AE-style poses to NeRF or COLMAP poses (doing it with no code isn't possible...).

Here is an example that converts Blender camera data to NeRF JSON: https://github.com/not-lob/BlenderInstant-NGPScript
and another that converts Agisoft XML files to NeRF JSON: https://github.com/EnricoAhlers/agi2nerf
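
For reference, the transforms.json format those converters produce is essentially a camera field of view plus one 4x4 camera-to-world matrix per frame. Here is a minimal, hypothetical sketch of writing that structure (not code from either repo), assuming you have already extracted the matrices as nested lists from your source application:

import json
import math

def write_nerf_transforms(matrices, fov_x_degrees, path='transforms.json'):
    # matrices: list of 4x4 camera-to-world matrices (as nested lists)
    data = {
        'camera_angle_x': math.radians(fov_x_degrees),  # horizontal FOV, in radians
        'frames': [
            {'file_path': 'images/{:04d}.png'.format(i), 'transform_matrix': m}
            for i, m in enumerate(matrices)
        ]
    }
    with open(path, 'w') as file:
        json.dump(data, file, indent=4)

The matrices still need to be brought into the NeRF axis convention first; that conversion is exactly what the two scripts above handle.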

Good luck!

I just released an add-on for Blender to extract all the transforms from a camera path. Hope it does what you need : https://github.com/maximeraafat/BlenderNeRF :)

I am not trying to create a dataset using Blender; I just want to take an animated camera/camera path from Blender into instant-ngp. I just need a way to convert a camera move/keyframes made inside Blender into a base_cam.json file. I don't know how to articulate this better, sorry for any confusion.

With the add-on, you can simply deactivate the "Train" button and run the "Play TTC" button, with the Test Cam being your animated camera (just set the Train Cam to anything). This will create a JSON file with the camera keyframes that you need.

I've tried. The file doesn't look like base_cam.json, nor can it be read by the testbed, whatever option I choose in the panel. It always looks like a transforms.json from COLMAP.

I am having this same issue, and have tried the same things. Is there any update on this?

I'm not sure exactly what your issue is, but as far as I know the NGP graphical user interface does not support importing test cameras. This post might help: it provides more details on how to run NGP and get test images in the command line interface, using camera transforms obtained with BlenderNeRF.
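
For reference, a command along those lines would look roughly like this (a hypothetical invocation: the flags are from scripts/run.py, the paths are placeholders):

python3 scripts/run.py --scene <path_to_scene> --load_snapshot snapshot.ingp --screenshot_transforms transforms_test.json --screenshot_dir renders/ --screenshot_spp 16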

I'm trying to take the camera motion that I have in Blender and export it to render my NeRF in InstantNGP.

With BlenderNeRF you'll get a transforms_train.json file and a train folder. You can just drag and drop them into the NGP user interface, like in this video, and the training will start automatically.

@maximeraafat Hi, I think you might be misunderstanding what people are asking about.

People would like to import a "render" camera path from Blender into ingp, like the one you can create in ingp by using 'Add from cam'.

For example, here is a render camera path created by adding 3 cameras from the viewport:

[image: render camera path created from 3 viewport cameras]
ingp-cam-path.zip

I have attached the ingp-cam-path.json; would your add-on be able to support exporting this?

Hi @henrypearce4D, I see what the issue is now, thanks for clarifying! I do not have regular access to an NVIDIA GPU and therefore didn't get the chance to experiment enough with importing camera paths directly in the GUI. I mainly test BlenderNeRF with instant-ngp on Google Colab, and therefore didn't know it was possible to import a camera path in the GUI, as long as it follows a certain format.

I believe I should be able to write a script to convert the BlenderNeRF test camera path output to the NGP GUI format, and then add it to the add-on. I'll tackle this very soon and keep you posted :)

@maximeraafat thanks for the reply, that would be an amazing addition to your add-on!

@iscreamparis did you release the code for this? Is it possible to test it?

Hi @iscreamparis, thanks for the link. I just tested it and it exports a single .json for each keyframe, so for 250 frames, 250 files.

The cam path for instant-ngp should be one file; is your script working as intended?

Hi @henrypearce4D,

Since I currently don't have access to an NVIDIA GPU, I cannot test my code yet. I would therefore appreciate it if you could test whether this simple toy dataset works as expected. The zip file comprises the following data:

  • train folder : train renders for the NeRF
  • transforms_train.json : camera poses for the corresponding train renders
  • transforms_test.json : camera poses for the test data (here, an exact copy of the train camera poses)
  • base_cam.json : test camera poses that should now be readable by the NGP GUI

The base_cam.json file should therefore render the same frames as the train renders. If you (or anyone else) could run this and check whether the base_cam.json camera path file successfully loads a camera and renders the expected views, that would be awesome, and I could then soon add working code to BlenderNeRF :)

video.mp4

command:
~/test_donut# python3 ~/instant-ngp/scripts/run.py --scene ~/test_donut --save_snapshot ~/train/snapshot.ingp --video_camera_path ~/test_donut/base_cam.json --video_spp 2 --video_output ~/train/video.mp4 --video_fps 24 --video_n_seconds 10

@maximeraafat I did not change any parameters

@ihorizons2022 since I can actually test it in the command line interface, I'll debug it tonight or over the weekend in Colab, and will then release an update to BlenderNeRF. Hopefully by Sunday evening at the latest, you'll be able to extract a compatible base_cam.json camera path file from the add-on

@maximeraafat Hi, thanks for looking into it! Here are the base_cam.json cameras in relation to the scene cameras.
[images: the base_cam.json cameras shown relative to the scene cameras]

Could the script that iscreamparis posted above be useful for the matrix?
https://pastebin.com/fyRYhcXz

So, after having spent the last few days trying to figure out how to match the Blender and instant-ngp coordinate systems, I still couldn't get things to align perfectly. But here are my current results and what I have learned so far (the code to replicate my experiments is provided at the end of this post).

Loading a camera path with a consistent motion into NGP is not difficult; the real challenge is to align the Blender and NGP coordinate systems such that a camera motion in Blender renders the exact same views in NGP. From my understanding, the NGP coordinate system swaps and inverts Blender's axes in the following manner:

(x, y, z)_NGP = (y, -z, -x)_Blender

This conversion is performed in the following code snippet, called via this function within the scripts/run.py script, to map the camera matrix from the transforms_train.json file to the NGP coordinate system.

auto nerf_matrix_to_ngp(const Eigen::Matrix<float, 3, 4>& nerf_matrix) {
    Eigen::Matrix<float, 3, 4> result;
    int X = 0, Y = 1, Z = 2;
    result.col(0) = Eigen::Vector3f{ nerf_matrix(X,0),  nerf_matrix(Y,0),  nerf_matrix(Z,0)};
    result.col(1) = Eigen::Vector3f{-nerf_matrix(X,1), -nerf_matrix(Y,1), -nerf_matrix(Z,1)};
    result.col(2) = Eigen::Vector3f{-nerf_matrix(X,2), -nerf_matrix(Y,2), -nerf_matrix(Z,2)};
    result.col(3) = Eigen::Vector3f{ nerf_matrix(X,3),  nerf_matrix(Y,3),  nerf_matrix(Z,3)} * scale + offset;
    if (from_mitsuba) {
        result.col(0) *= -1;
        result.col(2) *= -1;
    } else {
        // Cycle axes xyz -> yzx
        Eigen::Vector4f tmp = result.row(0);
        result.row(0) = (Eigen::Vector4f)result.row(1);
        result.row(1) = (Eigen::Vector4f)result.row(2);
        result.row(2) = tmp;
    }
    return result;
}

Additionally, NGP's coordinate system is bounded within a unit cube (between 0 and 1), which is why the coordinates also require scaling by 0.33 and offsetting by 0.5 in x, y and z, so that the scene is centred within the cube (see the two code snippets below).

static constexpr float NERF_SCALE = 0.33f;

result.scale = NERF_SCALE;
result.offset = {0.5f, 0.5f, 0.5f};
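
For anyone wanting to replicate this conversion outside of the C++ codebase, here is a minimal numpy sketch of the same mapping (my own re-implementation for illustration, assuming a 3x4 camera-to-world matrix in the NeRF convention and the default scale and offset above):

import numpy as np

NERF_SCALE = 0.33
OFFSET = np.array([0.5, 0.5, 0.5])

def nerf_matrix_to_ngp(nerf_matrix):
    # nerf_matrix: 3x4 camera-to-world matrix in the NeRF convention
    result = nerf_matrix.astype(float).copy()
    result[:, 1] *= -1                                  # flip the y column
    result[:, 2] *= -1                                  # flip the z column
    result[:, 3] = result[:, 3] * NERF_SCALE + OFFSET   # scale and offset the translation
    return result[[1, 2, 0], :]                         # cycle axes xyz -> yzx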

All of this is coherent with post #490, yet for some reason the final NGP renders (images below, bottom row) are versions of the Blender renders (top row) rotated by 180°. The renders are the first three training images from the dataset provided above. I've tried all the axis rotations and inversions I could think of, but none of them resolved the issue; in fact, most transformations made it worse and resulted in empty/black renders. I also tried to reproduce the code provided by @iscreamparis, but that didn't do it either (again, it successfully loaded a camera into NGP, but it did not match the Blender coordinate system).

[image: Blender renders (top row) and NGP renders (bottom row) of the donut scene]

The NGP renders also have some strange noise artefacts (to the right of the donut, bottom row), although I believe this is due to the Vertical camera sensor fit used in Blender; I have noticed such issues before, exclusively when using a Vertical camera sensor fit.

I have also noticed that the cameras extracted from a base_cam.json file are interpolated from the camera path provided in the file. In other words, if my base_cam.json file only has three camera poses, I can still render as many images as I want: NGP will simply interpolate the camera poses between the three provided ones. Because of that, the camera pose at any given frame in my base_cam.json file might not perfectly match the interpolated camera pose at that same frame. This is quite useful for rendering videos, but not for comparing two frames pixel by pixel.
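
To illustrate why the rendered poses generally drift from the keyframes, here is a simplified toy sketch (plain linear interpolation of positions only; NGP's actual interpolation is smoother, but the principle is the same):

import numpy as np

# 3 keyframe positions, rendered as 240 frames (e.g. 10 seconds at 24 fps)
keyframes = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [1.0, 1.0, 0.0]])

n_frames = 240
t = np.linspace(0.0, 1.0, n_frames) * (len(keyframes) - 1)
seg = np.minimum(t.astype(int), len(keyframes) - 2)  # keyframe segment per frame
u = (t - seg)[:, None]                               # position within the segment
positions = (1 - u) * keyframes[seg] + u * keyframes[seg + 1]

# Only the first and last of the 240 interpolated positions coincide exactly
# with a keyframe, which is why frame-by-frame pixel comparisons do not line up.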


You can replicate my results by copying the code below into a text editor in Blender. The coordinate system mapping is performed in the get_ngp_camera_path function. By executing the code, you will save a base_cam.json file for the current active camera, at the location where your .blend file is stored. You can change that location manually by changing the OUTPUT_PATH string.

Once the rotation issue is fixed, I can integrate the code into BlenderNeRF. In the meantime, feel free to play with this script. I've been testing and debugging everything in Colab, which is not very convenient, and having no access to the NGP GUI is not helpful either; any help is therefore welcome! :)

# Copyright (c) 2023 Maxime Raafat

import os
import math
import json
import numpy as np
import mathutils
import bpy


# CHANGE OUTPUT PATH
OUTPUT_PATH = os.path.dirname(bpy.data.filepath) # save to where .blend file is located (cannot be /tmp/)

# save dictionary
def save_json(directory, filename, data, indent=4):
    filepath = os.path.join(directory, filename)
    with open(filepath, 'w') as file:
        json.dump(data, file, indent=indent)

# ngp camera test dictionary
def get_ngp_camera_path(scene, camera):
    cam_matrix = np.array(camera.matrix_world)

    cam_matrix[:,1] *= -1 # flip y axis
    cam_matrix[:,2] *= -1 # flip z axis
    cam_matrix[:3,3] *= 0.33 # scale
    cam_matrix[:3,3] += [0.5, 0.5, 0.5] # offset

    cam_matrix = cam_matrix[[1, 2, 0, 3], :] # swap axis
    
    translation, rotation, _ = mathutils.Matrix(cam_matrix).decompose()        

    frame_data = {
        'R': list(rotation),
        'T': list(translation),
        'aperture_size': 0.0,
        'fov': math.degrees(camera.data.angle_x),
        'glow_mode': 0.0,
        'glow_y_cutoff': 0.0,
        'scale': 1.0,
        'slice': 0.0
    }

    return frame_data


# scene and camera
scene = bpy.context.scene
camera = scene.camera

# instantiate ngp camera dictionary
ngp_cam_data = {'loop': False, 'time': 1.0}
ngp_cam_path = []

initFrame = scene.frame_current

# iterate over frames and append to ngp camera path
for frame in range(scene.frame_start, scene.frame_end + 1, scene.frame_step): # + 1 so the last frame is included
    scene.frame_set(frame)
    ngp_cam_path.append(get_ngp_camera_path(scene, camera))

scene.frame_set(initFrame) # set back to initial frame

ngp_cam_data['path'] = ngp_cam_path

# save ngp camera
save_json(OUTPUT_PATH, 'base_cam.json', ngp_cam_data)

@maximeraafat Is it possible that the NeRF was just rotated somewhere along the way, when the scene was turned into a dataset and that dataset was then trained into a NeRF?

@maximeraafat wow, that's super close! The path start and cam direction are correct, but as you mentioned, each cam looks rotated 180° on the Z axis; I'm not sure what the correction for this would be.

[images: the imported base_cam.json camera path compared with the scene cameras]
Also for reference, this is what I had set the mesh to out of INGP to line things up on this dataset; does your transforms exporter do any other scaling to fit the scene?
[image: mesh transform settings used in instant-ngp]

@Ender436, on the BlenderNeRF side I'm very confident that the coordinates are not rotated, but NGP does some coordinate conversion, and it might be that I missed a part of it. My comment above already discusses all my findings on this conversion; I'll have another look.

@henrypearce4D thanks for experimenting with it! I'll play a bit more with coordinate system rotations asap, and will keep you posted on my progress. I'm sure there must be a way to fix that final rotation around the z axis