The pre-trained models and data provided are not sufficient to perform tests on the blender dataset
chobao opened this issue · 3 comments
I want to render the albedo and relighting results with pre-trained NeRFactor on the blender dataset, without further training. However, I find that the pre-trained models and data provided are not sufficient to run tests with `test.py` on the blender dataset. It requires `shape_ckpt`, `brdf_ckpt`, and processed data (`lvis.npy`, `xyz.npy`, `alpha.png`, `normal.npy`) for each view, none of which are provided.
So, does this mean I still need to do the Data Preparation step and train the shape model myself? Are the provided pre-trained models useless?
The processed data are too large to be released, unfortunately, but your comment is fair; I can try releasing the shape and BRDF checkpoints, with which you will be able to generate `lvis.npy`, `xyz.npy`, `alpha.png`, and `normal.npy`. Will that help?
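For reference, those processed buffers are plain per-view NumPy arrays plus a PNG mask. A minimal loading sketch (the shapes are assumptions based on the paper, and `load_view_buffers` is a hypothetical helper, not part of this repo):

```python
import os

import numpy as np
from PIL import Image

def load_view_buffers(view_dir):
    """Hypothetical helper: load one view's processed buffers.

    Assumed shapes (from the paper, not verified against the repo):
    xyz and normal are HxWx3, lvis is HxWxL (one visibility value
    per light location), and alpha is an HxW foreground mask.
    """
    xyz = np.load(os.path.join(view_dir, 'xyz.npy'))        # surface points
    normal = np.load(os.path.join(view_dir, 'normal.npy'))  # surface normals
    lvis = np.load(os.path.join(view_dir, 'lvis.npy'))      # light visibility
    alpha = np.asarray(
        Image.open(os.path.join(view_dir, 'alpha.png')),
        dtype=np.float32) / 255.0                           # mask in [0, 1]
    return xyz, normal, lvis, alpha
```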
That would be ideal. And I still have a question: if I want to run `test.py` and render albedo with the pre-trained NeRFactor (including `shape_ckpt` and `brdf_ckpt`), is only `xyz.npy` required? That is, `normal.npy`, `lvis.npy`, and `alpha.png` do not need to be prepared in advance and should be predicted by the normal MLP and visibility MLP at test time?
Yes, because unless you opt to take the NeRF shape as is (no further optimization on the geometry), normals and light visibility will be predicted by the trained model. Here's the line where the model predicts normals from `xyz`:
`nerfactor/nerfactor/models/nerfactor.py`, line 207 (commit `19651eb`)
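For intuition, that prediction is an MLP mapping each surface point to a unit normal. A minimal TensorFlow sketch (layer widths are placeholders, and the positional encoding the real model applies to `xyz` is omitted; see the file referenced above for the actual architecture):

```python
import tensorflow as tf

# Hypothetical stand-in for NeRFactor's normal MLP: maps each 3D
# surface point to an unnormalized 3-vector, then normalizes it.
# Layer widths are placeholders, not the repo's actual config.
normal_mlp = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(3),  # raw (unnormalized) normal per point
])

xyz = tf.random.uniform((1024, 3))  # surface points, e.g. from xyz.npy
raw = normal_mlp(xyz)
normal_pred = tf.linalg.l2_normalize(raw, axis=-1)  # unit-length normals
```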