gaochen315/DynamicNeRF

run on my own data

Opened this issue · 2 comments

Hello, I tried your great work on my own video recorded with a phone, but the following COLMAP command failed:
colmap mapper \
    --database_path $DATASET_PATH/database.db \
    --image_path $DATASET_PATH/images_colmap \
    --output_path $DATASET_PATH/sparse \
    --Mapper.num_threads 16 \
    --Mapper.init_min_tri_angle 4 \
    --Mapper.multiple_models 0 \
    --Mapper.extract_colors 0

Firstly, I have some difficulty understanding what Mapper.init_min_tri_angle, Mapper.multiple_models, and Mapper.extract_colors mean. Why did you alter these parameters rather than use the defaults?
Secondly, when I removed the last four parameters (Mapper.num_threads, Mapper.init_min_tri_angle, Mapper.multiple_models, Mapper.extract_colors), things went well and I successfully got my cameras.bin. I wonder how those parameters affect the result, and I'm worried that without them the extrinsic camera poses I get may not be accurate.

Hoping for your reply, thanks!

Hi @zhywanna,

Sorry for my late reply. I adapted these parameters from LLFF. I am not sure about num_threads, init_min_tri_angle, and extract_colors, but I think it is quite important to set multiple_models to 0. Can you use COLMAP to visualize the sparse reconstruction? Does it make sense?
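Besides the GUI, one way to sanity-check the extrinsics is to convert the binary model to text (e.g. with `colmap model_converter --output_type TXT`) and parse `images.txt`. Below is a minimal sketch, assuming COLMAP's documented text layout (per image, one pose line — image ID, world-to-camera quaternion qw qx qy qz, translation tx ty tz, camera ID, file name — followed by one line of 2D points); the sample data at the bottom is made up for illustration:

```python
def quat_to_rot(qw, qx, qy, qz):
    """Rotation matrix from a unit quaternion (COLMAP stores world-to-camera)."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def camera_centers(images_txt):
    """Map image name -> camera center in world coordinates (C = -R^T t)."""
    centers = {}
    lines = [l for l in images_txt.splitlines()
             if l.strip() and not l.startswith('#')]
    for pose_line in lines[::2]:  # every second line is the 2D-point list
        p = pose_line.split()
        qw, qx, qy, qz, tx, ty, tz = map(float, p[1:8])
        R = quat_to_rot(qw, qx, qy, qz)
        # (R^T t)_i = R[0][i]*tx + R[1][i]*ty + R[2][i]*tz
        centers[p[9]] = tuple(-(R[0][i]*tx + R[1][i]*ty + R[2][i]*tz)
                              for i in range(3))
    return centers

# Hypothetical two-line entry: identity rotation, t = (0, 0, 2),
# so the camera center should come out at (0, 0, -2).
sample = "# comment\n1 1 0 0 0 0 0 2 1 frame0001.jpg\n100.0 200.0 -1\n"
print(camera_centers(sample))
```

Camera centers that vary smoothly across consecutive frames are a good sign; large jumps or collapsed clusters usually mean the reconstruction failed.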

The COLMAP reconstruction result and the generated video look really poor. It seems difficult to get accurate poses from a scene with moving objects. I changed my experiment instead :(