IVRL/VolRecon

Question about Fine-tuning

anonymous-pusher opened this issue · 3 comments

Hello, thank you for sharing the code of your great work.
I was wondering if it's possible to fine-tune the pretrained model on a new scene rather than only running inference, similar to how it is done in SparseNeuS. The paper does not mention this, so I assumed it is not possible, but I might be wrong.

Also, I was trying to run the model on a scene from BlendedMVS, but I could not get any meaningful reconstruction, as seen here:
[image: failed mesh reconstruction]

When looking at the rendered depth maps, I do get something plausible, but the mesh generation does not give a good result:
[image: rendered depth maps]
I tried tuning the value of self.offset_dist, but still with no success. Do you have an idea of what could be wrong here?
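
For reference, this is roughly how I would sanity-check the meshing step by fusing the rendered depth maps directly with TSDF fusion in Open3D. This is a minimal sketch, assuming per-view color images, depth maps in world units, intrinsics, and world-to-camera extrinsics are already available; the variable names and units are my assumptions, not the repository's actual interface:

```python
# Minimal TSDF-fusion sanity check for rendered depth maps (sketch).
# Assumes: depths are float32 in world units, extrinsics are 4x4
# world-to-camera matrices, and colors are (H, W, 3) uint8 arrays.
import numpy as np
import open3d as o3d

def fuse_depths(colors, depths, intrinsics, extrinsics, voxel_length=0.005):
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel_length,
        sdf_trunc=4 * voxel_length,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
    )
    for color, depth, K, E in zip(colors, depths, intrinsics, extrinsics):
        H, W = depth.shape
        intr = o3d.camera.PinholeCameraIntrinsic(
            W, H, K[0, 0], K[1, 1], K[0, 2], K[1, 2])
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(np.ascontiguousarray(color)),
            o3d.geometry.Image(depth.astype(np.float32)),
            depth_scale=1.0,    # depths assumed to be in world units already
            depth_trunc=10.0,   # tune to the scene's depth range
            convert_rgb_to_intensity=False,
        )
        volume.integrate(rgbd, intr, E)
    return volume.extract_triangle_mesh()
```

If the fused mesh also looks wrong, the problem is likely in the depth maps or camera conventions (scale, world-to-camera vs. camera-to-world) rather than in the meshing step itself.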

Thanks

I encountered the same problem. Any solutions?

Hi, sorry for the late reply. For fine-tuning, we tried fine-tuning with the rendering loss before, but it did not improve performance, so fine-tuning needs stronger geometric supervision, such as the patch warping used in NeuralWarp. However, when we tried fine-tuning SparseNeuS, we found that it could be unstable and sometimes fail (this has also been reported by others: xxlong0/SparseNeuS#17). Therefore, we did not fine-tune.
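
For concreteness, the kind of geometric supervision meant here looks roughly like the following per-pixel photometric reprojection loss. This is a minimal, self-contained sketch, not NeuralWarp's actual patch warping (which compares whole patches warped by plane-induced homographies); all shapes and camera conventions below are assumptions:

```python
# Sketch of a per-pixel photometric reprojection loss, a simplified
# stand-in for NeuralWarp-style patch warping. Conventions assumed:
# images are (3, H, W) floats in [0, 1], and (R, t) maps points from
# the reference camera frame to the source camera frame.
import torch
import torch.nn.functional as F

def reprojection_loss(img_ref, img_src, depth_ref, K_ref, K_src, R, t):
    _, H, W = img_ref.shape
    dev = img_ref.device

    # Homogeneous pixel grid of the reference view.
    v, u = torch.meshgrid(
        torch.arange(H, device=dev, dtype=torch.float32),
        torch.arange(W, device=dev, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)]).reshape(3, -1)

    # Back-project with the predicted depth, then move to the source frame.
    pts_ref = torch.linalg.inv(K_ref) @ pix * depth_ref.reshape(1, -1)
    pts_src = R @ pts_ref + t.reshape(3, 1)

    # Project into the source image; keep only points in front of the camera.
    proj = K_src @ pts_src
    xy = proj[:2] / proj[2].clamp(min=1e-6)
    grid = torch.stack([2 * xy[0] / (W - 1) - 1,
                        2 * xy[1] / (H - 1) - 1], dim=-1).reshape(1, H, W, 2)
    valid = ((grid.abs() <= 1).all(-1)
             & (pts_src[2] > 0).reshape(1, H, W)).float()

    # Bilinearly sample the source image at the reprojected locations.
    warped = F.grid_sample(img_src[None], grid, align_corners=True,
                           padding_mode="border")[0]
    return ((warped - img_ref).abs() * valid).sum() / (3 * valid.sum()).clamp(min=1.0)
```

In practice this would be added on top of the rendering loss, and occluded pixels would need masking, which this sketch does not handle.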

I assume your question is answered; feel free to reopen it :)