How can I optimize the parameters for my custom data?
My goal is to accurately reconstruct an 18650 battery from my images/video, but after several attempts the extracted mesh is not of the quality I want. I am very curious how I should train so that it comes out well.
1. Process the data and get depth and normal priors with omnidata (a rough sketch of this step is shown below, after step 3).
2. Train `neus-facto`, resuming from the previous run:

   ```bash
   ns-train neus-facto \
     --trainer.load-dir outputs/-workspace-processdata-modifyvidbat/neus-facto/2024-01-22_022432/sdfstudio_models \
     --pipeline.datamanager.train-num-rays-per-batch 1024 \
     --pipeline.model.sdf-field.bias 0.3 \
     --pipeline.model.sdf-field.use-grid-feature False \
     --pipeline.model.sdf-field.inside-outside False \
     --pipeline.model.mono-depth-loss-mult 0.5 --pipeline.model.mono-normal-loss-mult 0.05 \
     --pipeline.model.background-model grid \
     --trainer.steps_per_save 5000 --trainer.steps-per-eval-image 5000 --trainer.max-num-iterations 300000 \
     --experiment-name nomaskmodifyvidbat --vis wandb \
     sdfstudio-data --data /workspace/processdata/modifyvidbat --include_mono_prior True
   ```
   (Because of the large number of iterations, I resumed by loading the progress from the previous run.)
3. Extract the mesh:

   ```bash
   ns-extract-mesh --resolution 512 --bounding-box-min -0.6 -0.6 -0.6 --bounding-box-max 0.6 0.6 0.6 \
     --load-config outputs/nomaskmodifyvidbat/neus-facto/2024-01-22_234123/config.yml \
     --output-path meshoutput/nomask200kbat.ply
   ```
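Roughly, step 1 looked like the sketch below: extract frames and poses with nerfstudio, then convert to the SDFStudio format with omnidata depth/normal priors. The script and flag names here are from memory and may differ in your SDFStudio version (check `scripts/datasets/` in your checkout), and the input paths are placeholders.

```bash
# Step 1 (sketch, not the exact commands I ran):

# Extract frames and COLMAP camera poses from the source video (nerfstudio)
ns-process-data video --data battery.mp4 --output-dir /workspace/processdata/nerfstudio_bat

# Convert to SDFStudio format and generate monocular depth/normal priors with omnidata.
# Flag names may differ between SDFStudio versions.
python scripts/datasets/process_nerfstudio_to_sdfstudio.py \
  --data /workspace/processdata/nerfstudio_bat \
  --output-dir /workspace/processdata/modifyvidbat \
  --data-type colmap --scene-type object --mono-prior
```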
Here is the wandb run, which looks fine to me: https://wandb.ai/ju805604/sdfstudio/runs/hfb0p7g6?workspace=user-ju805604
Here is my processed data: https://drive.google.com/file/d/1WV9wOjWhwsOkBC1g9cNBKaaKMMaYgp5G/view?usp=drive_link
Hi, the renderings in your wandb don't look good and the RGB and depth maps are not smooth. I think the foreground region is not well optimised and is instead represented by the background NeRF, and therefore the extracted mesh is not good. You could try disabling the background model or using an MLP for the background model (see the sketch below).
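If the choices follow the flag you already used, the change would look roughly like this; "none" and "mlp" are my assumption for the accepted values, so please check `ns-train neus-facto --help` for the exact options in your version.

```bash
# Re-run the same ns-train command as before, changing only this flag
# (values assumed; verify with `ns-train neus-facto --help`):

# Option A: disable the background model so the SDF field must explain the whole scene
--pipeline.model.background-model none

# Option B: keep a background model, but use an MLP instead of the grid
--pipeline.model.background-model mlp
```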
Thanks! Would it be better to change my custom dataset?
And I'm wondering how to set the right parameters or options, such as --pipeline.model.mono-depth-loss-mult 0.5 or --pipeline.model.sdf-field.bias 0.3, for my own dataset, even if it's a different one than this.
@niujinshuchong Thanks a lot :)