bennyguo/instant-nsr-pl

How to evaluate the Chamfer Distance of the DTU dataset?

Ice-Tear opened this issue · 8 comments

Hi, thanks for your great work!
I'm trying to implement the exponentially decreasing eps and curvature-loss weight (from Neuralangelo) in your code. It seems to work!
[images: dtu24_mesh, dtu24_total, dtu24_psnr]

But I got a value of 7.6 when evaluating the Chamfer Distance with monosdf's evaluate_single_scene.py :(
[image: dtu24_eval]

I don't think a mesh of this quality should have such a high CD value, so I would like to ask: how do you evaluate the Chamfer Distance on the DTU dataset?
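For reference, the metric itself is simple: the symmetric Chamfer distance is the mean nearest-neighbor distance from each cloud to the other. Below is a minimal brute-force sketch in NumPy; it is illustrative only and not the official DTU protocol, which additionally masks points outside the observation volume and uses a KD-tree for speed.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds.

    Brute-force O(N*M) pairwise distances; fine for small toy clouds.
    """
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # Mean nearest-neighbor distance in both directions
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy usage: identical clouds have zero distance
pts = np.random.default_rng(0).random((100, 3))
print(chamfer_distance(pts, pts))  # → 0.0
```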


Could you please share the code? My results on dtu_24 are poor. Did you change the curvature-loss warm-up steps?

@Ice-Tear Could you check whether the extracted mesh and the ground-truth mesh have the same orientation? You may need to rotate the extracted mesh to align it with the ground-truth one.
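One quick orientation sanity check (a hedged sketch, assuming both meshes are loaded as vertex arrays, e.g. via trimesh) is to compare axis-aligned bounding-box extents before and after a candidate rotation. The 90-degree rotation below is a made-up example of a common y-up vs. z-up mismatch:

```python
import numpy as np

def bbox(vertices: np.ndarray) -> np.ndarray:
    """Axis-aligned bounding box, shape (2, 3): [min; max]."""
    return np.stack([vertices.min(axis=0), vertices.max(axis=0)])

# Hypothetical 90-degree rotation about the x-axis (y-up <-> z-up)
R_x90 = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0]])

verts = np.random.default_rng(1).random((50, 3))  # stand-in for loaded mesh vertices
rotated = verts @ R_x90.T                         # rotate every vertex

print(bbox(verts))
print(bbox(rotated))  # the y and z extents swap if the conventions differ
```

If the rotated bounding box matches the ground-truth one much better than the original, the meshes are in different orientations.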

@adkAurora Yes, I changed systems/neus.py. I added some code between self.log('train/loss_curvature', loss_curvature) and
loss += loss_curvature * self.C(self.config.system.loss.lambda_curvature).
Here is the code.

    # warm up the weight of the curvature loss
    if self.global_step < self.config.system.loss.curvature_warmup_steps:
        curvature_weight = (self.global_step + 1) * self.C(self.config.system.loss.lambda_curvature) / self.config.system.loss.curvature_warmup_steps
    else:
        # decay the weight of the curvature loss as the hash grid grows
        growth_ratio = self.config.model.geometry.xyz_encoding_config.per_level_scale
        growth_steps = self.config.model.geometry.xyz_encoding_config.update_steps
        decay = np.exp(-(self.global_step - self.config.system.loss.curvature_warmup_steps) * np.log(growth_ratio) / growth_steps)
        curvature_weight = decay * self.C(self.config.system.loss.lambda_curvature)
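The warm-up/decay snippet above can be written as a standalone function to see the schedule's shape. This is just a sketch with made-up default values standing in for the config fields (lambda_curvature, curvature_warmup_steps, per_level_scale, update_steps):

```python
import numpy as np

def curvature_weight(step: int,
                     lambda_curvature: float = 5e-4,  # base weight (assumed)
                     warmup_steps: int = 5000,        # linear warm-up length
                     growth_ratio: float = 1.3195,    # hash-grid per_level_scale
                     growth_steps: int = 5000) -> float:
    """Linear warm-up, then exponential decay matched to hash-grid growth."""
    if step < warmup_steps:
        # Linearly ramp from ~0 up to the full weight
        return (step + 1) * lambda_curvature / warmup_steps
    # After warm-up, shrink by a factor of growth_ratio every growth_steps
    return float(lambda_curvature * np.exp(
        -(step - warmup_steps) * np.log(growth_ratio) / growth_steps))

# At the end of warm-up the weight equals lambda_curvature; one hash-grid
# growth period later it has decayed by a factor of growth_ratio.
print(curvature_weight(5000), curvature_weight(10000))
```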

And I changed the config file.
[image: config]

@bennyguo Thank you for your reply! I just started learning 3D reconstruction and don't know how to check the orientation yet. When I open both meshes in MeshLab, their orientations look the same. How do you usually evaluate the Chamfer Distance on DTU?
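A note on a likely cause of the inflated CD: instant-nsr-pl extracts the mesh in the normalized unit-sphere frame, while the DTU ground-truth points are in the scanner's world frame (millimeters). The NeuS-style cameras_sphere.npz stores a scale_mat per view that maps normalized coordinates back to world coordinates, so the mesh vertices must be transformed before evaluation. A sketch with a synthetic 4x4 matrix standing in for the real scale_mat_0:

```python
import numpy as np

# Synthetic stand-in for scale_mat_0 from cameras_sphere.npz:
# uniform scale s and translation t, mapping unit-sphere coords -> world (mm)
s, t = 90.0, np.array([10.0, -5.0, 40.0])
scale_mat = np.eye(4)
scale_mat[:3, :3] *= s
scale_mat[:3, 3] = t

# Stand-in for the extracted mesh vertices, in the normalized frame
verts = np.random.default_rng(0).uniform(-1, 1, (100, 3))

# Apply the homogeneous transform to every vertex: world = s * v + t
verts_h = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)
verts_world = (verts_h @ scale_mat.T)[:, :3]
```

With a real run, scale_mat_0 would come from np.load on the scene's cameras_sphere.npz; evaluating the untransformed mesh against the millimeter-scale ground truth would produce exactly this kind of large CD even for a good reconstruction.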


Thank you for your code! Could you provide your configuration file? I tried it myself and did not get results as good as yours.

@yizhidecainiao Here is the config.
name: neuralangelo-dtu-wmask-${basename:${dataset.root_dir}}
tag: ""
seed: 42

dataset:
  name: dtu
  root_dir: ./load/DTU-neus/dtu_scan24
  cameras_file: cameras_sphere.npz
  img_downscale: 2 # specify training image size by either img_wh or img_downscale
  n_test_traj_steps: 60
  apply_mask: true

model:
  name: neus
  radius: 1.0
  num_samples_per_ray: 1024
  train_num_rays: 512
  max_train_num_rays: 2048
  grid_prune: false
  grid_prune_occ_thre: 0.001
  dynamic_ray_sampling: true
  batch_image_sampling: true
  randomized: true
  ray_chunk: 2048
  cos_anneal_end: 500000
  learned_background: false
  background_color: white
  variance:
    init_val: 0.3
    modulate: false
  geometry:
    name: volume-sdf
    radius: 1.0
    feature_dim: 13
    grad_type: finite_difference
    finite_difference_eps: progressive
    isosurface:
      method: mc
      resolution: 512
      chunk: 2097152
      threshold: 0.
    xyz_encoding_config:
      otype: ProgressiveBandHashGrid
      n_levels: 16
      n_features_per_level: 8
      log2_hashmap_size: 22
      base_resolution: 32
      per_level_scale: 1.3195079565048218
      include_xyz: true
      start_level: 4
      start_step: 0
      update_steps: 5000
      finest_size_ratio: 1.
    mlp_network_config:
      otype: VanillaMLP
      activation: ReLU
      output_activation: none
      n_neurons: 64
      n_hidden_layers: 1
      sphere_init: true
      sphere_init_radius: 0.5
      weight_norm: true
  texture:
    name: volume-radiance
    input_feature_dim: ${add:${model.geometry.feature_dim},3} # surface normal as additional input
    dir_encoding_config:
      otype: SphericalHarmonics
      degree: 4
    mlp_network_config:
      otype: VanillaMLP
      activation: ReLU
      output_activation: none
      n_neurons: 64
      n_hidden_layers: 4
    color_activation: sigmoid

system:
  name: neus-system
  loss:
    lambda_rgb_mse: 0.
    lambda_rgb_l1: 1.
    lambda_mask: 0.1
    lambda_eikonal: 0.1
    lambda_curvature: 0.0005
    curvature_warmup_steps: 5000
    lambda_sparsity: 0.0
    lambda_distortion: 0.0
    lambda_distortion_bg: 0.0
    lambda_opaque: 0.0
    sparsity_scale: 1.
  optimizer:
    name: AdamW
    args:
      lr: 0.01
      betas:
        - 0.9
        - 0.99
      eps: 1.e-15
    params:
      geometry:
        lr: 0.01
      texture:
        lr: 0.01
      variance:
        lr: 0.001
  constant_steps: 5000
  scheduler:
    name: SequentialLR
    interval: step
    milestones:
      - 5000
    schedulers:
      - name: ConstantLR
        args:
          factor: 1.0
          total_iters: 5000
      - name: ExponentialLR
        args:
          gamma: 0.9999953483237626

checkpoint:
  save_top_k: -1
  every_n_train_steps: 250000

export:
  chunk_size: 2097152
  export_vertex_color: True

trainer:
  max_steps: 500000
  log_every_n_steps: 100
  num_sanity_val_steps: 0
  val_check_interval: 5000
  limit_train_batches: 1.0
  limit_val_batches: 2
  enable_progress_bar: true
  precision: 16


Thanks for your reply!


Hi! I ran into the same problem. Did you find the correct orientation?