wanmeihuali/taichi_3d_gaussian_splatting

how to match quality of original implementation?

tsaizhenling opened this issue · 1 comment

I have been playing with the truck sample in both this repository and in graphdeco-inria/gaussian-splatting.
[Image: truck rendering from this repo]
[Image: truck rendering from the reference, trained with default parameters]

I can't seem to replicate the same reconstruction quality with this repository (note that the fence is not rendered clearly). I have tried to match the learning rates by making the following changes to the config. What is causing the difference?

--- a/config/tat_truck_every_8_test.yaml
+++ b/config/tat_truck_every_8_test.yaml
@@ -31,8 +31,8 @@ print-metrics-to-console: False
 enable_taichi_kernel_profiler: False
 log_taichi_kernel_profile_interval: 3000
 log_validation_image: False
-feature_learning_rate: 0.005
-position_learning_rate: 0.00005
+feature_learning_rate: 0.0025
+position_learning_rate: 0.00016
 position_learning_rate_decay_rate: 0.9947
 position_learning_rate_decay_interval: 100
 loss-function-config:
@@ -45,8 +45,11 @@ rasterisation-config:
   depth-to-sort-key-scale: 10.0
   far-plane: 2000.0
   near-plane: 0.4
+  grad_s_factor: 2
+  grad_q_factor: 0.4
+  grad_alpha_factor: 20
 summary-writer-log-dir: logs/tat_truck_every_8_experiment
-output-model-dir: logs/tat_truck_every_8_experiment
+output-model-dir: logs/tat_truck_every_8_experiment_matched_lr
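
Even after matching position_learning_rate to 0.00016, the decay schedules still differ. Below is a rough sketch, not code from either repo, comparing the final position learning rate under this config's step decay with the reference implementation's schedule; I am assuming the reference decays the position LR log-linearly from 1.6e-4 to 1.6e-6 over 30k iterations (its defaults, as far as I remember), so treat those numbers as an assumption.

```python
# Rough comparison of the position learning rate reached after 30k iterations.
# Assumption (not from this issue): the reference implementation interpolates
# the position LR log-linearly from 1.6e-4 down to 1.6e-6 over 30_000 steps.
import math

iterations = 30_000

# This repo, with the values from the config above: multiplicative step decay.
lr_init = 0.00016
decay_rate = 0.9947        # position_learning_rate_decay_rate
decay_interval = 100       # position_learning_rate_decay_interval
lr_taichi = lr_init * decay_rate ** (iterations / decay_interval)

# Reference implementation (assumed defaults): log-linear decay to a final LR.
ref_init, ref_final = 1.6e-4, 1.6e-6
t = 1.0                    # fraction of training completed
lr_ref = math.exp((1 - t) * math.log(ref_init) + t * math.log(ref_final))

print(f"this repo, final position LR ~ {lr_taichi:.1e}")  # ~3.3e-05
print(f"reference, final position LR ~ {lr_ref:.1e}")     # 1.6e-06
```

If that assumption holds, this config still ends training with a position learning rate roughly 20x higher than the reference, which may also affect the final quality.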
jb-ye commented

Based on my experience, this repo's implementation differs quite a bit from the official repo, and so does its rendering performance. One notable difference is that its Gaussian densification strategy is much more conservative than the official repo's. Directly matching the parameters won't lead to the same performance. That said, I have found cases where Taichi GS performs better than the official implementation. It really is case by case.
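
For context, here is a minimal sketch, not actual code from either repo, of the gradient-driven clone/split rule that the official 3D Gaussian Splatting implementation applies periodically during training; names and thresholds are illustrative.

```python
# Illustrative densification step: Gaussians whose accumulated view-space
# position gradient is large are cloned (if small) or split (if large).
# This is a sketch of the idea only, not either repo's implementation.
import torch

def densify_sketch(positions, scales, grad_accum, grad_count,
                   grad_threshold=0.0002, scale_threshold=0.01):
    avg_grad = grad_accum / grad_count.clamp(min=1)       # mean gradient norm per Gaussian
    needs_densify = avg_grad > grad_threshold             # high reconstruction error

    is_small = scales.max(dim=-1).values <= scale_threshold
    clone_mask = needs_densify & is_small                 # under-reconstruction: duplicate in place
    split_mask = needs_densify & ~is_small                # over-reconstruction: split into children

    cloned = positions[clone_mask]
    parents = positions[split_mask]
    children = parents + torch.randn_like(parents) * scales[split_mask]

    # The real implementations also shrink the children's scales, prune
    # low-opacity Gaussians, and reset the gradient accumulators.
    return torch.cat([positions, cloned, children], dim=0)
```

A more conservative version of this step (a higher gradient threshold, a longer densification interval, or earlier stopping) creates fewer Gaussians, which is consistent with thin structures like the fence not being recovered.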