doubleZ0108/GeoMVSNet

revisiting closed inquiry with additional questions…

vietpho opened this issue · 1 comment

Hello, sorry, it's me again.
You closed my previous issue, so I'm opening a new one. I have a few more questions that I haven't received answers to.

I noticed in the documentation that the model was fine-tuned on the BlendedMVS dataset for the T&T benchmark. Would you be willing to share the fine-tuned pretrained model? I'm eager to test its performance on a room-scale scene as soon as possible. If sharing the model isn't possible, could you at least tell me how long the fine-tuning on BlendedMVS took?

Separately, I ran your code on a room-scale dataset, and the resulting .ply file exceeded 10GB (about 400 million points...). I wanted to confirm whether such a file size is typical at this scale; writing the .ply also took quite a long time. Compared with the point cloud files you've provided, which are about 500MB at object scale, a 10GB file for a room-scale scene seems plausible to me, but I would like to make sure this is expected and that nothing is wrong in my process.
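As a sanity check on my side, a rough back-of-envelope estimate, assuming each vertex in the binary .ply stores float32 XYZ, float32 normals, and uint8 RGB (my guess at the layout, not something confirmed from the code), lands close to 10GB for 400 million points:

```python
# Back-of-envelope size check for a binary PLY.
# The per-vertex layout below is an assumption; adjust it to match
# the actual header of the fused point cloud.
num_points = 400_000_000

bytes_per_point = (
    3 * 4    # x, y, z as float32
    + 3 * 4  # nx, ny, nz as float32 (only if normals are written)
    + 3 * 1  # r, g, b as uint8
)

size_gb = num_points * bytes_per_point / 1024**3
print(f"~{size_gb:.1f} GB")  # ~10.1 GB, so a 10GB file is plausible at this point count
```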

  1. I don't have a fine-tuned pre-trained model on hand.
  2. Fine-tuning takes ~10 hours on 4 V100 cards.
  3. 10GB is reasonable, but you should run the point cloud filtering post-process yourself.
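
A minimal sketch of such a filtering post-process, using Open3D's voxel downsampling and statistical outlier removal (the file paths, voxel size, and outlier parameters below are illustrative assumptions, not values from the authors):

```python
import open3d as o3d

# Load the fused point cloud (path is illustrative).
pcd = o3d.io.read_point_cloud("outputs/tanksandtemples/room_scale.ply")

# Voxel downsampling thins the ~400M points to a manageable density;
# pick the voxel size according to the scene scale and desired resolution.
pcd = pcd.voxel_down_sample(voxel_size=0.01)

# Statistical outlier removal drops isolated, noisy points.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("outputs/tanksandtemples/room_scale_filtered.ply", pcd)
print(f"{len(pcd.points)} points after filtering")
```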