On PyTorch Version & Training Time
Closed this issue · 1 comment
II-Matto commented
Thanks for this great work! I have two quick questions:
- Will PyTorch 1.4.0 be OK for running this code? I notice that the recommended version is 1.1.0.
- How long will it take to train JDACS (w/o MS)? I notice that the README says training JDACS-MS can take several days with 4 GPUs. Is training JDACS less time-consuming?
BTW, is there any Python implementation of the evaluation code, which is currently implemented with Matlab?
Many thanks.
ToughStoneX commented
Hello,
- It is fine to run the code with any PyTorch version newer than 1.1.0. Version 1.1.0 is recommended simply because it matches the environment on my server.
- As I remember, training JDACS with 4 GPUs takes about half a day on my server, whereas JDACS-MS requires several days on 4 GPUs. The difference comes from the backbones: JDACS uses MVSNet, while JDACS-MS uses CVP-MVSNet, and the training time depends on the backbone.
- For evaluation, you can directly run `test.sh` in JDACS-MS and `eval_dense.sh` in JDACS. These scripts will generate the 3D models in `.ply` format. The provided Matlab code is from the DTU benchmark and is used to assess performance following their official protocol.
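Regarding a Python alternative to the Matlab evaluation: the core metrics the DTU benchmark reports are accuracy (mean distance from the reconstructed cloud to the ground truth) and completeness (mean distance the other way). A minimal sketch of that idea in Python is shown below; note this is only an illustration with nearest-neighbor queries, not a drop-in replacement for the official code, which additionally applies observation masks, distance thresholds, and point downsampling.

```python
# Hypothetical sketch of DTU-style point-cloud metrics (not the official
# evaluation): accuracy = mean nearest-neighbor distance pred -> gt,
# completeness = mean nearest-neighbor distance gt -> pred.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(pred, gt):
    """pred, gt: (N, 3) arrays of 3D points loaded from .ply files."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # accuracy direction
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # completeness direction
    return d_pred_to_gt.mean(), d_gt_to_pred.mean()

# Toy example with two points per cloud.
pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0]])
acc, comp = accuracy_completeness(pred, gt)
```

Loading the generated `.ply` files into arrays can be done with a library such as `open3d` or `plyfile` before calling the function above.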