Questions on training
bharadwajdhornala opened this issue · 1 comment
Hi @czq142857, I have tried the code and it works fine.
1.) The code in IMSVR uses the pre-trained IMAE checkpoints during IMSVR training. Can I get a more detailed explanation of how those checkpoints are used?
2.) In IMSVR\data, we have HDF5 files for train, test, only_train, and only_train_z.
03001627_hsp_vox_train.hdf5 and 03001627_hsp_vox_test.hdf5 contain
['pixels', 'points_16', 'points_32', 'points_64', 'values_16', 'values_32', 'values_64']. Can you explain how these files are made?
Thanks!!
This repo is kinda out of date.
You can jump to the following repos for more up-to-date networks and results.
https://github.com/czq142857/IM-NET
https://github.com/czq142857/IM-NET-pytorch
To answer your questions:
- As explained in the paper: "In our experiments, we adopted a more radical approach by only training the ResNET encoder to minimize the mean squared loss between the predicted feature vectors and the ground truth. This performed better than training the image-to-shape translator directly, since one shape can have many different views, leading to ambiguity. Pre-trained decoders provide strong priors that can not only reduce such ambiguity, but also shorten training time, since the decoder was trained on unambiguous data in the autoencoder phase and encoder training was independently from the decoder in SVR phase." (A rough sketch of this setup is at the end of this comment.)
- Please visit the point_sampling folder for how to sample points from voxels and make those hdf5 files. Here is what those files mean in the data folder (a quick way to inspect them is also sketched at the end of this comment):
*_hsp_vox_only_train.hdf5 -> contains sampled points for shapes in the training set (useless, can use *_hsp_vox_train.hdf5 instead)
*_hsp_vox_only_train_z.hdf5 -> contains latent codes for shapes in the training set
*_hsp_vox_train.hdf5 -> contains sampled points and rendered views for shapes in the training set
*_hsp_vox_test.hdf5 -> contains sampled points and rendered views for shapes in the testing set
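
For the first point, here is a minimal sketch of the encoder-regression idea, assuming a PyTorch-style loop; the latent size, image size, and the use of torchvision's resnet18 are placeholders, not the repo's exact code. The ground-truth codes would be the ones precomputed by the pre-trained autoencoder (the *_only_train_z.hdf5 file).

```python
# Rough sketch (not the repo's code): regress an image encoder onto latent
# codes produced by the pre-trained autoencoder; the implicit decoder is frozen.
import torch
import torch.nn as nn
import torchvision

z_dim = 256  # latent size of the pre-trained decoder (placeholder value)

encoder = torchvision.models.resnet18(num_classes=z_dim)  # image -> z regressor
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
mse = nn.MSELoss()

# Dummy batch standing in for (rendered view, ground-truth code) pairs;
# in IMSVR the codes are precomputed and stored in *_only_train_z.hdf5.
views = torch.randn(8, 3, 128, 128)
gt_z = torch.randn(8, z_dim)

pred_z = encoder(views)      # predict the shape's latent code from a single view
loss = mse(pred_z, gt_z)     # mean squared loss against the autoencoder's code
optimizer.zero_grad()
loss.backward()
optimizer.step()
# At test time, pred_z is fed to the frozen implicit decoder to recover the shape.
```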
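
And for the data files, a quick way to peek inside one of them with h5py; the dataset names are the ones you listed, the exact shapes depend on the category, and the comments on what they hold follow the descriptions above (points sampled at increasing voxel resolutions with their inside/outside values).

```python
# Print the datasets stored in one of the training files.
import h5py

with h5py.File("03001627_hsp_vox_train.hdf5", "r") as f:
    for name in f.keys():
        print(name, f[name].shape, f[name].dtype)
    # 'pixels'            -> rendered views for each shape
    # 'points_16/32/64'   -> point coordinates sampled around each shape at 16^3 / 32^3 / 64^3 resolution
    # 'values_16/32/64'   -> inside/outside value for each sampled point
```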