zgojcic/3DSmoothNet

Information about training dataset creation

lombardm opened this issue · 2 comments

Good morning,
I'm studying your work and find it very interesting. If you don't mind, I would like to ask you about a detail of your training setup. I read in another issue here on your GitHub that you trained your model for 472000 iterations, using a batch size of 256 for 20 epochs.

  • First of all, can you please confirm that by "batch size 256" you mean 256 pairs of 4096-dim descriptors (computed via the SDV) for the anchor and positive inputs?
  • Then, every pair of point clouds should have 300 descriptors for the anchor point cloud and 300 for the positive one, each computed around the closest points on the target point cloud. Right? So the training doesn't need to be "point-cloud-pair-related", in the sense that the anchor/positive descriptor pairs in a batch of 256 could come from different pairs of point clouds, even from different scenes. Can you confirm my understanding, or did I get something wrong?
  • Finally, I suppose each epoch consisted of 472000/20 = 23600 iterations, which corresponds to roughly 23600 * 256 = 6041600 pairs of descriptors computed over the whole training set. I'm asking just to be sure that I got it right. :)

Thanks in advance for your patience, have a nice day.
Marco

Hello Marco,

I will go in the order of the questions:

  • Yes, exactly: the batch size is the number of anchor/positive descriptor pairs, i.e. 256. The negatives are then sampled on the fly from the remaining 255 examples in the batch (256 minus the positive example); the first sketch below illustrates this.

  • Yes, the anchors are sampled randomly and the positive example is determined with the help of the ground-truth transformation parameters (the second sketch below illustrates this correspondence step). A batch can contain training examples from different scenes (we shuffle the data).

  • I am not exactly sure how many training examples there are in the whole training dataset. You can download the data and check, but we trained for 472000 iterations, so your calculation seems to be correct.
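
For illustration, here is a minimal NumPy sketch of how such in-batch negative mining can look: for every anchor, the matching positive sits on the diagonal of the pairwise distance matrix, and the hardest negative is picked from the remaining 255 positives in the same batch. This is a simplified illustration rather than the actual loss code of this repository, and the 32-dim embedding size is only a placeholder.

```python
import numpy as np

def batch_hard_distances(anchor_emb, positive_emb):
    """For each anchor, return the distance to its matching positive and to
    the hardest (closest) non-matching positive in the same batch."""
    # Pairwise Euclidean distances between every anchor and every positive.
    diff = anchor_emb[:, None, :] - positive_emb[None, :, :]   # (B, B, D)
    dist = np.linalg.norm(diff, axis=-1)                       # (B, B)

    pos_dist = np.diag(dist)                                    # matching pairs

    # Mask the diagonal so each anchor draws its negative from the
    # other B - 1 (here 255) positives in the batch.
    masked = dist + np.eye(len(dist)) * 1e9
    hardest_neg_dist = masked.min(axis=1)

    return pos_dist, hardest_neg_dist

# Toy usage with the batch size from this thread: 256 descriptor pairs.
rng = np.random.default_rng(0)
B, D = 256, 32                      # the 32-dim embedding is an assumption
anchors = rng.normal(size=(B, D))
positives = anchors + 0.05 * rng.normal(size=(B, D))   # noisy "matches"
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)
positives /= np.linalg.norm(positives, axis=1, keepdims=True)

pos_d, neg_d = batch_hard_distances(anchors, positives)
print(pos_d.mean(), neg_d.mean())   # positives should be much closer than negatives
```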
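And a similarly simplified sketch of the correspondence step: anchors are drawn at random from the source cloud, mapped into the target frame with the ground-truth rotation and translation, and their nearest neighbors in the target cloud become the positives. The 300 anchors match the number mentioned in the question; the overlap threshold max_dist is a placeholder value, not necessarily the one used for the actual training data.

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_positive_pairs(src_pts, tgt_pts, R, t, n_anchors=300, max_dist=0.02):
    """Randomly sample anchors in the source cloud and find each one's positive
    in the target cloud via the ground-truth transformation (R, t)."""
    rng = np.random.default_rng()
    anchor_idx = rng.choice(len(src_pts), size=n_anchors, replace=False)

    # Bring the sampled anchors into the target frame.
    aligned = src_pts[anchor_idx] @ R.T + t

    # The closest target point to each aligned anchor is the positive candidate.
    tree = cKDTree(tgt_pts)
    dist, positive_idx = tree.query(aligned)

    # Keep only pairs that actually correspond (i.e. lie in the overlap region).
    ok = dist < max_dist
    return anchor_idx[ok], positive_idx[ok]
```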

Cheers

Zan

Thank you for your reply Zan!

Cheers,
Marco