lvsn/deeptracking

question about training

mafda opened this issue · 2 comments

mafda commented

Hi,

I have some questions about network training:

  1. The repository contains five tracker variants (rgb_tracker_squeeze, rgbd_quaternion_tracker, rgbd_tracker, rgbd_tracker_no_maxpool, rgbd_tracker_squeeze) with some differences between them. I would like to know which of these versions you used to train the network.

  2. In the paper, Deep 6-DOF Tracking, the design of the network is presented in Figure 3; however, it is not evident how the quaternion is included in the model. I would like to know how the quaternion is used as an input during training.

  3. Finally, Section 5.2 of the paper mentions a set of 180 captures used to fine-tune the network before real-life testing. What are those images, and how can I get them?

I would like to understand these points in order to reproduce the results presented in your paper.

Thank you.

Hey

  1. The network used for the paper is rgbd_tracker.
  2. We don't use quaternions in the paper (rgbd_quaternion is indeed misleading; it was simply an experiment that ended up in the repo).
  3. This version of the network needs real images in order to bridge the domain gap between reality and renders. You need to run python generate_real_data.py config_file.json with the right config file and the raw training data downloaded here.
    This script removes the backgrounds and generates a random previous frame so you can train the network in a similar way to the synthetic data. (Best results are obtained if you merge synthetic and real data during training; see the sketch after this list.)
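
A minimal sketch of what that synthetic/real merging could look like during training, assuming a simple per-sample mixing strategy. This is not the authors' code: the dataset wrapper, the sample format, and the 50/50 default ratio are illustrative assumptions.

```python
import random

class MergedDataset:
    """Draws each training sample from either the synthetic or the real set.

    `synthetic` and `real` are assumed to be lists of training samples
    (e.g. frame pairs with their pose delta); `real_ratio` is the
    probability of drawing from the real set (hypothetical default).
    """
    def __init__(self, synthetic, real, real_ratio=0.5):
        self.synthetic = synthetic
        self.real = real              # e.g. output of generate_real_data.py
        self.real_ratio = real_ratio

    def sample(self):
        source = self.real if random.random() < self.real_ratio else self.synthetic
        return random.choice(source)

def minibatch(dataset, size=32):
    """Assemble one minibatch mixing synthetic and real samples."""
    return [dataset.sample() for _ in range(size)]
```

Sampling per example (rather than concatenating the two sets) keeps the synthetic/real ratio fixed regardless of how many samples each set contains.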

Just a note, we have an updated version that makes the whole process simpler. The network works much better and does not require any real data (which makes the training easier). We also have a more complete test dataset for 6-DOF tracking. You can find everything on the project page.

If you have more questions, just ask!
Best,
Mathieu

mafda commented

Nice, thank you!