Question about the pretrained weights
Tiam2Y opened this issue · 4 comments
Hello! Thanks for the great work! @AlessioTonioni
But I have some questions about the pretrained weights.
The pretrained weights you provide are exactly the same as the weights in the Real-time-self-adaptive-deep-stereo repository.
So how can I get the weights pretrained on Carla or Synthia (with meta-learning)?
Hi, unfortunately I don't have these weights anymore.
The code provided covers all the training and pretraining phases, so it should be possible to retrain the network on your own if needed.
Well, thanks for your answer and code! @AlessioTonioni
In order to train the weights of L2A+Wad, I still have a few questions:

1. Is the Synthia dataset you used the one from this link? http://synthia-dataset.net/downloads/
2. This dataset provides the ground truth of depth. To obtain the ground truth of disparity, should the decoded depth values be converted using the calibration information below, similar to how the depth values of KITTI Raw data are converted?

   > calib_kitti: calibration file in KITTI format. P0, P1, P2, P3 correspond to the same intrinsic camera matrix.

3. To put question 2 more concretely: taking the calibration file calib_cam_to_cam.txt in KITTI Raw data (2011_09_30_calib) as an example, and assuming the depth is known, is the disparity calculated as disparity = (focal * baseline) / depth?
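For context on the calibration question, here is a minimal sketch of how the focal length and stereo baseline could be recovered from KITTI-style projection lines. The line layout, sample values, and helper names are assumptions for illustration, not the repository's code:

```python
# Minimal sketch: recover focal length and stereo baseline from
# KITTI-style projection lines (assumed format, illustrative values).
def parse_projection(line):
    # e.g. "P2: fx 0 cx tx 0 fy cy ty 0 0 1 tz" -> 12 floats (3x4, row-major)
    return [float(v) for v in line.split(":", 1)[1].split()]

def focal_and_baseline(p2_line, p3_line):
    p2, p3 = parse_projection(p2_line), parse_projection(p3_line)
    fx = p2[0]  # focal length in pixels
    # The 4th entry of the first row is tx = -fx * x_offset, so the
    # baseline between the two cameras is (tx2 - tx3) / fx.
    baseline = (p2[3] - p3[3]) / fx
    return fx, baseline

fx, b = focal_and_baseline(
    "P2: 700.0 0 620.0 0.0 0 700.0 180.0 0 0 0 1 0",
    "P3: 700.0 0 620.0 -350.0 0 700.0 180.0 0 0 0 1 0",
)
print(fx, b)  # 700.0 0.5
```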
Sorry for the troublesome questions, but I'd appreciate your answers!
Hello,
- We used the Synthia Video Sequences images from the link you provided above.
- `disparity = (focal * baseline) / depth`; you can get the baseline and focal length of the camera system from the camera calibration files in every dataset. Remember to express the baseline and depth in the same unit of measure (e.g., both in meters).
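Applying that formula, a minimal NumPy sketch of converting a decoded depth map to disparity (the focal and baseline values below are placeholders, not Synthia's actual calibration):

```python
import numpy as np

# Placeholder calibration; read the real values from the dataset's files.
focal = 700.0    # focal length in pixels
baseline = 0.5   # baseline in meters (same unit as depth)

depth = np.array([[10.0, 20.0],
                  [ 0.0,  5.0]])  # depth map in meters; 0 marks invalid pixels

# disparity = (focal * baseline) / depth, with invalid pixels left at 0
disparity = np.zeros_like(depth)
valid = depth > 0
disparity[valid] = focal * baseline / depth[valid]
print(disparity[0, 0])  # 35.0
```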
All right, I got it. Thanks a lot!