This is the source code for the paper *Towards Continual, Online, Unsupervised Depth*. This repository covers stereo-based depth estimation.
The manuscript is available here.
The SfM-based depth estimation code is also available here.
- PyTorch
- Torchvision
- NumPy
- Matplotlib
- OpenCV
- Pandas
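
A minimal way to install these, assuming pip and the standard PyPI package names (this README does not pin versions):

```
pip install torch torchvision numpy matplotlib opencv-python pandas
```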
Download the raw KITTI dataset, the Virtual KITTI RGB images, the KITTI test set, and the Virtual KITTI depth maps. Extract the data to appropriate locations; storing it on an SSD is encouraged but not required.
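
The exact layout is up to you, since the paths are set in the options files described below; one plausible arrangement (directory names here are illustrative, not required):

```
data/
├── kitti_raw/          # raw KITTI sequences
├── kitti_test/         # KITTI test split
├── vkitti_rgb/         # Virtual KITTI RGB frames
└── vkitti_depth/       # Virtual KITTI depth maps
```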
Run

```
python script_evaluate.py
```

to display the results in the console. Run

```
python script_test_directory.py
```

to re-run the evaluations from scratch.
Set the paths in the options/online_train_options.py file. Then run

```
python script_online_train.py
```

The online-trained models (trained for a single epoch only) will be saved in the trained_models directory. Intermediate qualitative results will be saved in the qual_dmaps directory.
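
Online training here amounts to a single pass over the incoming stream, with one update per arriving sample. Below is a minimal conceptual sketch, not the repository's actual training loop; `model`, `loss_fn`, `stream`, and the learning rate are placeholders:

```python
import torch

def online_train(model, loss_fn, stream, lr=1e-5):
    # One optimizer step per incoming sample: a single pass, no revisiting.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for batch in stream:                # frames arrive sequentially
        loss = loss_fn(model, batch)    # unsupervised (e.g. photometric) loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```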
Set the paths in the options/pretrain_options.py file. Then run

```
python pretrain.py
```

The pre-trained models will be saved in the trained_models/pretrained_models/ directory.
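
To sanity-check a saved checkpoint, here is a minimal sketch, assuming the checkpoints are standard PyTorch state dicts (the file name below is hypothetical; list the directory to find the actual one):

```python
import torch

# Hypothetical file name -- check trained_models/pretrained_models/ for
# the checkpoints actually produced by pretrain.py.
state = torch.load('trained_models/pretrained_models/model.pth',
                   map_location='cpu')
print(sorted(state.keys())[:5])   # peek at the first few parameter names
# model.load_state_dict(state)   # once `model` matches the architecture
```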
Check this video for qualitative results.
The Absolute Relative (Abs Rel) error is reported in the following table; lower is better. "Current Dataset" refers to the dataset the model is currently being trained on, and "Other Dataset" to the remaining one.
| Training Dataset | Approach | Current Dataset | Other Dataset |
|---|---|---|---|
| KITTI | Fine-tuning | 0.1920 | 0.1980 |
| KITTI | Proposed | 0.1825 | 0.1660 |
| VKITTI | Fine-tuning | 0.1991 | 0.2090 |
| VKITTI | Proposed | 0.1653 | 0.1770 |
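
For reference, Absolute Relative error is the mean of |pred - gt| / gt over valid ground-truth pixels. A minimal NumPy sketch; the masking convention is an assumption, and the repository's evaluation script is authoritative:

```python
import numpy as np

def abs_rel(pred, gt):
    """Absolute Relative error: mean(|pred - gt| / gt) over valid pixels."""
    valid = gt > 0    # assume non-positive depth marks missing ground truth
    return float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))
```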
See the following figure for comparison.