Required packages:
python = 3.7.11
pytorch = 1.12.1
opencv = 3.4.2
tqdm
kornia
kornia_moons
tensorboardX = 2.2
scikit-learn
einops
yacs
Alternatively, create a conda environment from the requirements file:
$ conda create --name <env> --file requirements.txt
Saved features can be downloaded from diff_ransac_data, including the scene St. Peter's Square for training and 12 other scenes for testing. The download is 878M in total and contains two folders: 878M of data and a 1M evaluation list of NumPy files. Specify the data path in all of the scripts with the parameter '-pth <>'. RootSIFT feature preparation follows Ransac-tutorial-data and NG-RANSAC.
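RootSIFT itself is just an L1-normalization of each SIFT descriptor followed by an element-wise square root; the actual preparation pipeline lives in the repos referenced above, but a minimal NumPy sketch of the conversion is:

```python
import numpy as np

def rootsift(desc, eps=1e-7):
    """Convert SIFT descriptors to RootSIFT:
    L1-normalize each descriptor, then take the element-wise square root."""
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(desc)

# Example: 4 random non-negative 128-D SIFT-like descriptors.
d = np.random.default_rng(0).uniform(0, 255, size=(4, 128))
r = rootsift(d)
# RootSIFT descriptors come out (approximately) L2-normalized,
# so Euclidean matching on them approximates the Hellinger kernel.
print(np.linalg.norm(r, axis=1))
```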
The minimal solvers, model scoring functions, the RANSAC loop, local optimization, etc. are re-implemented in PyTorch, referring to MAGSAC. Thanks also to the public repos of CLNet and NG-RANSAC, and to the PyTorch and Kornia libraries.
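One reason the re-implementation can be differentiable: a hard inlier test r < t has zero gradient almost everywhere, so model scoring is typically relaxed with a sigmoid. A minimal NumPy sketch of such a soft inlier count (an illustration only; the repo's actual PyTorch scoring function may differ):

```python
import numpy as np

def soft_inlier_score(residuals, t=0.75, beta=0.1):
    """Differentiable surrogate of the inlier count: instead of a hard
    r < t test, each residual contributes sigmoid((t - r) / beta), so
    the score varies smoothly and gradients can flow back through it."""
    return 1.0 / (1.0 + np.exp(-(t - residuals) / beta))

r = np.array([0.1, 0.5, 0.74, 0.76, 2.0])
w = soft_inlier_score(r)
# Clear inliers contribute ~1, clear outliers ~0, borderline ~0.5.
print(w.round(3), "soft inlier count:", w.sum().round(3))
```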
$ git clone http://github.com/weitong8591/differentiable_ransac.git
Test on GPU with RANSAC + local optimization; point the data path to your own directory.
$ python test.py -nf 2000 -m pretrained_models/saved_model_5PC_l_epi/model.net -bs 32 -fmat 1 -sam 3 -ds sacre_coeur -t 2 -pth <data_path>
Test on a single scene with -ds <scene_name>; use -bm 1 instead to test on all 12 scenes.
example_model is one of the saved models provided in this repo for a quick try; feel free to try more models downloaded from diff_ransac_models.
Train/test with 8PC using -fmat 1 -sam 3, with 7PC using -fmat 1 -sam 2, or with 5PC using -fmat 0 -sam 2.
Note that this easy-start Python test is provided as a quick sanity check; to reproduce the reported test results, continue with the steps below.
Testing with MAGSAC++ and with numerical optimization of the essential matrix requires the C++ Python bindings.
Thanks to the public code of MAGSAC, we added the proposed Gumbel Softmax Sampler to the C++ implementation. Clone the forked MAGSAC repo (which includes the new sampler), build the project with CMake, then compile and install the Python bindings as follows.
$ git clone https://github.com/weitong8591/magsac.git --recursive
$ cd magsac
$ mkdir build
$ cd build
$ cmake ..
$ make
$ cd ..
$ python setup.py install
In C++ MAGSAC, sampler=3 always selects the Gumbel Softmax Sampler we propose.
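The mechanism behind the sampler is the Gumbel trick: perturb the predicted logits with Gumbel noise and take the top-k indices, which draws a minimal set without replacement in proportion to the predicted importance; the softmax relaxation of this perturbed top-k is what makes it differentiable. A NumPy sketch of the hard (forward) sampling step only, not the repo's actual PyTorch/C++ implementation:

```python
import numpy as np

def gumbel_topk_sample(logits, k, rng):
    """Sample k distinct correspondence indices (a minimal set) via the
    Gumbel-top-k trick: add i.i.d. Gumbel noise to the logits and keep
    the k largest perturbed values."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                # standard Gumbel(0, 1) noise
    return np.argsort(logits + g)[::-1][:k]

rng = np.random.default_rng(42)
logits = np.array([2.0, 0.0, -1.0, 1.5, 0.5, -2.0])  # predicted importance
idx = gumbel_topk_sample(logits, k=3, rng=rng)
# High-logit correspondences are sampled more often, but every minimal
# set has nonzero probability, preserving RANSAC's exploration.
print(sorted(idx.tolist()))
```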
# test with 5PC for E:
$ python test_magsac.py -nf 2000 -m pretrained_models/saved_model_5PC_l_epi/model.net -bs 32 -fmat 0 -sam 3 -ds sacre_coeur -t 2 -pth <>
Add -fmat 1 to activate fundamental matrix estimation.
-pth: source path of all datasets
-sam: sampler choice; 0 - Uniform sampler, 1/2 - Gumbel Sampler for 5PC/7PC, 3 - Gumbel Sampler for 8PC, default=0
-w0, -w1, -w2: coefficients of the loss combination (L pose, L classification, L essential)
-fmat: 0 - E, 1 - F, default=0
-lr: learning rate, default=1e-4
-t: inlier threshold, default=0.75
-e: number of epochs, default=10
-bs: training batch size, default=32
-rbs: batch size of RANSAC iterations, default=64
-tr: train (1) or test (0) the model, default=0
-nf: number of features, default=2000
-m: pretrained or trained model
-snn: threshold of the SNN ratio filter
-ds: dataset name (single dataset)
-bm: batch mode, using all 12 scenes defined in utils.py
-p: probability type; 0 - normalized weights, 1 - unnormalized weights, 2 - logits, default=2
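The -t threshold is compared against a point-to-model residual. For fundamental matrices the standard choice is the Sampson (first-order geometric) distance; the sketch below assumes that residual, though the exact one used by the scripts may differ:

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """Sampson error of correspondences (x1 <-> x2) under a fundamental
    matrix F. x1, x2 are Nx2 arrays of pixel coordinates."""
    x1h = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous coords
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    Fx1 = x1h @ F.T            # epipolar lines in image 2, shape (N, 3)
    Ftx2 = x2h @ F             # epipolar lines in image 1, shape (N, 3)
    num = np.sum(x2h * Fx1, axis=1) ** 2           # (x2^T F x1)^2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

# A perfect correspondence under a pure x-translation F lies exactly on
# its epipolar line, so its Sampson error is zero.
F = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
x1 = np.array([[10.0, 5.0]])
x2 = np.array([[12.0, 5.0]])   # same row => on the epipolar line
print(sampson_distance(F, x1, x2))
```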
Train the fully differentiable RANSAC end-to-end with the provided initialization weights.
Using 5PC for E model training:
$ python train.py -nf 2000 -m pretrained_models/weights_init_net_3_sampler_0_epoch_1000_E_rs_r0.80_t0.00_w1_1.00_.net -bs 32 -fmat 0 -sam 2 -tr 1 -w2 1 -t 0.75 -pth <>
Using 8PC for F model training:
$ python train.py -nf 2000 -m pretrained_models/weights_init_net_3_sampler_0_epoch_1000_E_rs_r0.80_t0.00_w1_1.00_.net -bs 32 -fmat 1 -sam 3 -tr 1 -w2 1 -t 0.75 -pth <>
More details are covered in our paper; feel free to cite it if you find it useful:
@article{wei2023generalized,
  title={Generalized differentiable RANSAC},
  author={Wei, Tong and Patel, Yash and Shekhovtsov, Alexander and Matas, J and Barath, D},
  journal={arXiv preprint arXiv:2212.13185},
  year={2023}
}
Contact me at weitongln@gmail.com