
Code for our re-implementation of "LoFTR: Detector-Free Local Feature Matching with Transformers"


LoFTR-in-Tensorflow

In an attempt to make the LoFTR [1] algorithm more accessible, we have reimplemented it in TensorFlow.

Comparison

Below is a comparison of the original LoFTR feature matcher (top) and our reimplementation (bottom).

It is clear that our implementation needs more training. Due to time and computing-resource constraints, a full training run could not be executed, hence the difference in results.

Install

git clone link

cd LoFTR-in-Tensorflow

conda env create -f environment.yaml

Usage

conda activate loftr_tf

Demo Notebook

Run running.ipynb to see a visualisation of the LoFTR feature matcher running with our pretrained weights on some demo images.
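For intuition about what the notebook visualises: LoFTR's coarse stage scores all pairs of coarse features with a dual-softmax over the similarity matrix, then keeps mutual nearest neighbours above a confidence threshold. Below is a minimal NumPy sketch of that matching rule (function name, temperature, and threshold values are illustrative, not the repository's API):

```python
import numpy as np

def dual_softmax_match(feat_a, feat_b, temperature=0.1, threshold=0.2):
    """Coarse matching in the style of LoFTR: dual-softmax over the
    similarity matrix, then mutual-nearest-neighbour selection.

    feat_a: (N, d) coarse features from image A
    feat_b: (M, d) coarse features from image B
    Returns indices into A and B plus the match confidences.
    """
    sim = feat_a @ feat_b.T / temperature           # (N, M) similarity scores
    exp = np.exp(sim - sim.max())                   # scalar shift for stability
    p_rows = exp / exp.sum(axis=1, keepdims=True)   # softmax over image B
    p_cols = exp / exp.sum(axis=0, keepdims=True)   # softmax over image A
    conf = p_rows * p_cols                          # dual-softmax confidence

    i = np.arange(conf.shape[0])
    j = conf.argmax(axis=1)                         # best B for each A
    mutual = conf.argmax(axis=0)[j] == i            # A is also best for that B
    keep = mutual & (conf[i, j] > threshold)
    return i[keep], j[keep], conf[i[keep], j[keep]]
```

The fine refinement stage that follows in LoFTR (correlation around each coarse match) is omitted here for brevity.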

Training

Training was performed on three datasets:

MegaDepth [2]

ScanNet [3]

NYU Depth V2 [4]

See the Training readme for details.

Next Steps

Below are a few next steps to further improve this reimplementation.

  1. Distribute the dataset using TensorFlow's dataset builder so that multiple CPU cores can push data to multiple GPUs

  2. Optimise TensorFlow's multi-worker GPU strategy to work smoothly with our model
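The two steps above could be sketched as follows: a tf.data pipeline that decodes samples on multiple CPU threads and prefetches, wrapped in a distribution strategy that splits each global batch across the available GPUs. This is a minimal sketch with dummy tensors standing in for MegaDepth/ScanNet image pairs; it is not the repository's actual input pipeline:

```python
import tensorflow as tf

def make_dataset(batch_size):
    """Toy tf.data pipeline: parallel map on CPU, batch, prefetch."""
    ds = tf.data.Dataset.range(64)                      # placeholder sample ids
    ds = ds.map(lambda i: tf.fill([4], tf.cast(i, tf.float32)),
                num_parallel_calls=tf.data.AUTOTUNE)    # parallel CPU "decode"
    ds = ds.batch(batch_size)
    return ds.prefetch(tf.data.AUTOTUNE)                # overlap with training

# MirroredStrategy mirrors the model on all local GPUs (falls back to CPU).
strategy = tf.distribute.MirroredStrategy()
global_batch = 8
dist_ds = strategy.experimental_distribute_dataset(make_dataset(global_batch))

for batch in dist_ds:
    # Each replica receives its slice of the global batch; a real training
    # step would run the forward/backward pass here instead of a mean.
    per_replica = strategy.run(lambda x: tf.reduce_mean(x), args=(batch,))
    break
```

For training across machines rather than GPUs on one host, `tf.distribute.MultiWorkerMirroredStrategy` follows the same pattern, with the dataset additionally sharded per worker.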

References

[1] J. Sun, Z. Shen, Y. Wang, H. Bao, and X. Zhou (2021). LoFTR: Detector-Free Local Feature Matching with Transformers

[2] Z. Li and N. Snavely (2018). MegaDepth: Learning Single-View Depth Prediction from Internet Photos

[3] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner (2017). ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes

[4] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus (2012). Indoor Segmentation and Support Inference from RGBD Images