## Requirements & Quick Start

- Python 3.6
- GPU memory >= 8G
- NumPy > 1.12.1
- PyTorch 0.3+
- scipy == 1.2.1
- [Optional] apex (for float16)
- Install PyTorch from http://pytorch.org/
- Install torchvision from source:
```bash
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
```
- [Optional] Install apex from source (you may skip this step if you do not need float16 training):
```bash
git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext
```
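Before training, it can save time to verify that the interpreter meets the version floors listed above. A minimal sketch — the helper names below are my own, not part of this repository:

```python
import sys


def version_tuple(v):
    # "1.12.1" -> (1, 12, 1); stops at any non-numeric part such as "post4",
    # so suffixed versions like "0.3.0.post4" compare on their numeric prefix.
    parts = []
    for piece in v.split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)


def check_python(minimum=(3, 6)):
    # True if the running interpreter satisfies the minimum Python version
    # from the requirements list above.
    return sys.version_info[:2] >= minimum


print(check_python())
print(version_tuple("1.12.1"))  # -> (1, 12, 1)
```

The same comparison works for the libraries, e.g. `version_tuple(numpy.__version__) > (1, 12, 1)` once NumPy is installed.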
Download University-1652 upon request and put it under the ./data/ folder. You may use the request template.
```python
import torch
from model import USAM

# USAM expects a 4-D feature map: (batch, channels, height, width)
x = torch.randn(128, 512, 16, 16)
usam = USAM()
output = usam(x)
print(output.shape)
```
```bash
python train.py --name RK-Net --share --extra --stride 1 --fp16
python test.py --name RK-Net
```
Default setting: Drone -> Satellite
You can download the trained model from Google Drive. After downloading, put the model folders under ./model/
```bibtex
@article{lin2022,
  title={Joint Representation Learning and Keypoint Detection for Cross-view Geo-localization},
  author={Lin, Jinliang and Zheng, Zhedong and Zhong, Zhun and Luo, Zhiming and Li, Shaozi and Yang, Yi and Sebe, Nicu},
  journal={IEEE Transactions on Image Processing (TIP)},
  doi={10.1109/TIP.2022.3175601},
  year={2022}
}
```
```bibtex
@article{zheng2020university,
  title={University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization},
  author={Zheng, Zhedong and Wei, Yunchao and Yang, Yi},
  journal={ACM Multimedia},
  year={2020}
}
```