This list is created and maintained by Ali Koteich and Hasan Moughnieh from the GEOspatial Artificial Intelligence (GEOAI) research group at the National Center for Remote Sensing - CNRS, Lebanon.
We encourage you to contribute to this project according to the following guidelines.
---
If you find this repository useful, please consider giving it a ⭐
## Table Of Contents
- Image Captioning
- Text-Image Retrieval
- Visual Grounding
- Visual Question Answering
- VL4EO Datasets
- Related Repos & Libraries
## Image Captioning

Title | Paper | Code | Year | Venue |
---|---|---|---|---|
RSGPT: A Remote Sensing Vision Language Model and Benchmark | Paper | code | 2023 | |
Multi-Source Interactive Stair Attention for Remote Sensing Image Captioning | paper | | 2023 | MDPI Remote Sensing |
VLCA: vision-language aligning model with cross-modal attention for bilingual remote sensing image captioning | paper | | 2023 | IEEE Journal of Systems Engineering and Electronics |
Towards Unsupervised Remote Sensing Image Captioning and Retrieval with Pre-Trained Language Models | paper | | 2023 | Proceedings of the Japanese Association for Natural Language Processing |
Captioning Remote Sensing Images Using Transformer Architecture | paper | | 2023 | International Conference on Artificial Intelligence in Information and Communication |
Progressive Scale-Aware Network for Remote Sensing Image Change Captioning | paper | | 2023 | |
Change Captioning: A New Paradigm for Multitemporal Remote Sensing Image Analysis | paper | | 2022 | IEEE TGRS |
Generating the captions for remote sensing images: A spatial-channel attention based memory-guided transformer approach | paper | code | 2022 | Engineering Applications of Artificial Intelligence |
Global Visual Feature and Linguistic State Guided Attention for Remote Sensing Image Captioning | paper | | 2022 | IEEE TGRS |
Recurrent Attention and Semantic Gate for Remote Sensing Image Captioning | paper | | 2022 | IEEE TGRS |
NWPU-Captions Dataset and MLCA-Net for Remote Sensing Image Captioning | paper | code | 2022 | IEEE TGRS |
Remote Sensing Image Change Captioning With Dual-Branch Transformers: A New Method and a Large Scale Dataset | paper | | 2022 | IEEE TGRS |
A Mask-Guided Transformer Network with Topic Token for Remote Sensing Image Captioning | paper | | 2022 | MDPI Remote Sensing |
Multiscale Multiinteraction Network for Remote Sensing Image Captioning | paper | | 2022 | IEEE JSTARS |
Using Neural Encoder-Decoder Models with Continuous Outputs for Remote Sensing Image Captioning | paper | | 2022 | IEEE Access |
A Joint-Training Two-Stage Method for Remote Sensing Image Captioning | paper | | 2022 | IEEE TGRS |
Meta captioning: A meta learning based remote sensing image captioning framework | paper | code | 2022 | ISPRS Journal of Photogrammetry and Remote Sensing |
Exploring Transformer and Multilabel Classification for Remote Sensing Image Captioning | paper | code | 2022 | IEEE GRSL |
High-Resolution Remote Sensing Image Captioning Based on Structured Attention | paper | | 2022 | IEEE TGRS |
Transforming remote sensing images to textual descriptions | paper | | 2022 | Int J Appl Earth Obs Geoinf |
A Novel SVM-Based Decoder for Remote Sensing Image Captioning | paper | | 2021 | IEEE TGRS |
SD-RSIC: Summarization Driven Deep Remote Sensing Image Captioning | paper | code | 2021 | IEEE TGRS |
Truncation Cross Entropy Loss for Remote Sensing Image Captioning | paper | | 2021 | IEEE TGRS |
Word-Sentence Framework for Remote Sensing Image Captioning | paper | | 2021 | IEEE TGRS |
Toward Remote Sensing Image Retrieval Under a Deep Image Captioning Perspective | paper | | 2020 | IEEE JSTARS |
Remote sensing image captioning via Variational Autoencoder and Reinforcement Learning | paper | | 2020 | Elsevier Knowledge-Based Systems |
A multi-level attention model for remote sensing image captions | paper | | 2020 | MDPI Remote Sensing |
LAM: Remote sensing image captioning with Label-Attention Mechanism | paper | | 2019 | MDPI Remote Sensing |
Exploring Models and Data for Remote Sensing Image Caption Generation | paper | | 2017 | IEEE TGRS |
## Text-Image Retrieval

Title | Paper | Code | Year | Venue |
---|---|---|---|---|
RemoteCLIP: A Vision Language Foundation Model for Remote Sensing | paper | code | 2023 | |
An End-to-End Framework Based on Vision-Language Fusion for Remote Sensing Cross-Modal Text-Image Retrieval | paper | | 2023 | MDPI Mathematics |
Contrasting Dual Transformer Architectures for Multi-Modal Remote Sensing Image Retrieval | paper | | 2023 | MDPI Applied Sciences |
Reducing Semantic Confusion: Scene-aware Aggregation Network for Remote Sensing Cross-modal Retrieval | paper | code | 2023 | ICMR'23 |
An Unsupervised Cross-Modal Hashing Method Robust to Noisy Training Image-Text Correspondences in Remote Sensing | Paper | code | 2022 | IEEE ICIP |
Unsupervised Contrastive Hashing for Cross-Modal Retrieval in Remote Sensing | Paper | code | 2022 | IEEE ICASSP |
Multisource Data Reconstruction-Based Deep Unsupervised Hashing for Unisource Remote Sensing Image Retrieval | Paper | code | 2022 | IEEE TGRS |
MCRN: A Multi-source Cross-modal Retrieval Network for remote sensing | paper | code | 2022 | Int J Appl Earth Obs Geoinf |
Knowledge-Aware Cross-Modal Text-Image Retrieval for Remote Sensing Images | paper | | 2022 | |
Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval | paper | | 2022 | IEEE TGRS |
Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information | paper | code | 2022 | IEEE TGRS |
Multilanguage Transformer for Improved Text to Remote Sensing Image Retrieval | paper | | 2022 | IEEE JSTARS |
CLIP-RS: A Cross-modal Remote Sensing Image Retrieval Based on CLIP, a Northern Virginia Case Study | paper | | 2022 | Virginia Polytechnic Institute and State University |
A Lightweight Multi-Scale Crossmodal Text-Image Retrieval Method in Remote Sensing | paper | code | 2022 | IEEE TGRS |
Toward Remote Sensing Image Retrieval under a Deep Image Captioning Perspective | paper | | 2020 | IEEE JSTARS |
TextRS: Deep bidirectional triplet network for matching text to remote sensing images | paper | | 2020 | MDPI Remote Sensing |
Deep unsupervised embedding for remote sensing image retrieval using textual cues | paper | | 2020 | MDPI Applied Sciences |
## Visual Grounding

Title | Paper | Code | Year | Venue |
---|---|---|---|---|
LaLGA: Multi-Scale Language-Aware Visual Grounding on Remote Sensing Data | paper | code | 2023 | |
Text2Seg: Remote Sensing Image Semantic Segmentation via Text-Guided Visual Foundation Models | paper | code | 2023 | |
RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data | paper | code | 2022 | IEEE TGRS |
## Visual Question Answering

Title | Paper | Code | Year | Venue |
---|---|---|---|---|
LIT-4-RSVQA: Lightweight Transformer-based Visual Question Answering in Remote Sensing | paper | code | 2023 | IEEE IGARSS |
A Spatial Hierarchical Reasoning Network for Remote Sensing Visual Question Answering | paper | | 2023 | IEEE TGRS |
Multi-Modal Fusion Transformer for Visual Question Answering in Remote Sensing | paper | code | 2022 | SPIE Image and Signal Processing for Remote Sensing |
Change Detection Meets Visual Question Answering | paper | code | 2022 | IEEE TGRS |
Prompt-RSVQA: Prompting visual context to a language model for Remote Sensing Visual Question Answering | paper | | 2022 | CVPRW |
From Easy to Hard: Learning Language-guided Curriculum for Visual Question Answering on Remote Sensing Data | paper | code | 2022 | IEEE TGRS |
Language Transformers for Remote Sensing Visual Question Answering | paper | | 2022 | IEEE IGARSS |
Bi-Modal Transformer-Based Approach for Visual Question Answering in Remote Sensing Imagery | paper | | 2022 | IEEE TGRS |
Mutual Attention Inception Network for Remote Sensing Visual Question Answering | paper | code | 2022 | IEEE TGRS |
RSVQA meets BigEarthNet: a new, large-scale, visual question answering dataset for remote sensing | paper | code | 2021 | IEEE IGARSS |
How to find a good image-text embedding for remote sensing visual question answering? | paper | | 2021 | CEUR Workshop Proceedings |
RSVQA: Visual Question Answering for Remote Sensing Data | paper | code | 2020 | IEEE TGRS |
## VL4EO Datasets

Name | Link | Paper Link | Description |
---|---|---|---|
LAION-EO | link | Paper Link | Size : 24,933 samples, with 40.1% English captions and other common languages from LAION-5B Mean image size : 633.0 pixels high (up to 9,999) and 843.7 pixels wide (up to 19,687) Platforms : Based on LAION-5B |
RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model | Link | Paper Link | Size: 5 million remote sensing images with English descriptions Resolution : 256 x 256 Platforms: 11 publicly available image-text paired dataset |
Remote Sensing Visual Question Answering Low Resolution Dataset (RSVQA LR) | Link | Paper Link | Size: 772 images & 77,232 questions and answers Resolution : 256 x 256 Platforms: Sentinel-2 and Open Street Map Use: Remote Sensing Visual Question Answering |
Remote Sensing Visual Question Answering High Resolution Dataset (RSVQA HR) | Link | Paper Link | Size: 10,659 images & 955,664 questions and answers Resolution : 512 x 512 Platforms: USGS and Open Street Map Use: Remote Sensing Visual Question Answering |
Remote Sensing Visual Question Answering BigEarthNet Dataset (RSVQA x BEN) | Link | Paper Link | Size: 140,758,150 image/question/answer triplets Resolution : 120 x 120 (Sentinel-2) Platforms: Sentinel-2, BigEarthNet and Open Street Map Use: Remote Sensing Visual Question Answering |
FloodNet Visual Question Answering Dataset | Link | Paper Link | Size: 11,000 question-image pairs Resolution : 224 x 224 Platforms: UAV-DJI Mavic Pro quadcopters, after Hurricane Harvey Use: Remote Sensing Visual Question Answering |
Change Detection-Based Visual Question Answering Dataset | Link | Paper Link | Size: 2,968 pairs of multitemporal images and more than 122,000 question-answer pairs Classes: 6 Resolution : 512 x 512 pixels Platforms: Based on the semantic change detection dataset (SECOND) Use: Remote Sensing Visual Question Answering |
Remote Sensing Image Captioning Dataset (RSICap) | link | Paper Link | Size: 2,585 human-annotated captions with rich, high-quality information Each image has a detailed description covering the scene (e.g., residential area, airport, or farmland) as well as object information (e.g., color, shape, quantity, absolute position) Use: Remote Sensing Image Captioning |
Remote Sensing Image Captioning Evaluation Dataset (RSIEval) | link | Paper Link | Size: 100 human-annotated captions and 936 visual question-answer pairs with rich information and open-ended questions and answers Use: Remote Sensing Image Captioning and Visual Question Answering |
Revised Remote Sensing Image Captioning Dataset (RSICD) | Link | Paper Link | Size: 10,921 images with five captions per image Number of Classes: 30 Resolution : 224 x 224 Platforms: Google Earth, Baidu Map, MapABC and Tianditu Use: Remote Sensing Image Captioning |
Revised University of California Merced dataset (UCM-Captions) | Link | Paper Link | Size: 2,100 images with five captions per image Number of Classes: 21 Resolution : 256 x 256 Platforms: USGS National Map Urban Area Imagery collection Use: Remote Sensing Image Captioning |
Revised Sydney-Captions Dataset | Link | Paper Link | Size: 613 images with five captions per image Number of Classes: 7 Resolution : 500 x 500 Platforms: Google Earth Use: Remote Sensing Image Captioning |
LEVIR-CC dataset | Link | Paper Link | Size: 10,077 pairs of RS images and 50,385 corresponding sentences Number of Classes: 10 Resolution : 1024 × 1024 pixels Platforms: Beihang University Use: Remote Sensing Image Captioning |
NWPU-Captions dataset | images_Link, info_Link | Paper Link | Size: 31,500 images with 157,500 sentences Number of Classes: 45 Resolution : 256 x 256 pixels Platforms: based on NWPU-RESISC45 dataset Use: Remote Sensing Image Captioning |
Remote Sensing Image-Text Match dataset (RSITMD) | Link | Paper Link | Size: 23,715 captions for 4,743 images Number of Classes: 32 Resolution : 500 x 500 Platforms: RSICD and Google Earth Use: Remote Sensing Image-Text Retrieval |
PatternNet | Link | Paper Link | Size: 30,400 images Number of Classes: 38 Resolution : 256 x 256 Platforms: Google Earth imagery and via the Google Map API Use: Remote Sensing Image Retrieval |
Dense Labeling Remote Sensing Dataset (DLRSD) | Link | Paper Link | Size: 2,100 images Number of Classes: 21 Resolution : 256 x 256 Platforms: Extension of the UC Merced Use: Remote Sensing Image Retrieval (RSIR), Classification and Semantic Segmentation |
Dior-Remote Sensing Visual Grounding Dataset (RSVGD) | Link | Paper Link | Size: 38,320 RS image-query pairs and 17,402 RS images Number of Classes: 20 Resolution : 800 x 800 Platforms: DIOR dataset Use: Remote Sensing Visual Grounding |
Visual Grounding in Remote Sensing Images | link | Paper Link | Size : 4,239 images including 5,994 object instances and 7,933 referring expressions Resolution : 1024 x 1024 pixels Platforms: multiple sensors and platforms (e.g. Google Earth) Use: Remote Sensing Visual Grounding |
Remote Sensing Image Scene Classification (NWPU-RESISC45) | Link | Paper Link | Size: 31,500 images Number of Classes: 45 Resolution : 256 x 256 pixels Platforms: Google Earth Use: Remote Sensing Image Scene Classification |
## Related Repos & Libraries

- ConfigILM Library
- awesome-RSVLM
- awesome-remote-sensing-vision-language-models
- awesome-remote-image-captioning
---
Stay tuned for continuous updates and improvements! 🚀