Techniques for deep learning with satellite & aerial imagery

Apache License 2.0

Introduction

Deep learning has revolutionized the analysis and interpretation of satellite and aerial imagery, addressing unique challenges such as vast image sizes and a wide array of object classes. This repository provides an exhaustive overview of deep learning techniques specifically tailored for satellite and aerial image processing. It covers a range of architectures, models, and algorithms suited for key tasks like classification, segmentation, and object detection.

How to use this repository: use Command+F (Mac) or Ctrl+F (Windows) to search this page, e.g. for 'SAM'

This repository is proudly sponsored by Orbuculum. Orbuculum is an innovative and rapidly evolving platform designed with the specific intent to empower GIS and Earth Observation (EO) researchers by offering a unique avenue for monetizing their machine learning models. Standing distinctively apart from conventional marketplaces, Orbuculum pioneers a transformative approach by transmuting these models into smart contracts. This enables automatic remuneration for the creators each time their models are deployed, fostering an efficient and rewarding ecosystem.

Orbuculum's potential extends far beyond the reinvention of the GIS/EO research industry. It is poised to serve as an invaluable conduit for public welfare initiatives, especially those striving to mitigate climate change. By providing access to vital data and insightful analytics, Orbuculum promises to act as a potent resource in the ongoing battle against some of the most urgent global concerns. This integration of cutting-edge technology with socially impactful missions could position Orbuculum as an instrumental platform at the intersection of scientific research and sustainable development.

Techniques

  • Classification
  • Segmentation
  • Instance segmentation
  • Object detection
  • Object counting
  • Regression
  • Cloud detection & removal
  • Change detection
  • Time series
  • Crop classification
  • Crop yield & vegetation forecasting
  • Wealth and economic activity
  • Disaster response
  • Super-resolution
  • Pansharpening
  • Image-to-image translation
  • Data fusion
  • Generative networks
  • Autoencoders, dimensionality reduction, image embeddings & similarity search
  • Anomaly detection
  • Image retrieval
  • Image Captioning
  • Visual Question Answering
  • Mixed data learning
  • Few & zero shot learning
  • Self-supervised, unsupervised & contrastive learning
  • Weakly & semi-supervised learning
  • Active learning
  • Federated Learning
  • Transformers
  • Adversarial ML
  • Image registration
  • Terrain mapping, Disparity Estimation, Lidar, DEMs & NeRF
  • Thermal Infrared
  • SAR
  • NDVI-Vegetation Index
  • General image quality
  • Synthetic data
  • Large vision & language models (LLMs & LVMs)
  • Foundational models

Classification


The UC Merced dataset is a well-known classification dataset.

Classification is a fundamental task in remote sensing data analysis, where the goal is to assign a semantic label to each image, such as 'urban', 'forest', 'agricultural land', etc. The process of assigning labels to an image is known as image-level classification. However, in some cases, a single image might contain multiple different land cover types, such as a forest with a river running through it, or a city with both residential and commercial areas. In these cases, image-level classification becomes more complex and involves assigning multiple labels to a single image. This can be accomplished using a combination of feature extraction and machine learning algorithms to accurately identify the different land cover types. It is important to note that image-level classification should not be confused with pixel-level classification, also known as semantic segmentation. While image-level classification assigns a single label to an entire image, semantic segmentation assigns a label to each individual pixel in an image, resulting in a highly detailed and accurate representation of the land cover types in an image. Read A brief introduction to satellite image classification with neural networks
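
As a concrete starting point, below is a minimal PyTorch sketch that fine-tunes an ImageNet-pretrained ResNet-18 for scene classification. The `data/train/<class>/` folder layout is a hypothetical assumption; for multi-label images, replace the softmax cross-entropy loss with `BCEWithLogitsLoss`:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed (hypothetical) layout: data/train/<class_name>/*.png
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the head with one unit per land-cover class
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # use BCEWithLogitsLoss for multi-label chips

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```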

Segmentation


(left) a satellite image and (right) the semantic classes in the image.

Image segmentation is a crucial step in image analysis and computer vision, with the goal of dividing an image into semantically meaningful segments or regions. The process of image segmentation assigns a class label to each pixel in an image, effectively transforming an image from a 2D grid of pixels into a 2D grid of class labels. One common application of image segmentation is road or building segmentation, where the goal is to identify and separate roads and buildings from other features within an image. To accomplish this task, single class models are often trained to differentiate between roads and background, or buildings and background. These models are designed to recognize specific features, such as color, texture, and shape, that are characteristic of roads or buildings, and use this information to assign class labels to the pixels in an image. Another common application of image segmentation is land use or crop type classification, where the goal is to identify and map different land cover types within an image. In this case, multi-class models are typically used to recognize and differentiate between multiple classes within an image, such as forests, urban areas, and agricultural land. These models are capable of recognizing complex relationships between different land cover types, allowing for a more comprehensive understanding of the image content. Read A brief introduction to satellite image segmentation with neural networks. Note that many articles which refer to 'hyperspectral land classification' are actually describing semantic segmentation. Image source
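
To make this concrete, here is a minimal sketch using the segmentation_models_pytorch library for a single-class (building vs background) task, with dummy tensors standing in for a real dataset:

```python
import torch
import segmentation_models_pytorch as smp  # pip install segmentation-models-pytorch

# Binary building-vs-background segmentation; raise classes for multi-class land cover
model = smp.Unet(
    encoder_name="resnet34",       # ImageNet-pretrained encoder
    encoder_weights="imagenet",
    in_channels=3,                 # RGB; increase for multispectral input
    classes=1,                     # one logit per pixel for a single foreground class
)

images = torch.randn(4, 3, 256, 256)                    # dummy batch of image tiles
masks = torch.randint(0, 2, (4, 1, 256, 256)).float()   # dummy pixel labels

loss_fn = smp.losses.DiceLoss(mode="binary")  # Dice loss is common under class imbalance
logits = model(images)
loss = loss_fn(logits, masks)
loss.backward()
```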

Segmentation - Land use & land cover

Segmentation - Vegetation, deforestation, crops & crop boundaries

Note that deforestation detection may be treated as a segmentation task or a change detection task

Segmentation - Water, coastlines & floods

Segmentation - Fire, smoke & burn areas

Segmentation - Landslides

Segmentation - Glaciers

  • HED-UNet -> a model for simultaneous semantic segmentation and edge detection, examples provided are glacier fronts and building footprints using the Inria Aerial Image Labeling dataset

  • glacier_mapping -> Mapping glaciers in the Hindu Kush Himalaya, Landsat 7 images, Shapefile labels of the glaciers, Unet with dropout

  • glacier-detect-ML -> a simple logistic regression model to identify a glacier in Landsat satellite imagery

  • GlacierSemanticSegmentation

  • Antarctic-fracture-detection -> uses UNet with the MODIS Mosaic of Antarctica to detect surface fractures

Segmentation - Other environmental

Segmentation - Roads & sidewalks

Extracting roads is challenging due to the occlusions caused by other objects and the complex traffic environment

Segmentation - Buildings & rooftops

Segmentation - Solar panels

Segmentation - Other manmade

  • Aarsh2001/ML_Challenge_NRSC -> Electrical Substation detection

  • electrical_substation_detection

  • MCAN-OilSpillDetection -> Oil Spill Detection with A Multiscale Conditional Adversarial Network under Small Data Training

  • plastics -> Detecting and Monitoring Plastic Waste Aggregations in Sentinel-2 Imagery for globalplasticwatch.org

  • mining-detector -> detection of artisanal gold mines in Sentinel-2 satellite imagery for Amazon Mining Watch. Also covers clandestine airstrips

  • EG-UNet -> Deep Feature Enhancement Method for Land Cover With Irregular and Sparse Spatial Distribution Features: A Case Study on Open-Pit Mining

  • MADOS -> Detecting Marine Pollutants and Sea Surface Features with Deep Learning in Sentinel-2 Imagery on the MADOS dataset

  • SADMA -> Residual Attention UNet on MARIDA: Marine Debris Archive is a marine debris-oriented dataset on Sentinel-2 satellite images

Panoptic segmentation

Segmentation - Miscellaneous

Instance segmentation

In instance segmentation, each individual 'instance' of a segmented area is given a unique label. For detection of very small objects this may be a good approach, but it can struggle to separate individual objects that are closely spaced.
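
A minimal sketch of instance segmentation inference with torchvision's COCO-pretrained Mask R-CNN, which returns a separate box, score and soft mask per instance (for aerial objects you would fine-tune on a labelled remote sensing dataset):

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights

# COCO-pretrained Mask R-CNN; fine-tuning is needed for aerial object classes
weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()

image = torch.rand(3, 512, 512)  # placeholder for a normalised image tile in [0, 1]
with torch.no_grad():
    output = model([image])[0]

# Each detected instance has its own box, label, score and soft mask
keep = output["scores"] > 0.5
masks = output["masks"][keep]    # (N, 1, H, W) per-instance masks
print(f"{keep.sum().item()} instances above threshold")
```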

Object detection


Image showing the suitability of rotated bounding boxes in remote sensing.

Object detection in remote sensing involves locating and surrounding objects of interest with bounding boxes. Due to the large size of remote sensing images and the fact that objects may only comprise a few pixels, object detection can be challenging in this context. The imbalance between the area of the objects to be detected and the background, combined with the potential for objects to be easily confused with random features in the background, further complicates the task. Object detection generally performs better on larger objects, but becomes increasingly difficult as the objects become smaller and more densely packed. The accuracy of object detection models can also degrade rapidly as image resolution decreases, which is why it is common to use high resolution imagery, such as 30cm RGB, for object detection in remote sensing. A unique characteristic of aerial images is that objects can be oriented in any direction. To effectively extract measurements of the length and width of an object, it can be crucial to use rotated bounding boxes that align with the orientation of the object. This approach enables more accurate and meaningful analysis of the objects within the image. Image source
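
Because scenes are typically far larger than the input size a detector expects, a common pattern is to tile the image with overlap, detect per tile, shift boxes back into scene coordinates, and merge duplicates with non-maximum suppression. A minimal sketch, assuming a torchvision-style detector whose output dictionary contains 'boxes' and 'scores':

```python
import torch
from torchvision.ops import nms

def tile_image(image, tile=1024, overlap=128):
    """Yield (x0, y0, crop) so a detector can run tile-by-tile on a huge scene."""
    _, h, w = image.shape
    step = tile - overlap
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            yield x0, y0, image[:, y0:y0 + tile, x0:x0 + tile]

@torch.no_grad()
def detect_large_scene(model, image, score_thresh=0.5, iou_thresh=0.3):
    all_boxes, all_scores = [], []
    for x0, y0, crop in tile_image(image):
        out = model([crop])[0]               # assumed torchvision-style output dict
        keep = out["scores"] > score_thresh
        boxes = out["boxes"][keep].clone()
        boxes[:, [0, 2]] += x0               # shift tile coords to scene coords
        boxes[:, [1, 3]] += y0
        all_boxes.append(boxes)
        all_scores.append(out["scores"][keep])
    boxes, scores = torch.cat(all_boxes), torch.cat(all_scores)
    keep = nms(boxes, scores, iou_thresh)    # merge duplicate hits in the overlaps
    return boxes[keep], scores[keep]
```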

Object tracking in videos

  • TCTrack -> Temporal Contexts for Aerial Tracking

  • CFME -> Object Tracking in Satellite Videos by Improved Correlation Filters With Motion Estimations

  • TGraM -> Multi-Object Tracking in Satellite Videos with Graph-Based Multi-Task Modeling

  • satellite_video_mod_groundtruth -> groundtruth on satellite video for evaluating moving object detection algorithm

  • Moving-object-detection-DSFNet -> DSFNet: Dynamic and Static Fusion Network for Moving Object Detection in Satellite Videos

  • HiFT -> Hierarchical Feature Transformer for Aerial Tracking

Object detection with rotated bounding boxes

Oriented bounding boxes (OBB) are polygons representing rotated rectangles. For datasets, check out DOTA & HRSC2016. Start with Yolov8
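
A rotated box is commonly parameterised as (cx, cy, w, h, angle); the sketch below converts that parameterisation into the four polygon corners (here the angle is in radians, counter-clockwise, though conventions vary between datasets):

```python
import numpy as np

def obb_to_corners(cx, cy, w, h, angle_rad):
    """Corners of a rotated box given centre, size and rotation (radians, CCW)."""
    # Box corners in the local frame, centred at the origin
    local = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                      [w / 2,  h / 2], [-w / 2,  h / 2]])
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    return local @ rot.T + np.array([cx, cy])

# A 40x10 box (e.g. a ship) centred at (100, 200), rotated 30 degrees
print(obb_to_corners(100, 200, 40, 10, np.deg2rad(30)))
```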

  • mmrotate -> Rotated Object Detection Benchmark, with pretrained models and function for inferencing on very large images

  • OBBDetection -> an oriented object detection library, which is based on MMdetection

  • rotate-yolov3 -> Rotation object detection implemented with yolov3. Also see yolov3-polygon

  • DRBox -> for detection tasks where the objects are orientated arbitrarily, e.g. vehicles, ships and airplanes

  • s2anet -> Align Deep Features for Oriented Object Detection

  • CFC-Net -> A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images

  • ReDet -> A Rotation-equivariant Detector for Aerial Object Detection

  • BBAVectors-Oriented-Object-Detection -> Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors

  • CSL_RetinaNet_Tensorflow -> Arbitrary-Oriented Object Detection with Circular Smooth Label

  • r3det-on-mmdetection -> R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object

  • R-DFPN_FPN_Tensorflow -> Rotation Dense Feature Pyramid Networks (Tensorflow)

  • R2CNN_Faster-RCNN_Tensorflow -> Rotational region detection based on Faster-RCNN

  • Rotated-RetinaNet -> implemented in pytorch, it supports the following datasets: DOTA, HRSC2016, ICDAR2013, ICDAR2015, UCAS-AOD, NWPU VHR-10, VOC2007

  • OBBDet_Swin -> sixth place winning solution in the 2021 Gaofen Challenge

  • CG-Net -> Learning Calibrated-Guidance for Object Detection in Aerial Images

  • OrientedRepPoints_DOTA -> Oriented RepPoints + Swin Transformer/ReResNet

  • yolov5_obb -> yolov5 + Oriented Object Detection

  • How to Train YOLOv5 OBB -> YOLOv5 OBB tutorial and YOLOv5 OBB notebook

  • OHDet_Tensorflow -> can be applied to rotation detection and object heading detection

  • Seodore -> framework maintaining recent updates of mmdetection

  • Rotation-RetinaNet-PyTorch -> oriented detector Rotation-RetinaNet implementation on Optical and SAR ship dataset

  • AIDet -> an open source object detection in aerial image toolbox based on MMDetection

  • rotation-yolov5 -> rotation detection based on yolov5

  • ShipDetection -> Ship Detection in HR Optical Remote Sensing Images via Rotated Bounding Box, based on Faster R-CNN and ORN, uses caffe

  • SLRDet -> project based on mmdetection to reimplement RRPN and use the model Faster R-CNN OBB

  • AxisLearning -> Axis Learning for Orientated Objects Detection in Aerial Images

  • Detection_and_Recognition_in_Remote_Sensing_Image -> uses PaNet for detection and recognition in remote sensing images, implemented in MXNet

  • DrBox-v2-tensorflow -> tensorflow implementation of DrBox-v2 which is an improved detector with rotatable boxes for target detection in remote sensing images

  • Rotation-EfficientDet-D0 -> A PyTorch Implementation Rotation Detector based EfficientDet Detector, applied to custom rotation vehicle datasets

  • DODet -> Dual alignment for oriented object detection, uses DOTA dataset

  • GF-CSL -> Gaussian Focal Loss: Learning Distribution Polarized Angle Prediction for Rotated Object Detection in Aerial Images

  • simplified_rbox_cnn -> RBox-CNN: rotated bounding box based CNN for ship detection in remote sensing image. Uses Tensorflow object detection API

  • Polar-Encodings -> Learning Polar Encodings for Arbitrary-Oriented Ship Detection in SAR Images

  • R-CenterNet -> detector for rotated-object based on CenterNet

  • piou -> Oriented Object Detection; IoU Loss, applied to DOTA dataset

  • DAFNe -> A One-Stage Anchor-Free Approach for Oriented Object Detection

  • AProNet -> Detecting objects with precise orientation from aerial images. Applied to datasets DOTA and HRSC2016

  • UCAS-AOD-benchmark -> A benchmark of UCAS-AOD dataset

  • RotateObjectDetection -> based on Ultralytics/yolov5, with adjustments to enable rotate prediction boxes. Also see PolygonObjectDetection

  • AD-Toolbox -> Aerial Detection Toolbox based on MMDetection and MMRotate, with support for more datasets

  • GGHL -> A General Gaussian Heatmap Label Assignment for Arbitrary-Oriented Object Detection

  • NPMMR-Det -> A Novel Nonlocal-Aware Pyramid and Multiscale Multitask Refinement Detector for Object Detection in Remote Sensing Images

  • AOPG -> Anchor-Free Oriented Proposal Generator for Object Detection

  • SE2-Det -> Semantic-Edge-Supervised Single-Stage Detector for Oriented Object Detection in Remote Sensing Imagery

  • OrientedRepPoints -> Oriented RepPoints for Aerial Object Detection

  • TS-Conv -> Task-wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images

  • FCOSR -> A Simple Anchor-free Rotated Detector for Aerial Object Detection. This implementation is modified from mmdetection. See also TensorRT_Inference

  • OBB_Detection -> Finalist's solution in the track of Oriented Object Detection in Remote Sensing Images, 2022 Guangdong-Hong Kong-Macao Greater Bay Area International Algorithm Competition

  • sam-mmrotate -> SAM (Segment Anything Model) for generating rotated bounding boxes with MMRotate, which is a comparison method of H2RBox-v2

  • mmrotate-dcfl -> Dynamic Coarse-to-Fine Learning for Oriented Tiny Object Detection

  • h2rbox-mmrotate -> Horizontal Box Annotation is All You Need for Oriented Object Detection

  • Spatial-Transform-Decoupling -> Spatial Transform Decoupling for Oriented Object Detection

  • ARS-DETR -> Aspect Ratio Sensitive Oriented Object Detection with Transformer

  • CFINet -> Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning. Introduces SODA-A dataset

Object detection enhanced by super resolution

Salient object detection

Detecting the most noticeable or important object in a scene

  • ACCoNet -> Adjacent Context Coordination Network for Salient Object Detection in Optical Remote Sensing Images

  • MCCNet -> Multi-Content Complementation Network for Salient Object Detection in Optical Remote Sensing Images

  • CorrNet -> Lightweight Salient Object Detection in Optical Remote Sensing Images via Feature Correlation

  • Reading list for deep learning based Salient Object Detection in Optical Remote Sensing Images

  • ORSSD-dataset -> salient object detection dataset

  • EORSSD-dataset -> Extended Optical Remote Sensing Saliency Detection (EORSSD) Dataset

  • DAFNet_TIP20 -> Dense Attention Fluid Network for Salient Object Detection in Optical Remote Sensing Images

  • EMFINet -> Edge-Aware Multiscale Feature Integration Network for Salient Object Detection in Optical Remote Sensing Images

  • ERPNet -> Edge-guided Recurrent Positioning Network for Salient Object Detection in Optical Remote Sensing Images

  • FSMINet -> Fully Squeezed Multi-Scale Inference Network for Fast and Accurate Saliency Detection in Optical Remote Sensing Images

  • AGNet -> AGNet: Attention Guided Network for Salient Object Detection in Optical Remote Sensing Images

  • MSCNet -> A lightweight multi-scale context network for salient object detection in optical remote sensing images

  • GPnet -> Global Perception Network for Salient Object Detection in Remote Sensing Images

  • SeaNet -> Lightweight Salient Object Detection in Optical Remote Sensing Images via Semantic Matching and Edge Alignment

  • GeleNet -> Salient Object Detection in Optical Remote Sensing Images Driven by Transformer

Object detection - Buildings, rooftops & solar panels

Object detection - Ships, boats, vessels & wake

Object detection - Cars, vehicles & trains

Object detection - Planes & aircraft

Object detection - Infrastructure & utilities

Object detection - Oil storage tank detection

Oil is stored in tanks at many points between extraction and sale, and the volume of oil in storage is an important economic indicator.

Object detection - Animals

A variety of techniques can be used to count animals, including object detection and instance segmentation. For convenience they are all listed here:

Object detection - Miscellaneous

Object counting

When the object count, but not its shape, is required, U-Net can be used to treat this as an image-to-image translation problem.
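
One common density-based formulation (used by e.g. the CSRNet-style models listed below) regresses a Gaussian density map whose sum equals the object count. A minimal sketch of building such a training target from point annotations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def points_to_density(points, shape, sigma=4):
    """Turn point annotations (row, col) into a Gaussian density map.
    A network trained to regress this map gives the count as density.sum()."""
    density = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        density[int(r), int(c)] += 1.0
    return gaussian_filter(density, sigma=sigma)

# Three annotated animals in a 256x256 chip
target = points_to_density([(40, 50), (100, 120), (200, 30)], (256, 256))
print(round(target.sum()))  # ~3: Gaussian blurring preserves the total count
```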

  • centroid-unet -> Centroid-UNet is a deep neural network model to detect centroids from satellite images

  • cownter_strike -> counting cows, located with point-annotations, two models: CSRNet (a density-based method) & LCFCN (a detection-based method)

  • DO-U-Net -> an effective approach for when the size of an object needs to be known, as well as the number of objects in the image, initially created to segment and count Internally Displaced People (IDP) camps in Afghanistan

  • Cassava Crop Counting

  • Counting from Sky -> A Large-scale Dataset for Remote Sensing Object Counting and A Benchmark Method

  • PSGCNet -> PSGCNet: A Pyramidal Scale and Global Context Guided Network for Dense Object Counting in Remote Sensing Images

  • psgcnet -> A Pyramidal Scale and Global Context Guided Network for Dense Object Counting in Remote-Sensing Images

Regression


Regression prediction of wind speed.

Regression in remote sensing involves predicting continuous variables such as wind speed, tree height, or soil moisture from an image. Both classical machine learning and deep learning approaches can be used to accomplish this task. Classical machine learning utilizes feature engineering to extract numerical values from the input data, which are then used as input for a regression algorithm like linear regression. On the other hand, deep learning typically employs a convolutional neural network (CNN) to process the image data, followed by a fully connected neural network (FCNN) for regression. The FCNN is trained to map the input image to the desired output, providing predictions for the continuous variables of interest. Image source
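
A minimal sketch of the deep learning approach: a CNN backbone whose classification head is swapped for a small regression head trained with MSE loss (dummy tensors stand in for real imagery and wind speed labels):

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet backbone with a single continuous output (e.g. wind speed in m/s)
backbone = models.resnet18(weights=None)
backbone.fc = nn.Sequential(
    nn.Linear(backbone.fc.in_features, 64),
    nn.ReLU(),
    nn.Linear(64, 1),                    # one scalar per image
)

images = torch.randn(8, 3, 224, 224)     # dummy batch of image chips
targets = torch.rand(8, 1) * 50          # dummy wind speed labels

loss = nn.MSELoss()(backbone(images), targets)
loss.backward()
```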

Cloud detection & removal


(left) False colour image and (right) a cloud & shadow mask.

Clouds are a major issue in remote sensing images as they can obscure the underlying ground features. This hinders the accuracy and effectiveness of remote sensing analysis, as the obscured regions cannot be properly interpreted. In order to address this challenge, various techniques have been developed to detect clouds in remote sensing images. Both classical algorithms and deep learning approaches can be employed for cloud detection. Classical algorithms typically use threshold-based techniques and hand-crafted features to identify cloud pixels. However, these techniques can be limited in their accuracy and are sensitive to changes in image appearance and cloud structure. On the other hand, deep learning approaches leverage the power of convolutional neural networks (CNNs) to accurately detect clouds in remote sensing images. These models are trained on large datasets of remote sensing images, allowing them to learn and generalize the unique features and patterns of clouds. The generated cloud mask can be used to identify the cloud pixels and eliminate them from further analysis or, alternatively, cloud inpainting techniques can be used to fill in the gaps left by the clouds. This approach helps to improve the accuracy of remote sensing analysis and provides a clearer view of the ground, even in the presence of clouds. Image adapted from the paper 'Refined UNet Lite: End-to-End Lightweight Network for Edge-precise Cloud Detection'
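
To illustrate the classical end of the spectrum, below is a toy threshold detector that exploits the fact that clouds are bright and spectrally flat; the thresholds are illustrative assumptions and are scene-dependent, which is precisely the limitation that motivates CNN-based approaches:

```python
import numpy as np

def crude_cloud_mask(blue, green, red, brightness=0.25, whiteness=0.15):
    """Toy threshold detector: clouds are bright and spectrally flat.
    Bands are surface reflectance in [0, 1]; thresholds are assumed values
    and will need tuning per sensor and per scene."""
    stack = np.stack([blue, green, red])
    bright = stack.mean(axis=0) > brightness                  # high overall reflectance
    flat = (stack.max(axis=0) - stack.min(axis=0)) < whiteness  # low spectral contrast
    return bright & flat  # boolean mask, True = likely cloud
```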

Change detection


(left) Initial and (middle) after some development, with (right) the change highlighted.

Change detection is a vital component of remote sensing analysis, enabling the monitoring of landscape changes over time. This technique can be applied to identify a wide range of changes, including land use changes, urban development, coastal erosion, and deforestation. Change detection can be performed on a pair of images taken at different times, or by analyzing multiple images collected over a period of time. It is important to note that while change detection is primarily used to detect changes in the landscape, it can also be influenced by the presence of clouds and shadows. These dynamic elements can alter the appearance of the image, leading to false positives in change detection results. Therefore, it is essential to consider the impact of clouds and shadows on change detection analysis, and to employ appropriate methods to mitigate their influence. Image source
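
Many of the repositories below are variants of the siamese pattern: a shared encoder embeds both images, the features are differenced, and a head predicts per-pixel change. A minimal sketch:

```python
import torch
import torch.nn as nn

class SiameseDiff(nn.Module):
    """Minimal siamese change detector: shared encoder, feature differencing,
    then a per-pixel change/no-change head upsampled back to input size."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),          # one logit per pixel: changed vs unchanged
        )

    def forward(self, t1, t2):
        f1, f2 = self.encoder(t1), self.encoder(t2)   # encoder weights are shared
        return self.head(torch.abs(f1 - f2))

model = SiameseDiff()
before, after = torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128)
change_logits = model(before, after)   # (2, 1, 128, 128)
```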

  • awesome-remote-sensing-change-detection lists many datasets and publications

  • Change-Detection-Review -> A review of change detection methods, including code and open data sets for deep learning

  • Change Detection using Siamese Networks

  • STANet -> STANet for remote sensing image change detection

  • UNet-based-Unsupervised-Change-Detection -> A convolutional neural network (CNN) for semantic segmentation is implemented to detect the changes between the images, as well as classify the changes into the correct semantic class

  • BIT_CD -> Official Pytorch Implementation of Remote Sensing Image Change Detection with Transformers

  • Unstructured-change-detection-using-CNN

  • Siamese neural network to detect changes in aerial images -> uses Keras and VGG16 architecture

  • Change Detection in 3D: Generating Digital Elevation Models from Dove Imagery

  • QGIS plugin for applying change detection algorithms on high resolution satellite imagery

  • LamboiseNet -> Master thesis about change detection in satellite imagery using Deep Learning

  • Fully Convolutional Siamese Networks for Change Detection

  • Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks -> used the Onera Satellite Change Detection (OSCD) dataset

  • IAug_CDNet -> Official Pytorch Implementation of Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images

  • dpm-rnn-public -> Code implementing a damage mapping method combining satellite data with deep learning

  • SenseEarth2020-ChangeDetection -> 1st place solution to the Satellite Image Change Detection Challenge hosted by SenseTime; predictions of five HRNet-based segmentation models are ensembled, serving as pseudo labels of unchanged areas

  • KPCAMNet -> Python implementation of the paper Unsupervised Change Detection in Multi-temporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network

  • CDLab -> benchmarking deep learning-based change detection methods.

  • Siam-NestedUNet -> SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images

  • SUNet-change_detection -> Implementation of paper SUNet: Change Detection for Heterogeneous Remote Sensing Images from Satellite and UAV Using a Dual-Channel Fully Convolution Network

  • Self-supervised Change Detection in Multi-view Remote Sensing Images

  • MFPNet -> Remote Sensing Change Detection Based on Multidirectional Adaptive Feature Fusion and Perceptual Similarity

  • GitHub for the DIUx xView Detection Challenge -> The xView2 Challenge focuses on automating the process of assessing building damage after a natural disaster

  • DASNet -> Dual attentive fully convolutional siamese networks for change detection of high-resolution satellite images

  • Self-Attention for Raw Optical Satellite Time Series Classification

  • planet-movement -> Find and process Planet image pairs to highlight object movement

  • temporal-cluster-matching -> detecting change in structure footprints from time series of remotely sensed imagery

  • autoRIFT -> fast and intelligent algorithm for finding the pixel displacement between two images

  • DSAMNet -> A Deeply Supervised Attention Metric-Based Network and an Open Aerial Image Dataset for Remote Sensing Change Detection

  • SRCDNet -> Super-resolution-based Change Detection Network with Stacked Attention Module for Images with Different Resolutions. SRCDNet is designed to learn and predict change maps from bi-temporal images with different resolutions

  • Land-Cover-Analysis -> Land Cover Change Detection using Satellite Image Segmentation

  • A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images

  • Satellite-Image-Alignment-Differencing-and-Segmentation

  • Change Detection in Multi-temporal Satellite Images -> uses Principal Component Analysis (PCA) and K-means clustering

  • Unsupervised Change Detection Algorithm using PCA and K-Means Clustering -> in Matlab but has paper

  • ChangeFormer -> A Transformer-Based Siamese Network for Change Detection. Uses transformer architecture to address the limitations of CNN in handling multi-scale long-range details. Demonstrates that ChangeFormer captures much finer details compared to the other SOTA methods, achieving better performance on benchmark datasets

  • Heterogeneous_CD -> Heterogeneous Change Detection in Remote Sensing Images

  • ChangeDetectionProject -> Trying out Active Learning with deep CNNs for change detection on remote sensing data

  • DSFANet -> Unsupervised Deep Slow Feature Analysis for Change Detection in Multi-Temporal Remote Sensing Images

  • siamese-change-detection -> Targeted synthesis of multi-temporal remote sensing images for change detection using siamese neural networks

  • Bi-SRNet -> Bi-Temporal Semantic Reasoning for the Semantic Change Detection in HR Remote Sensing Images

  • SiROC -> Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images. Applied to Sentinel-2 and high-resolution Planetscope imagery on four datasets

  • DSMSCN -> Tensorflow implementation for Change Detection in Multi-temporal VHR Images Based on Deep Siamese Multi-scale Convolutional Neural Networks

  • RaVAEn -> a lightweight, unsupervised approach for change detection in satellite data based on Variational Auto-Encoders (VAEs) with the specific purpose of on-board deployment. It flags changed areas to prioritise for downlink, shortening the response time

  • SemiCD -> Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images. Achieves the performance of supervised CD even with access to as little as 10% of the annotated training data

  • FCCDN_pytorch -> FCCDN: Feature Constraint Network for VHR Image Change Detection. Uses the LEVIR-CD building change detection dataset

  • INLPG_Python -> Structure Consistency based Graph for Unsupervised Change Detection with Homogeneous and Heterogeneous Remote Sensing Images

  • NSPG_Python -> Nonlocal patch similarity based heterogeneous remote sensing change detection

  • LGPNet-BCD -> Building Change Detection for VHR Remote Sensing Images via Local-Global Pyramid Network and Cross-Task Transfer Learning Strategy

  • DS_UNet -> Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net, uses Onera Satellite Change Detection dataset

  • SiameseSSL -> Urban change detection with a Dual-Task Siamese network and semi-supervised learning. Uses SpaceNet 7 dataset

  • CD-SOTA-methods -> Remote sensing change detection: State-of-the-art methods and available datasets

  • multimodalCD_ISPRS21 -> Fusing Multi-modal Data for Supervised Change Detection

  • Unsupervised-CD-in-SITS-using-DL-and-Graphs -> Unsupervised Change Detection Analysis in Satellite Image Time Series using Deep Learning Combined with Graph-Based Approaches

  • LSNet -> Extremely Light-Weight Siamese Network For Change Detection in Remote Sensing Image

  • Change-Detection-in-Remote-Sensing-Images -> using PCA & K-means

  • End-to-end-CD-for-VHR-satellite-image -> End-to-End Change Detection for High Resolution Satellite Images Using Improved UNet++

  • Semantic-Change-Detection -> SCDNET: A novel convolutional network for semantic change detection in high resolution optical remote sensing imagery

  • ERCNN-DRS_urban_change_monitoring -> Neural Network-Based Urban Change Monitoring with Deep-Temporal Multispectral and SAR Remote Sensing Data

  • EGRCNN -> Edge-guided Recurrent Convolutional Neural Network for Multi-temporal Remote Sensing Image Building Change Detection

  • Unsupervised-Remote-Sensing-Change-Detection -> An Unsupervised Remote Sensing Change Detection Method Based on Multiscale Graph Convolutional Network and Metric Learning

  • CropLand-CD -> A CNN-transformer Network with Multi-scale Context Aggregation for Fine-grained Cropland Change Detection

  • contrastive-surface-image-pretraining -> Supervising Remote Sensing Change Detection Models with 3D Surface Semantics

  • dcvaVHROptical -> Unsupervised Deep Change Vector Analysis for Multiple-Change Detection in VHR Images

  • hyperdimensionalCD -> Change Detection in Hyperdimensional Images Using Untrained Models

  • FCD-GAN-pytorch -> Fully Convolutional Change Detection Framework with Generative Adversarial Network (FCD-GAN) is a framework for change detection in multi-temporal remote sensing images

  • DARNet-CD -> A Densely Attentive Refinement Network for Change Detection Based on Very-High-Resolution Bitemporal Remote Sensing Images

  • xView2_Vulcan -> Damage assessment using pre and post orthoimagery. Modified + productionized model based on the first-place model from the xView2 challenge.

  • ESCNet -> An End-to-End Superpixel-Enhanced Change Detection Network for Very-High-Resolution Remote Sensing Images

  • ForestCoverChange -> Detecting and Predicting Forest Cover Change in Pakistani Areas Using Remote Sensing Imagery

  • deforestation-detection -> Deep Learning for High-Frequency Change Detection in Ukrainian Forest Ecosystem with Sentinel-2

  • forest_change_detection -> forest change segmentation with time-dependent models, including Siamese, UNet-LSTM, UNet-diff, UNet3D models

  • SentinelClearcutDetection -> Scripts for deforestation detection on the Sentinel-2 Level-A images

  • clearcut_detection -> research & web-service for clearcut detection

  • CDRL -> Unsupervised Change Detection Based on Image Reconstruction Loss

  • ddpm-cd -> Remote Sensing Change Detection (Segmentation) using Denoising Diffusion Probabilistic Models

  • Remote-sensing-time-series-change-detection -> Graph-based block-level urban change detection using Sentinel-2 time series

  • austin-ml-change-detection-demo -> A change detection demo for the Austin area using a pre-trained PyTorch model scaled with Dask on Planet imagery

  • dfc2021-msd-baseline -> Multitemporal Semantic Change Detection track of the 2021 IEEE GRSS Data Fusion Competition

  • CorrFusionNet -> Multi-Temporal Scene Classification and Scene Change Detection with Correlation based Fusion

  • ChangeDetectionPCAKmeans -> Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering.

  • IRCNN -> IRCNN: An Irregular-Time-Distanced Recurrent Convolutional Neural Network for Change Detection in Satellite Time Series

  • UTRNet -> An Unsupervised Time-Distance-Guided Convolutional Recurrent Network for Change Detection in Irregularly Collected Images

  • open-cd -> an open source change detection toolbox based on a series of open source general vision task tools

  • Tiny_model_4_CD -> TINYCD: A (Not So) Deep Learning Model For Change Detection. Uses LEVIR-CD & WHU-CD datasets

  • FHD -> Feature Hierarchical Differentiation for Remote Sensing Image Change Detection

  • Change detection with Raster Vision -> blog post with Colab notebook

  • building-expansion -> Enhancing Environmental Enforcement with Near Real-Time Monitoring: Likelihood-Based Detection of Structural Expansion of Intensive Livestock Farms

  • SaDL_CD -> Semantic-aware Dense Representation Learning for Remote Sensing Image Change Detection

  • EGCTNet_pytorch -> Building Change Detection Based on an Edge-Guided Convolutional Neural Network Combined with a Transformer

  • S2-cGAN -> S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images

  • A-loss-function-for-change-detection -> UAL: Unchanged Area Loss-Function for Change Detection Networks

  • IEEE_TGRS_SSTFormer -> Spectral–Spatial–Temporal Transformers for Hyperspectral Image Change Detection

  • DMINet -> Change Detection on Remote Sensing Images Using Dual-Branch Multilevel Intertemporal Network

  • AFCF3D-Net -> Adjacent-level Feature Cross-Fusion with 3D CNN for Remote Sensing Image Change Detection

  • DSAHRNet -> A Deeply Attentive High-Resolution Network for Change Detection in Remote Sensing Images

  • RDPNet -> RDP-Net: Region Detail Preserving Network for Change Detection

  • BGAAE_CD -> Bipartite Graph Attention Autoencoders for Unsupervised Change Detection Using VHR Remote Sensing Images

  • Unsupervised-Change-Detection -> Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering

  • Metric-CD -> Deep Metric Learning for Unsupervised Change Detection in Remote Sensing Images

  • HANet-CD -> HANet: A hierarchical attention network for change detection with bi-temporal very-high-resolution remote sensing images

  • SRGCAE -> Unsupervised Multimodal Change Detection Based on Structural Relationship Graph Representation Learning

  • change_detection_onera_baselines -> Siamese version of U-Net baseline model

  • SiamCRNN -> Change Detection in Multisource VHR Images via Deep Siamese Convolutional Multiple-Layers Recurrent Neural Network

  • Graph-based methods for change detection in remote sensing images -> Graph Learning Based on Signal Smoothness Representation for Homogeneous and Heterogeneous Change Detection

  • TransUNetplus2 -> TransU-Net++: Rethinking attention gated TransU-Net for deforestation mapping. Uses the Amazon and Atlantic forest dataset

  • AR-CDNet -> Towards Accurate and Reliable Change Detection of Remote Sensing Images via Knowledge Review and Online Uncertainty Estimation

  • CICNet -> Compact Intertemporal Coupling Network for Remote Sensing Change Detection

  • BGINet -> Remote Sensing Image Change Detection with Graph Interaction

  • DSNUNet -> DSNUNet: An Improved Forest Change Detection Network by Combining Sentinel-1 and Sentinel-2 Images

  • Forest-CD -> Forest-CD: Forest Change Detection Network Based on VHR Images

  • S3Net_CD -> Superpixel-Guided Self-Supervised Learning Network for Multitemporal Image Change Detection

  • T-UNet -> T-UNet: Triplet UNet for Change Detection in High-Resolution Remote Sensing Images

  • UCDFormer -> UCDFormer: Unsupervised Change Detection Using a Transformer-driven Image Translation

  • satellite-change-events -> Change Event Dataset for Discovery from Spatio-temporal Remote Sensing Imagery, uses Sentinel 2 CaiRoad & CalFire datasets

  • CACo -> Change-Aware Sampling and Contrastive Learning for Satellite Images

  • LightCDNet -> LightCDNet: Lightweight Change Detection Network Based on VHR Images

  • OpenMineChangeDetection -> Characterising Open Cast Mining from Satellite Data (Sentinel 2), implements TinyCD, LSNet & DDPM-CD

  • multi-task-L-UNet -> A Deep Multi-Task Learning Framework Coupling Semantic Segmentation and Fully Convolutional LSTM Networks for Urban Change Detection. Applied to SpaceNet7 dataset

  • urban_change_detection -> Detecting Urban Changes With Recurrent Neural Networks From Multitemporal Sentinel-2 Data. fabric is another implementation

  • UNetLSTM -> Detecting Urban Changes With Recurrent Neural Networks From Multitemporal Sentinel-2 Data

  • SDACD -> An End-to-end Supervised Domain Adaptation Framework for Cross-domain Change Detection

  • CycleGAN-Based-DA-for-CD -> CycleGAN-based Domain Adaptation for Deforestation Detection

  • CGNet-CD -> Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery

  • PA-Former -> PA-Former: Learning Prior-Aware Transformer for Remote Sensing Building Change Detection

  • AERNet -> AERNet: An Attention-Guided Edge Refinement Network and a Dataset for Remote Sensing Building Change Detection (HRCUS-CD)

  • S1GFlood-Detection -> DAM-Net: Global Flood Detection from SAR Imagery Using Differential Attention Metric-Based Vision Transformers. Includes S1GFloods dataset

  • Changen -> Scalable Multi-Temporal Remote Sensing Change Data Generation via Simulating Stochastic Change Process

  • TTP -> Time Travelling Pixels: Bitemporal Features Integration with Foundation Model for Remote Sensing Image Change Detection

  • SAM-CD -> Adapting Segment Anything Model for Change Detection in HR Remote Sensing Images

  • SCanNet -> Joint Spatio-Temporal Modeling for Semantic Change Detection in Remote Sensing Images

  • ELGC-Net -> Efficient Local-Global Context Aggregation for Remote Sensing Change Detection

  • Official_Remote_Sensing_Mamba -> RS-Mamba for Large Remote Sensing Image Dense Prediction

  • ChangeMamba -> Remote Sensing Change Detection with Spatio-Temporal State Space Model

  • ClearSCD -> Comprehensively leveraging semantics and change relationships for semantic change detection in high spatial resolution remote sensing imagery

  • RSCaMa -> Remote Sensing Image Change Captioning with State Space Model

  • ChangeBind -> A Hybrid Change Encoder for Remote Sensing Change Detection

  • OctaveNet -> An efficient multi-scale pseudo-siamese network for change detection in remote sensing images

  • MaskCD -> A Remote Sensing Change Detection Network Based on Mask Classification

  • I3PE -> Exchange means change: an unsupervised single-temporal change detection framework based on intra- and inter-image patch exchange

Time series


Prediction of the next image in a series.

The analysis of time series observations in remote sensing data has numerous applications, including enhancing the accuracy of classification models and forecasting future patterns and events. Image source. Note: since classifying crops and predicting crop yield are such prominent use cases for time series data, these tasks have dedicated sections after this one.

Crop classification


(left) false colour image and (right) the crop map.

Crop classification in remote sensing is the identification and mapping of different crops in images or sequences of images. It aims to provide insight into the distribution and composition of crops in a specific area, with applications that include monitoring crop growth and evaluating crop damage. Both traditional machine learning methods, such as decision trees and support vector machines, and deep learning techniques, such as convolutional neural networks (CNNs), can be used to perform crop classification. The optimal method depends on the size and complexity of the dataset, the desired accuracy, and the available computational resources. However, the success of crop classification relies heavily on the quality and resolution of the input data, as well as the availability of labeled training data. Image source: High resolution satellite imaging sensors for precision agriculture by Chenghai Yang

Crop yield & vegetation forecasting


Wheat yield data. Blue vertical lines denote observation dates.

Crop yield is a crucial metric in agriculture, as it determines the productivity and profitability of a farm. It is defined as the amount of crops produced per unit area of land and is influenced by a range of factors including soil fertility, weather conditions, the type of crop grown, and pest and disease control. By utilizing time series of satellite images, it is possible to perform accurate crop type classification and take advantage of the seasonal variations specific to certain crops. This information can be used to optimize crop management practices and ultimately improve crop yield. However, to achieve accurate results, it is essential to consider the quality and resolution of the input data, as well as the availability of labeled training data. Appropriate pre-processing and feature extraction techniques must also be employed. Image source.

Wealth and economic activity


COVID-19 impacts on human and economic activities.

The traditional approach of collecting economic data through ground surveys is a time-consuming and resource-intensive process. However, advancements in satellite technology and machine learning offer an alternative solution. By utilizing satellite imagery and applying machine learning algorithms, it is possible to obtain accurate and current information on economic activity with greater efficiency. This shift towards satellite imagery-based forecasting not only provides cost savings but also offers a wider and more comprehensive perspective of economic activity. As a result, it is poised to become a valuable asset for both policymakers and businesses. Image source.

Disaster response


Detecting buildings destroyed in a disaster.

Remote sensing images are used in disaster response to identify and assess damage to an area. This imagery can be used to detect buildings that are damaged or destroyed, identify roads and road networks that are blocked, determine the size and shape of a disaster area, and identify areas that are at risk of flooding. Remote sensing images can also be used to detect and monitor the spread of forest fires and monitor vegetation health. Also checkout the sections on change detection and water/fire/building segmentation. Image source.

  • DisaVu -> combines building & damage detection and provides an app for viewing predictions

  • Soteria -> uses machine learning with satellite imagery to map natural disaster impacts for faster emergency response

  • DisasterHack -> Wildfire Mitigation: Computer Vision Identification of Hazard Fuels Using Landsat

  • forestcasting -> Forest fire prediction powered by analytics

  • Machine Learning-based Damage Assessment for Disaster Relief on Google AI blog -> uses object detection to locate buildings, then a classifier to determine if a building is damaged. Challenge of generalising due to small dataset

  • hurricane_damage -> Post-hurricane structure damage assessment based on aerial imagery with CNN

  • rescue -> code of the paper: Attention to fires: multi-channel deep-learning models for wildfire severity prediction

  • Disaster-Classification -> A disaster classification model to predict the type of disaster given an input image

Super-resolution


Super resolution using multiple low resolution images as input.

Super-resolution is a technique aimed at improving the resolution of an imaging system. This process can be applied prior to other image processing steps to increase the visibility of small objects or boundaries. Despite its potential benefits, the use of super-resolution is controversial due to the possibility of introducing artifacts that could be mistaken for real features. Super-resolution techniques are broadly categorized into two groups: single image super-resolution (SISR) and multi-image super-resolution (MISR). SISR focuses on enhancing the resolution of a single image, while MISR utilizes multiple images of the same scene to create a high-resolution output. Each approach has its own advantages and limitations, and the choice of method depends on the specific application and desired outcome. Image source.
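
A minimal SISR sketch in the spirit of SRCNN: upsample with bicubic interpolation, then learn a residual correction that restores high-frequency detail:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNStyle(nn.Module):
    """SISR sketch: bicubic upsample, then a learned residual refinement."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, lr):
        up = F.interpolate(lr, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        return up + self.net(up)   # predicted high-frequency detail added back

lr_patch = torch.randn(1, 3, 64, 64)
sr_patch = SRCNNStyle()(lr_patch)  # (1, 3, 256, 256)
```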

Multi image super-resolution (MISR)

Note that nearly all the MISR publications resulted from the PROBA-V Super Resolution competition

  • deepsum -> Deep neural network for Super-resolution of Unregistered Multitemporal images (ESA PROBA-V challenge)

  • 3DWDSRNet -> Satellite Image Multi-Frame Super Resolution (MISR) Using 3D Wide-Activation Neural Networks

  • RAMS -> Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks

  • TR-MISR -> Transformer-based MISR framework for the PROBA-V super-resolution challenge. With paper

  • HighRes-net -> Pytorch implementation of HighRes-net, a neural network for multi-frame super-resolution, trained and tested on the European Space Agency’s Kelvin competition

  • ProbaVref -> Repurposing the Proba-V challenge for reference-aware super resolution

  • The missing ingredient in deep multi-temporal satellite image super-resolution -> Permutation invariance harnesses the power of ensembles in a single model, with repo piunet

  • MSTT-STVSR -> Space-time Super-resolution for Satellite Video: A Joint Framework Based on Multi-Scale Spatial-Temporal Transformer, JAG, 2022

  • Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites

  • DDRN -> Deep Distillation Recursive Network for Video Satellite Imagery Super-Resolution

  • worldstrat -> SISR and MISR implementations of SRCNN

  • MISR-GRU -> Pytorch implementation of MISR-GRU, a deep neural network for multi image super-resolution (MISR), for ProbaV Super Resolution Competition

  • MSDTGP -> Satellite Video Super-Resolution via Multiscale Deformable Convolution Alignment and Temporal Grouping Projection

  • proba-v-super-resolution-challenge -> Solution to ESA's satellite imagery super resolution challenge

  • PROBA-V-Super-Resolution -> solution using a custom deep learning architecture

  • satlas-super-resolution -> Satlas Super Resolution: the model is an adaptation of ESRGAN, with changes that allow the input to be a time series of Sentinel-2 images.

  • MISR Remote Sensing SRGAN -> PyTorch SRGAN for RGB Remote Sensing imagery, performing both SISR and MISR. MISR implementation inspired by RecursiveNet (HighResNet). Includes pretrained checkpoints.

  • MISR-S2 -> Cross-sensor super-resolution of irregularly sampled Sentinel-2 time series

Single image super-resolution (SISR)

Super-resolution - Miscellaneous

Pansharpening


Pansharpening example with a resolution difference of factor 4.

Pansharpening is a data fusion method that merges the high spatial detail from a high-resolution panchromatic image with the rich spectral information from a lower-resolution multispectral image. The result is a single, high-resolution color image that retains both the sharpness of the panchromatic band and the color information of the multispectral bands. This process enhances the spatial resolution while preserving the spectral qualities of the original images. Image source
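
As a concrete example of a classical (non-deep-learning) method, here is a sketch of the Brovey transform, which rescales each upsampled multispectral band by the ratio of the pan band to the mean multispectral intensity:

```python
import numpy as np

def brovey_pansharpen(ms_upsampled, pan):
    """Classic Brovey transform.
    ms_upsampled: (bands, H, W) multispectral bands already resampled to the pan grid.
    pan: (H, W) panchromatic band. All inputs assumed to be reflectance in [0, 1]."""
    intensity = ms_upsampled.mean(axis=0) + 1e-6   # avoid divide-by-zero
    return np.clip(ms_upsampled * (pan / intensity), 0.0, 1.0)
```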

  • Several algorithms are described in the ArcGIS docs, with the simplest being to take the mean of the pan and RGB pixel values.

  • PGCU -> Probability-based Global Cross-modal Upsampling for Pansharpening

  • rio-pansharpen -> pansharpening Landsat scenes

  • Simple-Pansharpening-Algorithms

  • Working-For-Pansharpening -> long list of pansharpening methods and update of Awesome-Pansharpening

  • PSGAN -> A Generative Adversarial Network for Remote Sensing Image Pan-sharpening

  • Pansharpening-by-Convolutional-Neural-Network

  • PBR_filter -> Pansharpening by Background Removal algorithm for sharpening RGB images

  • py_pansharpening -> multiple algorithms implemented in python

  • Deep-Learning-PanSharpening -> deep-learning based pan-sharpening code package, including reimplementations of PNN, MSDCNN, PanNet, TFNet, SRPPNN, and the authors' proposed DIPNet

  • HyperTransformer -> A Textural and Spectral Feature Fusion Transformer for Pansharpening

  • DIP-HyperKite -> Hyperspectral Pansharpening Based on Improved Deep Image Prior and Residual Reconstruction

  • D2TNet -> A ConvLSTM Network with Dual-direction Transfer for Pan-sharpening

  • PanColorGAN-VHR-Satellite-Images -> Rethinking CNN-Based Pansharpening: Guided Colorization of Panchromatic Images via GANs

  • MTL_PAN_SEG -> Multi-task deep learning for satellite image pansharpening and segmentation

  • Z-PNN -> Pansharpening by convolutional neural networks in the full resolution framework

  • GTP-PNet -> GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening

  • UDL -> Dynamic Cross Feature Fusion for Remote Sensing Pansharpening

  • PSData -> A Large-Scale General Pan-sharpening DataSet, which contains PSData3 (QB, GF-2, WV-3) and PSData4 (QB, GF-1, GF-2, WV-2).

  • AFPN -> Adaptive Detail Injection-Based Feature Pyramid Network For Pan-sharpening

  • pan-sharpening -> multiple methods demonstrated for multispectral and panchromatic images

  • PSGan-Family -> PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening

  • PanNet-Landsat -> A Deep Network Architecture for Pan-Sharpening

  • DLPan-Toolbox -> Machine Learning in Pansharpening: A Benchmark, from Shallow to Deep Networks

  • LPPN -> Laplacian pyramid networks: A new approach for multispectral pansharpening

  • S2_SSC_CNN -> Zero-shot Sentinel-2 Sharpening Using A Symmetric Skipped Connection Convolutional Neural Network

  • S2S_UCNN -> Sentinel 2 sharpening using a single unsupervised convolutional neural network with MTF-Based degradation model

  • SSE-Net -> Spatial and Spectral Extraction Network With Adaptive Feature Fusion for Pansharpening

  • UCGAN -> Unsupervised Cycle-consistent Generative Adversarial Networks for Pan-sharpening

  • GCPNet -> When Pansharpening Meets Graph Convolution Network and Knowledge Distillation

  • PanFormer -> PanFormer: a Transformer Based Model for Pan-sharpening

  • Pansharpening -> Pansformers: Transformer-Based Self-Attention Network for Pansharpening

  • Sentinel-2 Band Pan-Sharpening

Image-to-image translation


(left) Sentinel-1 SAR input, (middle) translated to RGB and (right) Sentinel-2 true RGB image for comparison.

Image-to-image translation is a crucial aspect of computer vision that utilizes machine learning models to transform an input image into a new, distinct output image. In the field of remote sensing, it plays a significant role in bridging the gap between different imaging domains, such as converting Synthetic Aperture Radar (SAR) images into RGB (Red Green Blue) images. This technology has a wide range of applications, including improving image quality, filling in missing information, and facilitating cross-domain image analysis and comparison. By leveraging deep learning algorithms, image-to-image translation has become a powerful tool in the arsenal of remote sensing researchers and practitioners. Image source

Data fusion


Illustration of a fusion workflow.

Data fusion is a technique for combining information from different sources such as Synthetic Aperture Radar (SAR), optical imagery, and non-imagery data such as Internet of Things (IoT) sensor data. The integration of diverse data sources enables data fusion to overcome the limitations of individual sources, leading to the creation of models that are more accurate and informative than those constructed from a single source. Image source
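
A minimal sketch of late fusion: one encoder per modality (the Sentinel-2 and Sentinel-1 channel counts below are illustrative assumptions), with features concatenated before a shared classification head:

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Late-fusion baseline: separate encoders for optical and SAR inputs,
    features concatenated before a shared classification head."""
    def __init__(self, n_classes=10):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.optical = encoder(4)   # e.g. Sentinel-2 RGB+NIR (assumed channels)
        self.sar = encoder(2)       # e.g. Sentinel-1 VV+VH (assumed channels)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, opt, sar):
        return self.head(torch.cat([self.optical(opt), self.sar(sar)], dim=1))

model = TwoStreamFusion()
logits = model(torch.randn(4, 4, 64, 64), torch.randn(4, 2, 64, 64))
```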

  • Awesome-Data-Fusion-for-Remote-Sensing

  • UDALN_GRSL -> Deep Unsupervised Blind Hyperspectral and Multispectral Data Fusion

  • CropTypeMapping -> Crop type mapping from optical and radar (Sentinel-1&2) time series using attention-based deep learning

  • Multimodal-Remote-Sensing-Toolkit -> uses Hyperspectral and LiDAR Data

  • Aerial-Template-Matching -> development of an algorithm for template Matching on aerial imagery applied to UAV dataset

  • DS_UNet -> Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net, uses Onera Satellite Change Detection dataset

  • DDA_UrbanExtraction -> Unsupervised Domain Adaptation for Global Urban Extraction using Sentinel-1 and Sentinel-2 Data

  • swinstfm -> Remote Sensing Spatiotemporal Fusion using Swin Transformer

  • LoveCS -> Cross-sensor domain adaptation for high-spatial resolution urban land-cover mapping: from airborne to spaceborne imagery

  • comingdowntoearth -> Implementation of 'Coming Down to Earth: Satellite-to-Street View Synthesis for Geo-Localization'

  • Matching between acoustic and satellite images

  • MapRepair -> Deep Cadastre Maps Alignment and Temporal Inconsistencies Fix in Satellite Images

  • Compressive-Sensing-and-Deep-Learning-Framework -> Compressive Sensing is used as an initial guess to combine data from multiple sources, with LSTM used to refine the result

  • DeepSim -> DeepSIM: GPS Spoofing Detection on UAVs using Satellite Imagery Matching

  • MHF-net -> Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net

  • Remote_Sensing_Image_Fusion -> Semi-Supervised Remote Sensing Image Fusion Using Multi-Scale Conditional Generative Adversarial network with Siamese Structure

  • CNNs for Multi-Source Remote Sensing Data Fusion -> Single-stream CNN with Learnable Architecture for Multi-source Remote Sensing Data

  • Deep Generative Reflectance Fusion -> Achieving Landsat-like reflectance at any date by fusing Landsat and MODIS surface reflectance with deep generative models

  • IEEE_TGRS_MDL-RS -> More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification

  • SSRNET -> SSR-NET: Spatial-Spectral Reconstruction Network for Hyperspectral and Multispectral Image Fusion

  • cross-view-image-matching -> Bridging the Domain Gap for Ground-to-Aerial Image Matching

  • CoF-MSMG-PCNN -> Remote Sensing Image Fusion via Boundary Measured Dual-Channel PCNN in Multi-Scale Morphological Gradient Domain

  • robust_matching_network_on_remote_sensing_imagery_pytorch -> A Robust Matching Network for Gradually Estimating Geometric Transformation on Remote Sensing Imagery

  • edcstfn -> An Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion

  • ganstfm -> A Flexible Reference-Insensitive Spatiotemporal Fusion Model for Remote Sensing Images Using Conditional Generative Adversarial Network

  • CMAFF -> Cross-Modality Attentive Feature Fusion for Object Detection in Multispectral Remote Sensing Imagery

  • SOLC -> MCANet: A joint semantic segmentation framework of optical and SAR images for land use classification. Uses WHU-OPT-SAR-dataset

  • MFT -> Multimodal Fusion Transformer for Remote Sensing Image Classification

  • ISPRS_S2FL -> Multimodal Remote Sensing Benchmark Datasets for Land Cover Classification with A Shared and Specific Feature Learning Model

  • HSHT-Satellite-Imagery-Synthesis -> Improving Flood Maps by Increasing the Temporal Resolution of Satellites Using Hybrid Sensor Fusion

  • MDC -> Unsupervised Data Fusion With Deeper Perspective: A Novel Multisensor Deep Clustering Algorithm

  • FusAtNet -> FusAtNet: Dual Attention based SpectroSpatial Multimodal Fusion Network for Hyperspectral and LiDAR Classification

  • AMM-FuseNet -> Attention-Based Multi-Modal Image Fusion Network for Land Cover Mapping

  • MANet -> MANet: A Network Architecture for Remote Sensing Spatiotemporal Fusion Based on Multiscale and Attention Mechanisms

  • DCSA-Net -> Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images

  • deforestation-from-data-fusion -> Fusing Sentinel-1 and Sentinel-2 images for deforestation detection in the Brazilian Amazon under diverse cloud conditions

  • sct-fusion -> Transformer-based Multi-Modal Learning for Multi Label Remote Sensing Image Classification

  • RSI-MMSegmentation -> GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for Remote Sensing Data

  • dfc2022-baseline -> baseline solution to the 2022 IEEE GRSS Data Fusion Contest (DFC2022) using TorchGeo, PyTorch Lightning, and Segmentation Models PyTorch to train a U-Net with a ResNet-18 backbone and a combined Focal + Dice loss, performing semantic segmentation on the DFC2022 dataset

  • multiviewRS-models -> List of multi-view fusion learning models proposed for remote sensing (RS) multi-view data

Generative networks


Example generated images using a GAN.

Generative networks (e.g. GANs) aim to generate new, synthetic data that appears similar to real-world data. This generated data can be used for a wide range of purposes, including data augmentation, correcting class imbalance, and filling in missing or corrupted data. Generating synthetic data can improve the performance of remote sensing algorithms and models, leading to more accurate and reliable results. Image source
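
As a toy illustration, the sketch below shows a single GAN training step in PyTorch: the generator maps random latent vectors to image chips, while the discriminator is trained to separate real from generated chips. All architecture sizes are arbitrary and the random tensors stand in for real data.

```python
import torch
import torch.nn as nn

latent_dim = 100

# Toy generator: latent vector -> 3x64x64 image in [-1, 1]
G = nn.Sequential(
    nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
    nn.Unflatten(1, (128, 8, 8)),
    nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)

# Toy discriminator: image -> real/fake logit
D = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 3, 64, 64) * 2 - 1   # stand-in for a batch of real chips
fake = G(torch.randn(8, latent_dim))      # a batch of synthetic chips

# Discriminator step: push real towards 1, fake towards 0
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to fool the discriminator
g_loss = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```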

Autoencoders, dimensionality reduction, image embeddings & similarity search


Example of using an autoencoder to create a low dimensional representation of hyperspectral data.

Autoencoders are a type of neural network that compress input data into a lower dimensional form. This is achieved through a two-step process of encoding and decoding: the encoder compresses the data into a compact representation, and the decoder restores it back to its original form. The goal is to reduce the data's dimensionality, making it easier to store and process, while retaining the essential information. Dimensionality reduction more generally refers to the process of reducing the number of dimensions in a dataset, and can also be achieved through classical techniques such as principal component analysis (PCA) or singular value decomposition (SVD); autoencoders are a learned, nonlinear alternative. In computer vision, image embeddings are vector representations of images that capture their most important features. These embeddings can be used to perform similarity searches, where images are compared based on their features to find similar images, for applications such as image retrieval (searching based on criteria like color, texture, or shape) or identifying duplicate images in a dataset. Image source
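
A minimal PyTorch sketch of this encode/decode loop, assuming 224-band hyperspectral pixels compressed to an 8-dimensional embedding (both numbers are arbitrary):

```python
import torch
import torch.nn as nn

n_bands, n_latent = 224, 8

encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(), nn.Linear(64, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_bands))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

pixels = torch.rand(1024, n_bands)   # stand-in for real hyperspectral spectra

for _ in range(100):
    z = encoder(pixels)               # low dimensional embedding
    recon = decoder(z)                # reconstruction back to the original bands
    loss = loss_fn(recon, pixels)     # reconstruction error drives both networks
    opt.zero_grad(); loss.backward(); opt.step()

embeddings = encoder(pixels).detach()  # use for clustering, visualisation, etc.
```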

Anomaly detection

Anomaly detection refers to the process of identifying unusual patterns or outliers in satellite or aerial images that do not conform to expected norms. This is crucial in applications such as environmental monitoring, defense surveillance, and urban planning. Machine learning algorithms, particularly unsupervised learning methods, are used to analyze vast amounts of remote sensing data efficiently. These algorithms learn the typical patterns and variations in the data, allowing them to flag anomalies such as unexpected land cover changes, illegal deforestation, or unusual maritime activities. The detection of these anomalies can provide valuable insights for timely decision-making and intervention in various fields.
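
As a toy sketch of the idea, an Isolation Forest fitted on simple per-band statistics can flag unusual image chips; a real pipeline would typically replace the hand-made features with learned embeddings:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

chips = np.random.rand(500, 4, 64, 64)             # stand-in for (N, bands, H, W) chips
features = chips.reshape(500, 4, -1).mean(axis=2)  # mean of each band per chip

clf = IsolationForest(contamination=0.01, random_state=0).fit(features)
labels = clf.predict(features)                     # -1 = anomaly, 1 = normal
anomalous_idx = np.where(labels == -1)[0]          # chips to flag for inspection
```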

  • AgriSen-COG -> a Multicountry, Multitemporal Large-Scale Sentinel-2 Benchmark Dataset for Crop Mapping: includes an anomaly detection preprocessing step

Image retrieval


Illustration of the remote sensing image retrieval process.

Image retrieval is the task of retrieving images from a collection that are similar to a query image. Image retrieval plays a vital role in remote sensing by enabling the efficient and effective search for relevant images from large image archives, and by providing a way to quantify changes in the environment over time. Image source
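
A minimal sketch of embedding-based retrieval, using a pretrained torchvision ResNet as a generic feature extractor and cosine similarity for ranking (a remote-sensing-specific encoder would likely work better):

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()   # drop the classifier, keep 512-d features
resnet.eval()

archive = torch.rand(100, 3, 224, 224)   # stand-in for preprocessed archive images
query = torch.rand(1, 3, 224, 224)       # stand-in for the query image

with torch.no_grad():
    archive_emb = F.normalize(resnet(archive), dim=1)
    query_emb = F.normalize(resnet(query), dim=1)

scores = (archive_emb @ query_emb.T).squeeze(1)   # cosine similarity to the query
top5 = scores.topk(5).indices                     # indices of the most similar images
```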

Image Captioning


Example captioned image.

Image Captioning is the task of automatically generating a textual description of an image. In remote sensing, image captioning can be used to automatically generate captions for satellite or aerial images, which can be useful for a variety of purposes, such as image search and retrieval, data cataloging, and data dissemination. The generated captions can provide valuable information about the content of the images, including the location, the type of terrain or objects present, and the weather conditions, among others. This information can be used to quickly and easily understand the content of the images, without having to manually examine each image. Image source
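
As a hedged example, an off-the-shelf captioning model such as BLIP (via Hugging Face transformers) can be run directly on an aerial image; scene.png is a hypothetical file, and a model fine-tuned on remote sensing captions would give more domain-appropriate output:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("scene.png").convert("RGB")   # hypothetical aerial image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```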

Visual Question Answering

Visual Question Answering (VQA) is the task of automatically answering a natural language question about an image. In remote sensing, VQA enables users to interact with the images and retrieve information using natural language questions. For example, a user could ask a VQA system questions such as "What is the type of land cover in this area?", "What is the dominant crop in this region?" or "What is the size of the city in this image?". The system would then analyze the image and generate an answer based on its understanding of the image content.
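
In the same spirit as the captioning sketch above, a minimal example with the BLIP VQA checkpoint from Hugging Face transformers (scene.png and the question are placeholders):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("scene.png").convert("RGB")   # hypothetical satellite image
inputs = processor(images=image, text="What is the dominant land cover?", return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```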

  • VQA-easy2hard -> From Easy to Hard: Learning Language-guided Curriculum for Visual Question Answering on Remote Sensing Data

  • lit4rsvqa -> LiT-4-RSVQA: Lightweight Transformer-based Visual Question Answering in Remote Sensing

  • Change-Agent -> Towards Interactive Comprehensive Remote Sensing Change Interpretation and Analysis

Mixed data learning

Mixed data learning is the process of learning from datasets that may contain a mix of image, textual and numeric data. Mixed data learning can help improve the accuracy of models by allowing them to learn from multiple sources at once and use more sophisticated methods to identify patterns and correlations.
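
A minimal PyTorch sketch of one common pattern: a CNN branch encodes the image, an MLP branch encodes tabular metadata, and the two are fused by concatenation before a shared classification head (all sizes are arbitrary):

```python
import torch
import torch.nn as nn

class MixedDataModel(nn.Module):
    def __init__(self, n_tabular=4, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> 16-d image feature
        )
        self.mlp = nn.Sequential(nn.Linear(n_tabular, 16), nn.ReLU())
        self.head = nn.Linear(16 + 16, n_classes)       # fuse both branches

    def forward(self, image, tabular):
        fused = torch.cat([self.cnn(image), self.mlp(tabular)], dim=1)
        return self.head(fused)

model = MixedDataModel()
logits = model(torch.rand(8, 3, 64, 64), torch.rand(8, 4))  # image + metadata
```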

Few & zero shot learning

This is a class of techniques which attempt to make predictions for classes with few, one or even zero examples provided during training. In zero shot learning (ZSL) the model is assisted by the provision of auxiliary information which typically consists of descriptions/semantic attributes/word embeddings for both the seen and unseen classes at train time (ref). These approaches are particularly relevant to remote sensing, where there may be many examples of common classes, but few or even zero examples for other classes of interest.
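
A minimal zero-shot sketch using CLIP via Hugging Face transformers, where natural-language class descriptions play the role of the auxiliary semantic information; scene.png and the label prompts are placeholders:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["an aerial photo of a forest",
          "an aerial photo of a city",
          "an aerial photo of farmland"]
image = Image.open("scene.png").convert("RGB")   # hypothetical image chip

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=1)  # one score per prompt
print(labels[probs.argmax().item()])
```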

  • Aerial-SAM -> Zero-Shot Refinement of Buildings’ Segmentation Models using SAM

  • FSODM -> Few-shot Object Detection on Remote Sensing Images

  • Few-Shot Classification of Aerial Scene Images via Meta-Learning -> 2020 publication, a classification model that can quickly adapt to unseen categories using only a few labeled samples

  • Papers about Few-shot Learning / Meta-Learning on Remote Sensing

  • SPNet -> Siamese-Prototype Network for Few-Shot Remote Sensing Image Scene Classification

  • MDL4OW -> Few-Shot Hyperspectral Image Classification With Unknown Classes Using Multitask Deep Learning

  • P-CNN -> Prototype-CNN for Few-Shot Object Detection in Remote Sensing Images

  • CIR-FSD-2022 -> Context Information Refinement for Few-Shot Object Detection in Remote Sensing Images

  • IEEE_TNNLS_Gia-CFSL -> Graph Information Aggregation Cross-Domain Few-Shot Learning for Hyperspectral Image Classification

  • TIP_2022_CMFSL -> Few-shot Learning with Class-Covariance Metric for Hyperspectral Image Classification

  • sen12ms-human-few-shot-classifier -> Humans are poor few-shot classifiers for Sentinel-2 land cover

  • S3Net -> S3Net: Spectral–Spatial Siamese Network for Few-Shot Hyperspectral Image Classification

  • SiameseNet-for-few-shot-Hyperspectral-Classification -> 3DCSN: a SiameseNet for few-shot hyperspectral classification

  • MESSL -> Multiform Ensemble Self-Supervised Learning for Few-Shot Remote Sensing Scene Classification

  • SCCNet -> Self-Correlation and Cross-Correlation Learning for Few-Shot Remote Sensing Image Semantic Segmentation

  • OEM-Fewshot-Challenge -> OpenEarthMap Land Cover Mapping Few-Shot Challenge Generalized Few-shot Semantic Segmentation

  • meteor -> a small deep learning meta-model with a single output

  • SegLand -> Generalized Few-Shot Meets Remote Sensing: Discovering Novel Classes in Land Cover Mapping via Hybrid Semantic Segmentation Framework. 1st place in the OpenEarthMap Land Cover Mapping Few-Shot Challenge

Self-supervised, unsupervised & contrastive learning

Self-supervised, unsupervised & contrastive learning are all methods of machine learning that use unlabelled data to train algorithms. Self-supervised learning derives a supervisory signal from the data itself, for example by predicting masked or augmented versions of the input, while unsupervised learning uses only the structure of the data to identify patterns and similarities. Contrastive learning uses pairs of data points to learn representations, usually for downstream classification tasks. Note that self-supervised approaches are commonly used in the training of so-called Foundational models, since they enable learning from large quantities of unlabelled data, typically time series.
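
A minimal sketch of the SimCLR-style NT-Xent contrastive loss, assuming you already have embeddings of two augmented views of each image from some encoder:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy over a batch of view pairs."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d) unit embeddings
    sim = z @ z.T / temperature                   # pairwise similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))         # ignore self-similarity
    # for row i the positive is the other view of the same image
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# embeddings of two augmentations of the same 8 images (from any encoder)
loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```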

Weakly & semi-supervised learning

Weakly & semi-supervised learning are two methods of machine learning that use both labeled and unlabeled data for training. Weakly supervised learning uses weakly labeled data, which may be incomplete or inaccurate, while semi-supervised learning combines a small amount of labeled data with a large pool of unlabeled data, typically via pseudo-labelling or consistency regularization. Both are used in situations where labeled data is scarce and unlabeled data is abundant, and both can improve the accuracy of machine learning models by making use of additional data sources.
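
A minimal sketch of pseudo-labelling, one common semi-supervised recipe: confident predictions on unlabelled images are treated as extra training targets (the model, data and confidence threshold are placeholders):

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, x_lab, y_lab, x_unlab, threshold=0.9):
    sup_loss = F.cross_entropy(model(x_lab), y_lab)   # standard supervised term

    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > threshold            # only trust confident predictions

    unsup_loss = 0.0
    if keep.any():                         # train on pseudo-labels as if real
        unsup_loss = F.cross_entropy(model(x_unlab[keep]), pseudo[keep])
    return sup_loss + unsup_loss

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 5))
loss = semi_supervised_step(model,
                            torch.rand(4, 3, 32, 32), torch.randint(0, 5, (4,)),
                            torch.rand(16, 3, 32, 32))
```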

  • MARE -> self-supervised Multi-Attention REsu-net for semantic segmentation in remote sensing

  • SSGF-for-HRRS-scene-classification -> A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification

  • SFGAN -> Semantic-Fusion Gans for Semi-Supervised Satellite Image Classification

  • SSDAN -> Multi-Source Semi-Supervised Domain Adaptation Network for Remote Sensing Scene Classification

  • HR-S2DML -> High-Rankness Regularized Semi-Supervised Deep Metric Learning for Remote Sensing Imagery

  • Semantic Segmentation of Satellite Images Using Point Supervision

  • fcd -> Fixed-Point GAN for Cloud Detection. A weakly-supervised approach, training with only image-level labels

  • weak-segmentation -> Weakly supervised semantic segmentation for aerial images in pytorch

  • TNNLS_2022_X-GPN -> Semisupervised Cross-scale Graph Prototypical Network for Hyperspectral Image Classification

  • weakly_supervised -> Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery. Demonstrates that segmentation can be performed using small datasets comprised of pixel or image labels

  • wan -> Weakly-Supervised Domain Adaptation for Built-up Region Segmentation in Aerial and Satellite Imagery

  • sourcerer -> A Bayesian-inspired deep learning method for semi-supervised domain adaptation designed for land cover mapping from satellite image time series (SITS)

  • MSMatch -> Semi-Supervised Multispectral Scene Classification with Few Labels. Includes code to work with both the RGB and the multispectral (MS) versions of EuroSAT dataset and the UC Merced Land Use (UCM) dataset

  • Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning

  • Semi-supervised learning in satellite image classification -> experimenting with MixMatch and the EuroSAT data set

  • ScRoadExtractor -> Scribble-based Weakly Supervised Deep Learning for Road Surface Extraction from Remote Sensing Images

  • ICSS -> Weakly-supervised continual learning for class-incremental segmentation

  • es-CP -> Semi-Supervised Hyperspectral Image Classification Using a Probabilistic Pseudo-Label Generation Framework

  • Flood_Mapping_SSL -> Enhancement of Urban Floodwater Mapping From Aerial Imagery With Dense Shadows via Semisupervised Learning

  • MS4D-Net-Building-Damage-Assessment -> MS4D-Net: Multitask-Based Semi-Supervised Semantic Segmentation Framework with Perturbed Dual Mean Teachers for Building Damage Assessment from High-Resolution Remote Sensing Imagery

Active learning

Supervised deep learning techniques typically require a huge number of annotated/labelled examples to provide a training dataset. However, labelling at scale takes significant time, expertise and resources. Active learning techniques aim to reduce the total amount of annotation that needs to be performed by selecting the most useful images to label from a large pool of unlabelled images, thus reducing the time to generate useful training datasets. These processes may be referred to as Human-in-the-Loop Machine Learning.
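
A minimal sketch of uncertainty sampling, one simple active learning strategy: score the unlabelled pool by prediction entropy and send the most uncertain images to annotators (the model and pool are placeholders):

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 5))
pool = torch.rand(1000, 3, 32, 32)     # stand-in for the unlabelled image pool

with torch.no_grad():
    probs = F.softmax(model(pool), dim=1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=1)

to_label = entropy.topk(20).indices    # the 20 images most worth annotating
```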

Federated learning

Federated learning is an approach to distributed machine learning in which a central server coordinates the training of a shared model across many clients, while the data remains distributed among those devices or locations. Each client trains the model on its own local data, and the central server aggregates the model updates from all the clients and sends the global model parameters back to them. This protects the privacy of the data, since raw data never leaves the local device and only model parameters are shared with the central server. The technique can be used to train models on datasets that cannot be gathered in a single place, as well as to enable privacy-preserving applications.
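
A minimal sketch of FedAvg, the canonical federated aggregation scheme, with a toy linear model and random stand-in client data:

```python
import copy
import torch
import torch.nn.functional as F

global_model = torch.nn.Linear(10, 2)

def local_update(model, x, y, epochs=1):
    """One client's local training; only the resulting weights leave the device."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

# each tuple is one client's private data (never sent to the server)
clients = [(torch.rand(32, 10), torch.randint(0, 2, (32,))) for _ in range(3)]
updates = [local_update(global_model, x, y) for x, y in clients]

# server: average parameters across clients and redistribute
avg = {k: torch.stack([u[k] for u in updates]).mean(dim=0) for k in updates[0]}
global_model.load_state_dict(avg)
```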

Adversarial ML

Adversarial ML is concerned with attacks that use deliberately crafted inputs to fool models, with defenses against such attacks, and with efforts to detect falsified images & deepfakes.
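
A minimal sketch of the classic FGSM attack, which perturbs an image along the sign of the loss gradient so that a classifier mispredicts while the change stays small (model, image and label are placeholders):

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 5))
image = torch.rand(1, 3, 32, 32, requires_grad=True)
label = torch.tensor([2])

loss = F.cross_entropy(model(image), label)
loss.backward()                                       # gradient w.r.t. the pixels

epsilon = 0.01                                        # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```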

  • UAE-RS -> dataset that provides black-box adversarial samples in the remote sensing field

  • PSGAN -> Perturbation Seeking Generative Adversarial Networks: A Defense Framework for Remote Sensing Image Scene Classification

  • SACNet -> Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification

Image registration

Image registration is the process of registering one or more images onto another (typically well georeferenced) image. Traditionally this is performed manually by identifying control points (tie-points) in the images, for example using QGIS. This section lists approaches which mostly aim to automate this manual process. There is some overlap with the data fusion section but the distinction I make is that image registration is performed as a prerequisite to downstream processes which will use the registered data as an input.
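
Since several of the tools below build on phase correlation, here is a minimal numpy sketch that recovers the integer XY shift between two images; sub-pixel refinement and rotation/scale handling are left out:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (row, col) shift d such that b ~= np.roll(a, d, axis=(0, 1))."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross_power = np.conj(Fa) * Fb
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # peaks past the midpoint wrap around to negative shifts
    return [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]

a = np.random.rand(256, 256)
b = np.roll(a, shift=(12, -5), axis=(0, 1))           # b is a shifted copy of a
print(phase_correlation(a, b))                        # -> [12, -5]
```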

  • Wikipedia article on registration -> register for change detection or image stitching

  • Phase correlation is used to estimate the XY translation between two images with sub-pixel accuracy. Can be used for accurate registration of low resolution imagery onto high resolution imagery, or to register a sub-image on a full image -> Unlike many spatial-domain algorithms, the phase correlation method is resilient to noise, occlusions, and other defects. With additional pre-processing image rotation and scale changes can also be calculated.

  • How to Co-Register Temporal Stacks of Satellite Images

  • ImageRegistration -> Interview assignment for multimodal image registration using SIFT

  • imreg_dft -> Image registration using discrete Fourier transform. Given two images it can calculate the difference between scale, rotation and position of imaged features.

  • arosics -> Perform automatic subpixel co-registration of two satellite image datasets using phase-correlation, XY translations only.

  • SubpixelAlignment -> Implementation of tiff image alignment through phase correlation for pixel- and subpixel-bias

  • cnn-registration -> An image registration method using convolutional neural network features, written in Python 2 and TensorFlow 1.5

  • Siamese_ShiftNet -> NN predicting spatial coregistration shift of remote sensing imagery. Adapted from HighRes-net

  • ImageCoregistration -> Image registration with openCV using sift and RANSAC

  • mapalignment -> Aligning and Updating Cadaster Maps with Remote Sensing Images

  • CVPR21-Deep-Lucas-Kanade-Homography -> deep learning pipeline to accurately align challenging multimodality images. The method is based on traditional Lucas-Kanade algorithm with feature maps extracted by deep neural networks.

  • eolearn implements phase correlation, feature matching and ECC

  • Reprojecting the Perseverance landing footage onto satellite imagery

  • Kornia provides image registration

  • LoFTR -> Detector-Free Local Feature Matching with Transformers. Good performance matching satellite image pairs; try out the web demo on your data

  • image-to-db-registration -> This remote module implements an algorithm for automated vector database registration onto an image. Implemented in the orfeo-toolbox

  • MS_HLMO_registration -> Multi-scale Histogram of Local Main Orientation for Remote Sensing Image Registration, with paper

  • cnn-matching -> Deep learning algorithm for feature matching of cross modality remote sensing images

  • Imatch-P -> A demo using SuperGlue and SuperPoint to do the image matching task based PaddlePaddle

  • NBR-Net -> A Non-rigid Bi-directional Registration Network for Multi-temporal Remote Sensing Images

  • MU-Net -> A Multi-Scale Framework with Unsupervised Learning for Remote Sensing Image Registration

  • unsupervisedDeepHomographyRAL2018 -> Unsupervised Deep Homography applied to aerial data

  • registration_cnn_ntg -> A Multispectral Image Registration Method Based on Unsupervised Learning

  • remote-sensing-images-registration-dataset -> at 0.23m, 3.75m & 30m resolution

  • semantic-template-matching -> A deep learning semantic template matching framework for remote sensing image registration

  • GMN-Generative-Matching-Network -> Deep Generative Matching Network for Optical and SAR Image Registration

  • SOMatch -> A deep learning framework for matching of SAR and optical imagery

  • Interspectral image registration dataset -> including satellite and drone imagery

  • RISG-image-matching -> A rotation invariant SuperGlue image matching algorithm

  • DeepAerialMatching_pytorch -> A Two-Stream Symmetric Network with Bidirectional Ensemble for Aerial Image Matching

  • DPCN -> Deep Phase Correlation for End-to-End Heterogeneous Sensor Measurements Matching

  • FSRA -> A Transformer-Based Feature Segmentation and Region Alignment Method For UAV-View Geo-Localization

  • IHN -> Iterative Deep Homography Estimation

  • OSMNet -> Explore Better Network Framework for High-Resolution Optical and SAR Image Matching

  • L2_Siamese -> Registration of Multiresolution Remote Sensing Images Based on L2-Siamese Model

  • Multi-Step-Deformable-Registration -> Unsupervised Multi-Step Deformable Registration of Remote Sensing Imagery based on Deep Learning

Terrain mapping, Disparity Estimation, Lidar, DEMs & NeRF

Measure surface contours & locate 3D points in space from 2D images. NeRF stands for Neural Radiance Fields and is the term used in deep learning communities to describe a model that generates views of complex 3D scenes based on a partial set of 2D images.

Thermal Infrared

Thermal infrared remote sensing is a technique used to detect and measure thermal radiation emitted from the Earth’s surface. This technique can be used to measure the temperature of the ground and any objects on it and can detect the presence of different materials. Thermal infrared remote sensing is used to assess land cover, detect land-use changes, and monitor urban heat islands, as well as to measure the temperature of the ground during nighttime or in areas of limited visibility.

SAR

SAR (synthetic aperture radar) is used to detect and measure the properties of objects and surfaces on the Earth's surface. SAR can be used to detect changes in terrain, features, and objects over time, as well as to measure the size, shape, and composition of objects and surfaces. SAR can also be used to measure moisture levels in soil and vegetation, or to detect and monitor changes in land use.

NDVI - vegetation index

Normalized Difference Vegetation Index (NDVI) is an index used to measure the amount of healthy vegetation in a given area. It is calculated by taking the difference between the near-infrared (NIR) and red bands of a satellite image, and dividing by the sum of the two bands, giving values in the range -1 to 1. NDVI can be used to identify areas of healthy vegetation and to assess the health of vegetation in a given area. ndvi = np.true_divide((ir - r), (ir + r))
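
Expanding that one-liner into a hedged, runnable sketch; the band order and scaling depend on the sensor (e.g. Sentinel-2 uses B8 for NIR and B4 for red), and the 0.4 vegetation threshold is purely illustrative:

```python
import numpy as np

nir = np.random.rand(512, 512).astype(np.float32)   # stand-in for the NIR band
red = np.random.rand(512, 512).astype(np.float32)   # stand-in for the red band

# guard against division by zero over water, shadow or nodata pixels
denominator = nir + red
ndvi = np.where(denominator != 0, (nir - red) / denominator, 0.0)  # range [-1, 1]
vegetated = ndvi > 0.4   # illustrative, sensor-dependent threshold
```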

General image quality

Image quality describes the degree of accuracy with which an image can represent the original object. Image quality is typically measured by the amount of detail, sharpness, and contrast that an image contains. Factors that contribute to image quality include the resolution, format, and compression of the image.
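
A minimal sketch computing PSNR and SSIM, two widely used full-reference quality metrics (see the PerceptualSimilarity entry below for their limitations), with scikit-image:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256)                     # stand-in for a clean image
degraded = reference + 0.05 * np.random.randn(256, 256)  # simulated noisy version

print("PSNR:", peak_signal_noise_ratio(reference, degraded, data_range=1.0))
print("SSIM:", structural_similarity(reference, degraded, data_range=1.0))
```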

  • lvrnet -> Lightweight Image Restoration for Aerial Images under Low Visibility

  • jitter-compensation -> Remote Sensing Image Jitter Detection and Compensation Using CNN

  • DeblurGANv2 -> Deblurring (Orders-of-Magnitude) Faster and Better

  • image-quality-assessment -> CNN to predict the aesthetic and technical quality of images

  • DOTA-C -> evaluating the robustness of object detection models to 19 types of image quality degradation

  • piq -> a collection of measures and metrics for image quality assessment

  • FFA-Net -> Feature Fusion Attention Network for Single Image Dehazing

  • DeepCalib -> A Deep Learning Approach for Automatic Intrinsic Calibration of Wide Field-of-View Cameras

  • PerceptualSimilarity -> LPIPS is a perceptual metric which aims to overcome the limitations of traditional metrics such as PSNR & SSIM, to better represent the features the human eye picks up on

  • Optical-RemoteSensing-Image-Resolution -> Deep Memory Connected Neural Network for Optical Remote Sensing Image Restoration. Two applications: Gaussian image denoising and single image super-resolution

  • Hyperspectral-Deblurring-and-Destriping

  • HyDe -> Hyperspectral Denoising algorithm toolbox in Python

  • HLF-DIP -> Unsupervised Hyperspectral Denoising Based on Deep Image Prior and Least Favorable Distribution

  • RQUNetVAE -> Riesz-Quincunx-UNet Variational Auto-Encoder for Satellite Image Denoising

  • deep-hs-prior -> Deep Hyperspectral Prior: Denoising, Inpainting, Super-Resolution

  • iquaflow -> from Satellogic, an image quality framework that aims at providing a set of tools to assess image quality by using the performance of AI models trained on the images as a proxy.

Synthetic data

Training data can be hard to acquire, particularly for rare events such as change detection after disasters, or imagery of rare classes of objects. In these situations, generating synthetic training data might be the only option. This has become quite sophisticated, with 3D models being used in game engines such as Unreal.

Large vision & language models (LLMs & LVMs)

Foundational models

  • Awesome Remote Sensing Foundation Models

  • Clay Foundation Model -> an open source AI model and interface for Earth.

  • TerraTorch -> a Python toolkit for fine-tuning Geospatial Foundation Models from IBM, based on PyTorch Lightning and TorchGeo

  • EarthPT -> A time series foundation model for Earth Observation

  • SpectralGPT -> Spectral remote sensing foundation model, with finetuning on classification, segmentation, and change detection tasks

  • DOFA-pytorch -> Dynamic One-For-All (DOFA) multimodal foundation models for Earth vision reference implementation

  • Prithvi foundation model -> also see the Baseline Model for Segmentation

  • prithvi-pytorch -> makes Prithvi usable from Pytorch Lightning

  • geo-bench -> a General Earth Observation benchmark for evaluating the performances of large pre-trained models on geospatial data

  • USat -> A Unified Self-Supervised Encoder for Multi-Sensor Satellite Imagery

  • hydro-foundation-model -> A Foundation Model for Water in Satellite Imagery

  • RSBuilding -> Towards General Remote Sensing Image Building Extraction and Change Detection with Foundation Model

  • Text2Seg -> a pipeline that combines multiple Vision Foundation Models (SAM, CLIP, GroundingDINO) to perform semantic segmentation.

  • Remote-Sensing-RVSA -> Advancing Plain Vision Transformer Towards Remote Sensing Foundation Model

  • FoMo-Bench -> a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models

  • MTP -> Advancing Remote Sensing Foundation Model via Multi-Task Pretraining

  • DiffusionSat -> A Generative Foundation Model For Satellite Imagery