/satellite-image-deep-learning

Resources for deep learning with satellite & aerial imagery

Apache License 2.0

This page lists resources for performing deep learning on satellite imagery. To a lesser extent classical machine learning (e.g. random forests) is also discussed, as are classical image processing techniques. Note there is a huge volume of academic literature published on these topics, and this repository does not seek to index it all but rather to list approachable resources with published code that will benefit both the research and developer communities. If you find this work useful please give it a star and consider sponsoring it. You can also follow me on Twitter and LinkedIn, where I aim to post frequent updates on my new discoveries, and I have created a dedicated group on LinkedIn. I have also started a blog here and have published a post on the history of this repository called Dissecting the satellite-image-deep-learning repo. If you use this work in your research please cite it using the citation information on the right. Thanks!


Table of contents

Techniques

This section explores the different deep and machine learning (ML) techniques applied to common problems in satellite imagery analysis. Good background reading is Deep learning in remote sensing applications: A meta-analysis and review

Classification

The classic cats vs dogs image classification task, which in the remote sensing domain is used to assign a label to an image, e.g. this is an image of a forest. The more complex case is applying multiple labels to an image. This approach of image-level classification is not to be confused with pixel-level classification, which is called semantic segmentation. In general, aerial images cover large geographical areas that include multiple classes of land, so treating this as a classification problem is less common than using semantic segmentation. I recommend getting started with the EuroSAT dataset.
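
As an illustrative (not prescriptive) starting point, the sketch below fine-tunes a pretrained ResNet-18 from torchvision on an ImageFolder-style directory of RGB chips. The dataset path is a placeholder and the weights string assumes a recent torchvision release.

```python
# Minimal sketch: fine-tune a pretrained ResNet-18 for scene classification.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/eurosat/train", transform=transform)  # placeholder path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")            # requires torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```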

Segmentation

Segmentation will assign a class label to each pixel in an image. Segmentation is typically grouped into semantic, instance or panoptic segmentation. In semantic segmentation objects of the same class are assigned the same label, whilst in instance segmentation each object is assigned a unique label. Panoptic segmentation combines instance and semantic predictions. Read this beginner's guide to segmentation. Single-class models are often trained for road or building segmentation, with multi-class models for land use/crop type classification. Image annotation can take longer than for object detection since every pixel must be annotated. Note that many articles which refer to 'hyperspectral land classification' are actually describing semantic segmentation. Note that cloud detection can be addressed with semantic segmentation and has its own section, Cloud detection & removal.
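
For orientation, here is a deliberately tiny U-Net-style encoder-decoder in PyTorch that maps an RGB tile to a per-pixel class map. Real land-cover models are deeper and trained on tiled imagery; the channel and class counts below are arbitrary illustrative choices.

```python
# Minimal sketch of a U-Net-style encoder-decoder for semantic segmentation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)            # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                  # skip connection features
        e2 = self.enc2(self.pool(e1))      # bottleneck
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)               # (B, n_classes, H, W)

logits = TinyUNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 6, 256, 256])
```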

Segmentation - Land use & land cover

Segmentation - Vegetation, crops & crop boundaries

Segmentation - Water, coastlines & floods

Segmentation - Fire, smoke & burn areas

Segmentation - Landslides

Segmentation - Glaciers

  • HED-UNet -> a model for simultaneous semantic segmentation and edge detection, examples provided are glacier fronts and building footprints using the Inria Aerial Image Labeling dataset
  • glacier_mapping -> Mapping glaciers in the Hindu Kush Himalaya, Landsat 7 images, Shapefile labels of the glaciers, Unet with dropout
  • glacier-detect-ML -> a simple logistic regression model to identify a glacier in Landsat satellite imagery
  • GlacierSemanticSegmentation -> uses unet

Segmentation - Other environmental

Segmentation - Roads

Extracting roads is challenging due to the occlusions caused by other objects and the complex traffic environment

Segmentation - Buildings & rooftops

Segmentation - Solar panels

Segmentation - Electrical substations

The repos below resulted from the ICETCI 2021 competition on Machine Learning based feature extraction of Electrical Substations from Satellite Data using Open Source Tools

Instance segmentation

In instance segmentation, each individual 'instance' of a segmented area is given a unique label. For detection of very small objects this may be a good approach, but it can struggle separating individual objects that are closely spaced.

Panoptic segmentation

Object detection

Several different techniques can be used to count the number of objects in an image. The returned data can be an object count (regression), a bounding box around individual objects in an image (typically using Yolo or Faster R-CNN architectures), a pixel mask for each object (instance segmentation), key points for an object (such as wing tips, nose and tail of an aircraft), or simply a classification for a sliding tile over an image. A good introduction to the challenge of performing object detection on aerial imagery is given in this paper. In summary, images are large and objects may comprise only a few pixels, easily confused with random features in the background. For the same reason, object detection datasets are inherently imbalanced, since the area of background typically dominates over the area of the objects to be detected. In general object detection performs well on large objects, and gets increasingly difficult as the objects get smaller & more densely packed. Model accuracy falls off rapidly as image resolution degrades, so it is common for object detection to use very high resolution imagery, e.g. 30cm RGB. A particular characteristic of aerial images is that objects can be oriented in any direction, so using rotated bounding boxes which align with the object can be crucial for extracting metrics such as the length and width of an object. Note that newer models such as Yolov5 may not achieve the same performance as 'older' models like Faster RCNN or Retinanet since they are no longer pre-trained on datasets such as ImageNet, but on smaller datasets such as COCO.
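
Because the images are so large, detectors are usually run on overlapping chips cut from the full scene and the detections then shifted back to scene coordinates. The helper below is an illustrative tiling loop only; the chip size and overlap are arbitrary choices and no particular detection model is assumed.

```python
# Illustrative sketch: cut a large scene into overlapping chips for a detector.
import numpy as np

def iter_chips(image: np.ndarray, chip=1024, overlap=128):
    """Yield (x, y, chip_array) for overlapping windows over an HxWxC array."""
    h, w = image.shape[:2]
    step = chip - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            window = image[y:y + chip, x:x + chip]
            yield x, y, window  # offsets let detections be mapped back to scene coords

scene = np.zeros((3000, 4000, 3), dtype=np.uint8)  # stand-in for a real scene
for x_off, y_off, chip_img in iter_chips(scene):
    # a detector would be run on chip_img here; add (x_off, y_off) to each box
    pass
```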

Object counting

When the object count, but not its shape, is required, U-Net can be used to treat this as an image-to-image translation problem.
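
One common way this works in practice (used by density-based methods such as CSRNet in the list below) is to regress a per-pixel density map and sum it to obtain the count. A minimal inference-time sketch, assuming a hypothetical trained image-to-image model `density_model`:

```python
# Sketch: counting objects by summing a predicted density map.
# `density_model` is an assumed, already-trained image-to-image network (e.g. a U-Net).
import torch

def count_objects(density_model: torch.nn.Module, image: torch.Tensor) -> float:
    """image: (1, C, H, W) tensor; returns the estimated object count."""
    density_model.eval()
    with torch.no_grad():
        density = density_model(image)   # (1, 1, H, W) non-negative density map
    return density.sum().item()          # integral of the density = count
```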

  • centroid-unet -> Centroid-UNet is deep neural network model to detect centroids from satellite images, with paper
  • count-sea-lion -> uses keras & Count-ception network
  • cownter_strike -> counting cows, located with point-annotations, two models: CSRNet (a density-based method) & LCFCN (a detection-based method)
  • DO-U-Net -> an effective approach for when the size of an object needs to be known, as well as the number of objects in the image, initially created to segment and count Internally Displaced People (IDP) camps in Afghanistan
  • Cassava Crop Counting
  • Counting from Sky -> A Large-scale Dataset for Remote Sensing Object Counting and A Benchmark Method
  • PSGCNet -> code for 2022 paper: PSGCNet: A Pyramidal Scale and Global Context Guided Network for Dense Object Counting in Remote Sensing Images

Object detection with rotated bounding boxes

  • OBB: oriented bounding boxes are polygons representing rotated rectangles (see the conversion sketch after this list)
  • For datasets checkout DOTA & HRSC2016
  • mmrotate -> Rotated Object Detection Benchmark, with pretrained models and function for inferencing on very large images
  • OBBDetection -> an oriented object detection library, which is based on MMdetection
  • rotate-yolov3 -> Rotation object detection implemented with yolov3. Also see yolov3-polygon
  • DRBox -> for detection tasks where the objects are orientated arbitrarily, e.g. vehicles, ships and airplanes
  • s2anet -> Official code of the paper 'Align Deep Features for Oriented Object Detection'
  • CFC-Net -> Official implementation of "CFC-Net: A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images"
  • ReDet -> Official code of the paper "ReDet: A Rotation-equivariant Detector for Aerial Object Detection"
  • BBAVectors-Oriented-Object-Detection -> Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors
  • CSL_RetinaNet_Tensorflow -> Code for ECCV 2020 paper: Arbitrary-Oriented Object Detection with Circular Smooth Label
  • r3det-on-mmdetection -> R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object
  • R-DFPN_FPN_Tensorflow -> Rotation Dense Feature Pyramid Networks (Tensorflow)
  • R2CNN_Faster-RCNN_Tensorflow -> Rotational region detection based on Faster-RCNN
  • Rotated-RetinaNet -> implemented in pytorch, it supports the following datasets: DOTA, HRSC2016, ICDAR2013, ICDAR2015, UCAS-AOD, NWPU VHR-10, VOC2007
  • OBBDet_Swin -> The sixth place winning solution in 2021 Gaofen Challenge
  • CG-Net -> Learning Calibrated-Guidance for Object Detection in Aerial Images. With paper
  • OrientedRepPoints_DOTA -> Oriented RepPoints + Swin Transformer/ReResNet
  • yolov5_obb -> yolov5 + Oriented Object Detection
  • How to Train YOLOv5 OBB -> YOLOv5 OBB tutorial and YOLOv5 OBB notebook
  • OHDet_Tensorflow -> can be applied to rotation detection and object heading detection
  • Seodore -> framework maintaining recent updates of mmdetection
  • Rotation-RetinaNet-PyTorch -> oriented detector Rotation-RetinaNet implementation on Optical and SAR ship dataset
  • AIDet -> an open source object detection in aerial image toolbox based on MMDetection
  • rotation-yolov5 -> rotation detection based on yolov5
  • ShipDetection -> Ship Detection in HR Optical Remote Sensing Images via Rotated Bounding Box, based on Faster R-CNN and ORN, uses caffe
  • SLRDet -> project based on mmdetection to reimplement RRPN and use the model Faster R-CNN OBB
  • AxisLearning -> code for 2020 paper: Axis Learning for Orientated Objects Detection in Aerial Images
  • Detection_and_Recognition_in_Remote_Sensing_Image -> This work uses PaNet to realize Detection and Recognition in Remote Sensing Image by MXNet
  • DrBox-v2-tensorflow -> tensorflow implementation of DrBox-v2 which is an improved detector with rotatable boxes for target detection in remote sensing images
  • Rotation-EfficientDet-D0 -> A PyTorch Implementation Rotation Detector based EfficientDet Detector, applied to custom rotation vehicle datasets
  • DODet -> Dual alignment for oriented object detection, uses DOTA dataset. With paper
  • GF-CSL -> code for 2022 paper: Gaussian Focal Loss: Learning Distribution Polarized Angle Prediction for Rotated Object Detection in Aerial Images
  • simplified_rbox_cnn -> code for 2018 paper: RBox-CNN: rotated bounding box based CNN for ship detection in remote sensing image. Uses Tensorflow object detection API
  • Polar-Encodings -> code for 2021 paper: Learning Polar Encodings for Arbitrary-Oriented Ship Detection in SAR Images
  • R-CenterNet -> detector for rotated-object based on CenterNet
  • piou -> Orientated Object Detection; IoU Loss, applied to DOTA dataset
  • DAFNe -> code for 2021 paper: DAFNe: A One-Stage Anchor-Free Approach for Oriented Object Detection
  • AProNet -> code for 2021 paper: AProNet: Detecting objects with precise orientation from aerial images. Applied to datasets DOTA and HRSC2016
  • UCAS-AOD-benchmark -> A benchmark of UCAS-AOD dataset
  • RotateObjectDetection -> based on Ultralytics/yolov5, with adjustments to enable rotate prediction boxes. Also see PolygonObjectDetection
  • AD-Toolbox -> Aerial Detection Toolbox based on MMDetection and MMRotate, with support for more datasets
  • GGHL -> code for 2022 paper: A General Gaussian Heatmap Label Assignment for Arbitrary-Oriented Object Detection
  • NPMMR-Det -> code for 2021 paper: A Novel Nonlocal-Aware Pyramid and Multiscale Multitask Refinement Detector for Object Detection in Remote Sensing Images
  • AOPG -> code for 2022 paper: Anchor-Free Oriented Proposal Generator for Object Detection
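
As referenced at the top of this list, rotated boxes are commonly parameterised as (cx, cy, w, h, angle); converting that parameterisation to the four polygon corners is a small geometric step that the libraries above handle internally. An illustrative numpy conversion, assuming the angle is in radians and measured anti-clockwise:

```python
# Sketch: convert an oriented box (cx, cy, w, h, angle) to its 4 corner points.
import numpy as np

def obb_to_corners(cx, cy, w, h, angle_rad):
    """Return a (4, 2) array of corner coordinates; angle anti-clockwise in radians."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s], [s, c]])
    half_extents = np.array([
        [-w / 2, -h / 2], [w / 2, -h / 2],
        [w / 2,  h / 2], [-w / 2,  h / 2],
    ])
    return half_extents @ rotation.T + np.array([cx, cy])

print(obb_to_corners(100, 50, 40, 20, np.pi / 6))
```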

Object detection enhanced by super resolution

Salient object detection

Detecting the most noticeable or important object in a scene

  • ACCoNet -> code for 2022 paper: Adjacent Context Coordination Network for Salient Object Detection in Optical Remote Sensing Images
  • MCCNet -> Multi-Content Complementation Network for Salient Object Detection in Optical Remote Sensing Images
  • CorrNet -> Lightweight Salient Object Detection in Optical Remote Sensing Images via Feature Correlation. With paper
  • Reading list for deep learning based Salient Object Detection in Optical Remote Sensing Images
  • ORSSD-dataset -> salient object detection dataset
  • EORSSD-dataset -> Extended Optical Remote Sensing Saliency Detection (EORSSD) Dataset
  • DAFNet_TIP20 -> code for 2020 paper: Dense Attention Fluid Network for Salient Object Detection in Optical Remote Sensing Images
  • EMFINet -> code for 2021 paper: Edge-Aware Multiscale Feature Integration Network for Salient Object Detection in Optical Remote Sensing Images
  • ERPNet -> code for 2022 paper: Edge-guided Recurrent Positioning Network for Salient Object Detection in Optical Remote Sensing Images
  • FSMINet -> code for 2022 paper: Fully Squeezed Multi-Scale Inference Network for Fast and Accurate Saliency Detection in Optical Remote Sensing Images
  • AGNet -> code for 2022 paper: AGNet: Attention Guided Network for Salient Object Detection in Optical Remote Sensing Images
  • MSCNet -> code for 2022 paper: A lightweight multi-scale context network for salient object detection in optical remote sensing images
  • GPnet -> code for 2022 paper: Global Perception Network for Salient Object Detection in Remote Sensing Images

Object detection - buildings, rooftops & solar panels

Object detection - ships & boats

Object detection - cars, vehicles & trains

Object detection - planes & aircraft

Object detection - infrastructure & utilities

Object detection - animals

A variety of techniques can be used to count animals, including object detection and instance segmentation. For convenience they are all listed here:

Object tracking in videos

Counting trees

Oil storage tank detection & oil spills

Oil is stored in tanks at many points between extraction and sale, and the volume of oil in storage is an important economic indicator.

Cloud detection & removal

Generally treated as a semantic segmentation problem or custom features created using band math
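
As a trivial illustration of the band-math route, a crude cloud mask can be made by thresholding brightness and spectral flatness in the visible bands. The thresholds below are arbitrary illustrative values; operational cloud masks use far more sophisticated tests.

```python
# Crude band-math sketch: flag bright, spectrally flat ("white") pixels as cloud.
import numpy as np

def naive_cloud_mask(red, green, blue, brightness_thresh=0.6, flatness_thresh=0.1):
    """Bands are float arrays scaled to [0, 1]; returns a boolean mask."""
    rgb = np.stack([red, green, blue], axis=0)
    brightness = rgb.mean(axis=0)                 # clouds are bright...
    flatness = rgb.max(axis=0) - rgb.min(axis=0)  # ...and roughly equal in R, G and B
    return (brightness > brightness_thresh) & (flatness < flatness_thresh)
```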

Change detection

Generally speaking, change detection methods are applied to a pair of images to generate a mask of change, e.g. of buildings damaged in a disaster. Note that clouds & shadows change often too!
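
The simplest baseline is to difference co-registered images and threshold the result; the deep learning methods below can be seen as learning a more robust version of this comparison. A minimal numpy sketch of that baseline, assuming the pair is already co-registered and radiometrically comparable:

```python
# Baseline sketch: change mask from a thresholded absolute difference.
import numpy as np

def change_mask(img_t1: np.ndarray, img_t2: np.ndarray, k: float = 2.0) -> np.ndarray:
    """img_t1/img_t2: HxWxC arrays of the same scene; returns a boolean HxW change mask."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float)).mean(axis=-1)
    threshold = diff.mean() + k * diff.std()   # simple adaptive threshold
    return diff > threshold
```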

  • awesome-remote-sensing-change-detection lists many datasets and publications
  • Change-Detection-Review -> A review of change detection methods, including code and open data sets for deep learning
  • Unstructured-change-detection-using-CNN
  • Siamese neural network to detect changes in aerial images -> uses Keras and VGG16 architecture
  • Change Detection in 3D: Generating Digital Elevation Models from Dove Imagery
  • QGIS plugin for applying change detection algorithms on high resolution satellite imagery
  • LamboiseNet -> Master thesis about change detection in satellite imagery using Deep Learning
  • Fully Convolutional Siamese Networks for Change Detection -> with paper
  • Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks -> with paper, used the Onera Satellite Change Detection (OSCD) dataset
  • STANet -> official implementation of the spatial-temporal attention neural network (STANet) for remote sensing image change detection
  • BIT_CD -> Official Pytorch Implementation of Remote Sensing Image Change Detection with Transformers
  • IAug_CDNet -> Official Pytorch Implementation of Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images
  • dpm-rnn-public -> Code implementing a damage mapping method combining satellite data with deep learning
  • SenseEarth2020-ChangeDetection -> 1st place solution to the Satellite Image Change Detection Challenge hosted by SenseTime; predictions of five HRNet-based segmentation models are ensembled, serving as pseudo labels of unchanged areas
  • KPCAMNet -> Python implementation of the paper Unsupervised Change Detection in Multi-temporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network
  • CDLab -> benchmarking deep learning-based change detection methods.
  • Siam-NestedUNet -> The pytorch implementation for "SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images"
  • SUNet-change_detection -> Implementation of paper SUNet: Change Detection for Heterogeneous Remote Sensing Images from Satellite and UAV Using a Dual-Channel Fully Convolution Network
  • Self-supervised Change Detection in Multi-view Remote Sensing Images
  • MFPNet -> Remote Sensing Change Detection Based on Multidirectional Adaptive Feature Fusion and Perceptual Similarity
  • GitHub for the DIUx xView Detection Challenge -> The xView2 Challenge focuses on automating the process of assessing building damage after a natural disaster
  • DASNet -> Dual attentive fully convolutional siamese networks for change detection of high-resolution satellite images
  • Self-Attention for Raw Optical Satellite Time Series Classification
  • planet-movement -> Find and process Planet image pairs to highlight object movement
  • UNet-based-Unsupervised-Change-Detection -> A convolutional neural network (CNN) and semantic segmentation is implemented to detect the changes between the images, as well as classify the changes into the correct semantic class, with arxiv paper
  • temporal-cluster-matching -> detecting change in structure footprints from time series of remotely sensed imagery
  • autoRIFT -> fast and intelligent algorithm for finding the pixel displacement between two images
  • DSAMNet -> Code for “A Deeply Supervised Attention Metric-Based Network and an Open Aerial Image Dataset for Remote Sensing Change Detection”. The main types of changes in the dataset include: (a) newly built urban buildings; (b) suburban dilation; (c) groundwork before construction; (d) change of vegetation; (e) road expansion; (f) sea construction.
  • SRCDNet -> The pytorch implementation for "Super-resolution-based Change Detection Network with Stacked Attention Module for Images with Different Resolutions ". SRCDNet is designed to learn and predict change maps from bi-temporal images with different resolutions
  • Land-Cover-Analysis -> Land Cover Change Detection using Satellite Image Segmentation
  • A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sening images
  • Satellite-Image-Alignment-Differencing-and-Segmentation -> thesis on change detection
  • Change Detection in Multi-temporal Satellite Images -> uses Principal Component Analysis (PCA) and K-means clustering
  • Unsupervised Change Detection Algorithm using PCA and K-Means Clustering -> in Matlab but has paper
  • ChangeFormer -> A Transformer-Based Siamese Network for Change Detection. Uses transformer architecture to address the limitations of CNN in handling multi-scale long-range details. Demonstrates that ChangeFormer captures much finer details compared to the other SOTA methods, achieving better performance on benchmark datasets
  • Heterogeneous_CD -> Heterogeneous Change Detection in Remote Sensing Images. Accompanies Code-Aligned Autoencoders for Unsupervised Change Detection in Multimodal Remote Sensing Images
  • ChangeDetectionProject -> Trying out Active Learning in with deep CNNs for Change detection on remote sensing data
  • siamese-change-detection -> Targeted synthesis of multi-temporal remote sensing images for change detection using siamese neural networks
  • Bi-SRNet -> code for 2022 paper: Bi-Temporal Semantic Reasoning for the Semantic Change Detection in HR Remote Sensing Images
  • SiROC -> Implementation of the paper Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images. Applied to Sentinel-2 and high-resolution Planetscope imagery on four datasets
  • DSMSCN -> Tensorflow implementation for Change Detection in Multi-temporal VHR Images Based on Deep Siamese Multi-scale Convolutional Neural Networks
  • RaVAEn -> a lightweight, unsupervised approach for change detection in satellite data based on Variational Auto-Encoders (VAEs) with the specific purpose of on-board deployment. It flags changed areas to prioritise for downlink, shortening the response time
  • SemiCD -> Code for paper: Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images. Achieves the performance of supervised CD even with access to as little as 10% of the annotated training data
  • FCCDN_pytorch -> code for paper: FCCDN: Feature Constraint Network for VHR Image Change Detection. Uses the LEVIR-CD building change detection dataset
  • INLPG_Python -> code for paper: Structure Consistency based Graph for Unsupervised Change Detection with Homogeneous and Heterogeneous Remote Sensing Images
  • NSPG_Python -> code for paper: Nonlocal patch similarity based heterogeneous remote sensing change detection
  • LGPNet-BCD -> code for 2021 paper: Building Change Detection for VHR Remote Sensing Images via Local-Global Pyramid Network and Cross-Task Transfer Learning Strategy
  • DS_UNet -> code for 2021 paper: Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net, uses Onera Satellite Change Detection dataset
  • SiameseSSL -> code for 2022 paper: Urban change detection with a Dual-Task Siamese network and semi-supervised learning. Uses SpaceNet 7 dataset
  • CD-SOTA-methods -> Remote sensing change detection: State-of-the-art methods and available datasets
  • multimodalCD_ISPRS21 -> code for 2021 paper: Fusing Multi-modal Data for Supervised Change Detection
  • Unsupervised-CD-in-SITS-using-DL-and-Graphs -> code for article: Unsupervised Change Detection Analysis in Satellite Image Time Series using Deep Learning Combined with Graph-Based Approaches
  • LSNet -> code for 2022 paper: Extremely Light-Weight Siamese Network For Change Detection in Remote Sensing Image
  • Change-Detection-in-Remote-Sensing-Images -> using PCA & K-means
  • End-to-end-CD-for-VHR-satellite-image -> code for 2019 paper: End-to-End Change Detection for High Resolution Satellite Images Using Improved UNet++
  • Semantic-Change-Detection -> code for 2021 paper: SCDNET: A novel convolutional network for semantic change detection in high resolution optical remote sensing imagery
  • ERCNN-DRS_urban_change_monitoring -> code for 2021 paper: Neural Network-Based Urban Change Monitoring with Deep-Temporal Multispectral and SAR Remote Sensing Data
  • EGRCNN -> code for 2021 paper: Edge-guided Recurrent Convolutional Neural Network for Multi-temporal Remote Sensing Image Building Change Detection
  • Unsupervised-Remote-Sensing-Change-Detection -> code for 2021 paper: An Unsupervised Remote Sensing Change Detection Method Based on Multiscale Graph Convolutional Network and Metric Learning
  • CropLand-CD -> code for 2022 paper: A CNN-transformer Network with Multi-scale Context Aggregation for Fine-grained Cropland Change Detection
  • contrastive-surface-image-pretraining -> code for 2022 paper: Supervising Remote Sensing Change Detection Models with 3D Surface Semantics
  • dcvaVHROptical -> Deep Change Vector Analysis (DCVA) change detection. Code for 2019 paper: Unsupervised Deep Change Vector Analysis for Multiple-Change Detection in VHR Images
  • hyperdimensionalCD -> code for 2021 paper: Change Detection in Hyperdimensional Images Using Untrained Models
  • DSFANet -> code for 2018 paper: Unsupervised Deep Slow Feature Analysis for Change Detection in Multi-Temporal Remote Sensing Images
  • FCD-GAN-pytorch -> Fully Convolutional Change Detection Framework with Generative Adversarial Network (FCD-GAN) is a framework for change detection in multi-temporal remote sensing images
  • DARNet-CD -> code for 2022 paper: A Densely Attentive Refinement Network for Change Detection Based on Very-High-Resolution Bitemporal Remote Sensing Images
  • xView2_Vulcan -> Damage assessment using pre and post orthoimagery. Modified + productionized model based off the first-place model from the xView2 challenge.
  • ESCNet -> code for 2021 paper: An End-to-End Superpixel-Enhanced Change Detection Network for Very-High-Resolution Remote Sensing Images
  • ForestCoverChange -> Detecting and Predicting Forest Cover Change in Pakistani Areas Using Remote Sensing Imagery
  • deforestation-detection -> code for 2020 paper: DEEP LEARNING FOR HIGH-FREQUENCY CHANGE DETECTION IN UKRAINIAN FOREST ECOSYSTEM WITH SENTINEL-2
  • forest_change_detection -> forest change segmentation with time-dependent models, including Siamese, UNet-LSTM, UNet-diff, UNet3D models. Code for 2021 paper: Deep Learning for Regular Change Detection in Ukrainian Forest Ecosystem With Sentinel-2
  • SentinelClearcutDetection -> Scripts for deforestation detection on the Sentinel-2 Level-A images
  • clearcut_detection -> research & web-service for clearcut detection
  • CDRL -> code for 2022 paper: Unsupervised Change Detection Based on Image Reconstruction Loss
  • ddpm-cd -> code for 2022 paper: Remote Sensing Change Detection (Segmentation) using Denoising Diffusion Probabilistic Models
  • Remote-sensing-time-series-change-detection -> code for 2022 paper: Graph-based block-level urban change detection using Sentinel-2 time series
  • austin-ml-change-detection-demo -> A change detection demo for the Austin area using a pre-trained PyTorch model scaled with Dask on Planet imagery
  • dfc2021-msd-baseline -> A baseline for the "Multitemporal Semantic Change Detection" track of the 2021 IEEE GRSS Data Fusion Competition
  • CorrFusionNet -> code for 2020 paper: Multi-Temporal Scene Classification and Scene Change Detection with Correlation based Fusion
  • ChangeDetectionPCAKmeans -> MATLAB implementation for Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering.
  • IRCNN -> code for 2022 paper: IRCNN: An Irregular-Time-Distanced Recurrent Convolutional Neural Network for Change Detection in Satellite Time Series
  • UTRNet -> An Unsupervised Time-Distance-Guided Convolutional Recurrent Network for Change Detection in Irregularly Collected Images

Time series

More general than change detection, time series observations can be used for applications including improving the accuracy of crop classification, or predicting future patterns & events. Crop yield is a very typical application and has its own section below
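
For intuition, per-pixel crop classification from a time series of band values can be framed as 1D sequence classification. The sketch below is a minimal temporal CNN in that spirit (band count, sequence length and class count are arbitrary placeholders), loosely analogous to the temporalCNN entry in the list below.

```python
# Minimal sketch: 1D temporal CNN classifying a per-pixel time series of band values.
import torch
import torch.nn as nn

class TemporalCNN(nn.Module):
    def __init__(self, n_bands=10, n_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_bands, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the time dimension
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):          # x: (batch, n_bands, n_timesteps)
        return self.head(self.net(x).squeeze(-1))

logits = TemporalCNN()(torch.randn(4, 10, 30))  # 4 pixels, 10 bands, 30 dates
print(logits.shape)  # torch.Size([4, 8])
```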

  • CropDetectionDL -> using GRU-net, First place solution for Crop Detection from Satellite Imagery competition organized by CV4A workshop at ICLR 2020
  • LANDSAT Time Series Analysis for Multi-temporal Land Cover Classification using Random Forest
  • temporalCNN -> Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series
  • pytorch-psetae -> PyTorch implementation of the model presented in Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention
  • satflow -> optical flow models for predicting future satellite images from current and past ones
  • esa-superresolution-forecasting -> Forecasting air pollution using ESA Sentinel-5p data, and an encoder-decoder convolutional LSTM neural network architecture, implemented in Pytorch
  • Radiant-Earth-Spot-the-Crop-Challenge -> The main objective of this challenge was to use time-series of Sentinel-2 multi-spectral data to classify crops in the Western Cape of South Africa. The challenge was to build a machine learning model to predict crop type classes for the test dataset
  • lightweight-temporal-attention-pytorch -> A PyTorch implementation of the Light Temporal Attention Encoder (L-TAE) for satellite image time series classification
  • Crop-Classification -> crop classification using multi temporal satellite images
  • dtwSat -> Time-Weighted Dynamic Time Warping for satellite image time series analysis
  • DeepCropMapping -> A multi-temporal deep learning approach with improved spatial generalizability for dynamic corn and soybean mapping, uses LSTM
  • CropMappingInterpretation -> An interpretation pipeline towards understanding multi-temporal deep learning approaches for crop mapping
  • MTLCC -> code for paper: Multitemporal Land Cover Classification Network. A recurrent neural network approach to encode multi-temporal data for land cover classification
  • timematch -> code for 2022 paper: A method to perform unsupervised cross-region adaptation of crop classifiers trained with satellite image time series. We also introduce an open-access dataset for cross-region adaptation with SITS from four different regions in Europe
  • PWWB -> Code for the 2021 paper: Real-Time Spatiotemporal Air Pollution Prediction with Deep Convolutional LSTM through Satellite Image Analysis
  • Classification of Crop Fields through Satellite Image Time Series -> using a pytorch-psetae & Sentinel-2 data
  • spaceweather -> predicting geomagnetic storms from satellite measurements of the solar wind and solar corona, uses LSTMs
  • Forest_wildfire_spreading_convLSTM -> Modeling of the spreading of forest wildfire using a neural network with ConvLSTM cells. Prediction 3-days forward
  • ConvTimeLSTM -> Extension of ConvLSTM and Time-LSTM for irregularly spaced images, appropriate for Remote Sensing
  • dl-time-series -> Deep Learning algorithms applied to characterization of Remote Sensing time-series
  • tpe -> code for 2022 paper: Generalized Classification of Satellite Image Time Series With Thermal Positional Encoding
  • wildfire_forecasting -> code for 2021 paper: Deep Learning Methods for Daily Wildfire Danger Forecasting. Uses ConvLSTM

Crop yield

Wealth and economic activity

The goal is to predict economic activity from satellite imagery rather than conducting labour intensive ground surveys

Disaster response

Also checkout the sections on change detection and water/fire/building segmentation

Weather phenomena

  • EddyData -> code for paper: A Deep Framework for Eddy Detection and Tracking from Satellite Sea Surface Height Data
  • python-windspeed -> Predicting windspeed of hurricanes from satellite images, uses CNN regression in keras
  • hurricane-wind-speed-cnn -> Predicting windspeed of hurricanes from satellite images, uses CNN regression in keras

Super-resolution

Super-resolution attempts to enhance the resolution of an imaging system, and can be applied as a pre-processing step to improve the detection of small objects or boundaries. Its use is controversial since it can introduce artefacts at the same rate as real features. These techniques are generally grouped into single image super resolution (SISR) or multi image super resolution (MISR)
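
To make the SISR case concrete, the sketch below follows the classic SRCNN pattern of bicubic upsampling followed by a small refinement CNN. It is a toy illustration rather than a reproduction of any repository listed here.

```python
# Toy SISR sketch in the spirit of SRCNN: bicubic upsample + 3-layer refinement CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, lr):
        upsampled = F.interpolate(lr, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return upsampled + self.body(upsampled)   # predict a residual correction

sr = TinySRCNN()(torch.randn(1, 3, 64, 64))
print(sr.shape)  # torch.Size([1, 3, 128, 128])
```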

Single image super-resolution (SISR)

Multi image super-resolution (MISR)

Note that nearly all the MISR publications resulted from the PROBA-V Super Resolution competition

  • deepsum -> Deep neural network for Super-resolution of Unregistered Multitemporal images (ESA PROBA-V challenge)
  • 3DWDSRNet -> code to reproduce Satellite Image Multi-Frame Super Resolution (MISR) Using 3D Wide-Activation Neural Networks
  • RAMS -> Official TensorFlow code for paper Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks
  • TR-MISR -> Transformer-based MISR framework for the PROBA-V super-resolution challenge. With paper
  • HighRes-net -> Pytorch implementation of HighRes-net, a neural network for multi-frame super-resolution, trained and tested on the European Space Agency’s Kelvin competition
  • ProbaVref -> Repurposing the Proba-V challenge for reference-aware super resolution
  • The missing ingredient in deep multi-temporal satellite image super-resolution -> Permutation invariance harnesses the power of ensembles in a single model, with repo piunet
  • MSTT-STVSR -> Space-time Super-resolution for Satellite Video: A Joint Framework Based on Multi-Scale Spatial-Temporal Transformer, JAG, 2022
  • Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites
  • DDRN -> Deep Distillation Recursive Network for Video Satellite Imagery Super-Resolution
  • worldstrat -> SISR and MISR implementations of SRCNN

Pansharpening

Image fusion of low res multispectral with high res pan band.
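
As a concrete example of the simple classical approaches mentioned in the list below, a Brovey-style baseline scales each resampled multispectral band by the ratio of the panchromatic band to a multispectral intensity estimate. This is a classical baseline, not a deep learning method.

```python
# Classical baseline sketch: Brovey-style pansharpening with numpy.
# Assumes the multispectral bands have already been resampled to the pan grid.
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """ms: (H, W, 3) float RGB resampled to pan resolution; pan: (H, W) float."""
    intensity = ms.mean(axis=-1)            # simple intensity estimate
    ratio = pan / (intensity + eps)         # per-pixel injection gain
    return ms * ratio[..., np.newaxis]      # scale each band by the gain
```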

  • Several algorithms described in the ArcGIS docs, with the simplest being taking the mean of the pan and RGB pixel value.
  • For an intro to classical methods see this notebook and this kaggle kernel
  • rio-pansharpen -> pansharpening Landsat scenes
  • Simple-Pansharpening-Algorithms
  • Working-For-Pansharpening -> long list of pansharpening methods and update of Awesome-Pansharpening
  • PSGAN -> A Generative Adversarial Network for Remote Sensing Image Pan-sharpening, arxiv paper
  • Pansharpening-by-Convolutional-Neural-Network
  • PBR_filter -> {P}ansharpening by {B}ackground {R}emoval algorithm for sharpening RGB images
  • py_pansharpening -> multiple algorithms implemented in python
  • Deep-Learning-PanSharpening -> deep-learning based pan-sharpening code package; reimplementations include PNN, MSDCNN, PanNet, TFNet, SRPPNN, and the authors' proposed DIPNet
  • HyperTransformer -> A Textural and Spectral Feature Fusion Transformer for Pansharpening
  • DIP-HyperKite -> Hyperspectral Pansharpening Based on Improved Deep Image Prior and Residual Reconstruction
  • D2TNet -> code for 2022 paper: A ConvLSTM Network with Dual-direction Transfer for Pan-sharpening
  • PanColorGAN-VHR-Satellite-Images -> code for 2020 paper: Rethinking CNN-Based Pansharpening: Guided Colorization of Panchromatic Images via GANs
  • MTL_PAN_SEG -> code for 2019 paper: Multi-task deep learning for satellite image pansharpening and segmentation
  • Z-PNN -> code for 2022 paper: Pansharpening by convolutional neural networks in the full resolution framework
  • GTP-PNet -> code for 2021 paper: GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening
  • UDL -> code for 2021 paper: Dynamic Cross Feature Fusion for Remote Sensing Pansharpening
  • PSData -> A Large-Scale General Pan-sharpening DataSet, which contains PSData3 (QB, GF-2, WV-3) and PSData4 (QB, GF-1, GF-2, WV-2).
  • AFPN -> Adaptive Detail Injection-Based Feature Pyramid Network For Pan-sharpening
  • pan-sharpening -> multiple methods demonstrated for multispectral and panchromatic images
  • PSGan-Family -> code for 2020 paper: PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening
  • PanNet-Landsat -> code for 2017 paper: A Deep Network Architecture for Pan-Sharpening
  • DLPan-Toolbox -> code for 2022 paper: Machine Learning in Pansharpening: A Benchmark, from Shallow to Deep Networks
  • LPPN -> code for 2021 paper: Laplacian pyramid networks: A new approach for multispectral pansharpening
  • S2_SSC_CNN -> code for 2020 paper: Zero-shot Sentinel-2 Sharpening Using A Symmetric Skipped Connection Convolutional Neural Network
  • S2S_UCNN -> code for 2021 paper: Sentinel 2 sharpening using a single unsupervised convolutional neural network with MTF-Based degradation model
  • SSE-Net -> code for 2022 paper: Spatial and Spectral Extraction Network With Adaptive Feature Fusion for Pansharpening
  • UCGAN -> code for 2022 paper: Unsupervised Cycle-consistent Generative Adversarial Networks for Pan-sharpening
  • GCPNet -> code for 2022 paper: When Pansharpening Meets Graph Convolution Network and Knowledge Distillation
  • PanFormer -> code for 2022 paper: PanFormer: a Transformer Based Model for Pan-sharpening

Image-to-image translation

Translate images e.g. from SAR to RGB.

GANS

GANs are famously used for generating synthetic data, see the section Synthetic data

Adversarial ML

Efforts to detect falsified images & deepfakes. Also checkout Synthetic data

  • UAE-RS -> dataset that provides black-box adversarial samples in the remote sensing field
  • PSGAN -> code for paper: Perturbation Seeking Generative Adversarial Networks: A Defense Framework for Remote Sensing Image Scene Classification
  • SACNet -> code for 2021 paper: Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification

Autoencoders, dimensionality reduction, image embeddings & similarity search

Image retrieval

  • Demo_AHCL_for_TGRS2022 -> code for 2022 paper: Asymmetric Hash Code Learning (AHCL) for remote sensing image retrieval
  • GaLR -> code for 2022 paper: Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information
  • retrievalSystem -> cross-modal image retrieval system
  • AMFMN -> code for the 2021 paper: Exploring a Fine-grained Multiscale Method for Cross-modal Remote Sensing Image Retrieval
  • Active-Learning-for-Remote-Sensing-Image-Retrieval -> unofficial implementation of paper: A Novel Active Learning Method in Relevance Feedback for Content-Based Remote Sensing Image Retrieval
  • CMIR-NET -> code for 2020 paper: A deep learning based model for cross-modal retrieval in remote sensing
  • Deep-Hash-learning-for-Remote-Sensing-Image-Retrieval -> code for 2020 paper: Deep Hash Learning for Remote Sensing Image Retrieval
  • MHCLN -> code for 2018 paper: Deep Metric and Hash-Code Learning for Content-Based Retrieval of Remote Sensing Images
  • HydroViet_VOR -> Object Retrieval in satellite images with Triplet Network

Image Captioning & Visual Question Answering

Mixed data learning

These techniques combine multiple data types, e.g. imagery and text data.

Few-shot learning

This is a class of techniques which attempt to make predictions for classes with few, one or even zero examples provided during training. In zero shot learning (ZSL) the model is assisted by the provision of auxiliary information which typically consists of descriptions/semantic attributes/word embeddings for both the seen and unseen classes at train time (ref). These approaches are particularly relevant to remote sensing, where there may be many examples of common classes, but few or even zero examples for other classes of interest.

Self-supervised, unsupervised & contrastive learning

These techniques use unlabelled datasets. Yann LeCun has described self/unsupervised learning as the 'base of the cake': If we think of our brain as a cake, then the cake base is unsupervised learning. The machine predicts any part of its input for any observed part, all without the use of labelled data. Supervised learning forms the icing on the cake, and reinforcement learning is the cherry on top.

Weakly & semi-supervised learning

These techniques use a partially annotated dataset

  • MARE -> self-supervised Multi-Attention REsu-net for semantic segmentation in remote sensing
  • SSGF-for-HRRS-scene-classification -> code for 2018 paper: A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification
  • SFGAN -> code for 2018 paper: Semantic-Fusion Gans for Semi-Supervised Satellite Image Classification
  • SSDAN -> code for 2021 paper: Multi-Source Semi-Supervised Domain Adaptation Network for Remote Sensing Scene Classification
  • HR-S2DML -> code for 2020 paper: High-Rankness Regularized Semi-Supervised Deep Metric Learning for Remote Sensing Imagery
  • Semantic Segmentation of Satellite Images Using Point Supervision
  • fcd -> code for 2021 paper: Fixed-Point GAN for Cloud Detection. A weakly-supervised approach, training with only image-level labels
  • weak-segmentation -> Weakly supervised semantic segmentation for aerial images in pytorch
  • TNNLS_2022_X-GPN -> Code for paper: Semisupervised Cross-scale Graph Prototypical Network for Hyperspectral Image Classification
  • weakly_supervised -> code for the paper Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery. Demonstrates that segmentation can be performed using small datasets comprised of pixel or image labels
  • wan -> Weakly-Supervised Domain Adaptation for Built-up Region Segmentation in Aerial and Satellite Imagery, with arxiv paper
  • sourcerer -> A Bayesian-inspired deep learning method for semi-supervised domain adaptation designed for land cover mapping from satellite image time series (SITS). Paper
  • MSMatch -> Semi-Supervised Multispectral Scene Classification with Few Labels. Includes code to work with both the RGB and the multispectral (MS) versions of EuroSAT dataset and the UC Merced Land Use (UCM) dataset. Paper
  • Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning with arxiv paper
  • Semi-supervised learning in satellite image classification -> experimenting with MixMatch and the EuroSAT data set
  • ScRoadExtractor -> code for 2020 paper: Scribble-based Weakly Supervised Deep Learning for Road Surface Extraction from Remote Sensing Images
  • ICSS -> code for 2022 paper: Weakly-supervised continual learning for class-incremental segmentation

Active learning

Supervised deep learning techniques typically require a huge number of annotated/labelled examples to provide a training dataset. However labelling at scale takes significant time, expertise and resources. Active learning techniques aim to reduce the total amount of annotation that needs to be performed by selecting the most useful images to label from a large pool of unlabelled images, thus reducing the time to generate useful training datasets. These processes may be referred to as Human-in-the-Loop Machine Learning
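
A minimal illustration of the selection step, using least-confidence (uncertainty) sampling over a pool of unlabelled images; the softmax scores here are random stand-ins for the outputs of whatever model is being trained:

```python
# Sketch: pick the next images to label via least-confidence (uncertainty) sampling.
import numpy as np

def select_for_labelling(probabilities: np.ndarray, budget: int = 10) -> np.ndarray:
    """probabilities: (n_images, n_classes) softmax outputs for the unlabelled pool.
    Returns indices of the `budget` most uncertain images."""
    confidence = probabilities.max(axis=1)      # confidence of the top class
    return np.argsort(confidence)[:budget]      # least confident first

pool_probs = np.random.dirichlet(np.ones(5), size=100)   # stand-in model outputs
print(select_for_labelling(pool_probs, budget=5))
```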

Federated learning

Federated learning is a process for training models in a distributed fashion without sharing of data

Image registration

Image registration is the process of registering one or more images onto another (typically well georeferenced) image. Traditionally this is performed manually by identifying control points (tie-points) in the images, for example using QGIS. This section lists approaches which mostly aim to automate this manual process. There is some overlap with the data fusion section but the distinction I make is that image registration is performed as a prerequisite to downstream processes which will use the registered data as an input.
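
For a sense of what automating tie-point selection can look like with classical tools, OpenCV's ORB features plus a RANSAC homography can coarsely align one image to a reference. This is an illustrative sketch only; results depend heavily on the imagery.

```python
# Classical sketch: align an image to a reference using ORB features + RANSAC homography.
import cv2
import numpy as np

def register(moving_gray: np.ndarray, reference_gray: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(moving_gray, None)
    kp2, des2 = orb.detectAndCompute(reference_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = reference_gray.shape[:2]
    return cv2.warpPerspective(moving_gray, H, (w, h))   # moving image warped onto reference
```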

Data fusion

Data fusion covers techniques which integrate multiple data sources, for example fusing SAR & optical imagery to make predictions about crop type. It can also cover fusion with non-imagery data such as IoT sensor data
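
The simplest fusion strategy is early fusion: stack co-registered SAR and optical bands as extra input channels and let a standard CNN learn the joint representation. A toy sketch of the stacking step, assuming both inputs are already aligned to the same grid:

```python
# Toy early-fusion sketch: stack co-registered optical and SAR bands as input channels.
import numpy as np

def early_fusion_stack(optical: np.ndarray, sar: np.ndarray) -> np.ndarray:
    """optical: (H, W, C_opt), sar: (H, W, C_sar); both co-registered on the same grid.
    Returns an (H, W, C_opt + C_sar) array ready for a standard CNN."""
    assert optical.shape[:2] == sar.shape[:2], "inputs must share the same grid"
    return np.concatenate([optical.astype(np.float32), sar.astype(np.float32)], axis=-1)

fused = early_fusion_stack(np.zeros((256, 256, 4)), np.zeros((256, 256, 2)))
print(fused.shape)  # (256, 256, 6)
```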

  • Awesome-Data-Fusion-for-Remote-Sensing
  • UDALN_GRSL -> Deep Unsupervised Blind Hyperspectral and Multispectral Data Fusion
  • CropTypeMapping -> Crop type mapping from optical and radar (Sentinel-1&2) time series using attention-based deep learning
  • Multimodal-Remote-Sensing-Toolkit -> uses Hyperspectral and LiDAR Data
  • Aerial-Template-Matching -> development of an algorithm for template Matching on aerial imagery applied to UAV dataset
  • DS_UNet -> code for 2021 paper: Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net, uses Onera Satellite Change Detection dataset
  • DDA_UrbanExtraction -> Unsupervised Domain Adaptation for Global Urban Extraction using Sentinel-1 and Sentinel-2 Data
  • swinstfm -> code for paper: Remote Sensing Spatiotemporal Fusion using Swin Transformer
  • LoveCS -> code for 2022 paper: Cross-sensor domain adaptation for high-spatial resolution urban land-cover mapping: from airborne to spaceborne imagery
  • comingdowntoearth -> code for 2021 paper: Implementation of 'Coming Down to Earth: Satellite-to-Street View Synthesis for Geo-Localization'
  • Matching between acoustic and satellite images
  • MapRepair -> Deep Cadastre Maps Alignment and Temporal Inconsistencies Fix in Satellite Images
  • Compressive-Sensing-and-Deep-Learning-Framework -> Compressive Sensing is used as an initial guess to combine data from multiple sources, with LSTM used to refine the result
  • DeepSim -> code for paper: DeepSIM: GPS Spoofing Detection on UAVs using Satellite Imagery Matching
  • MHF-net -> code for 2019 paper: Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net
  • Remote_Sensing_Image_Fusion -> code for 2021 paper: Semi-Supervised Remote Sensing Image Fusion Using Multi-Scale Conditional Generative Adversarial network with Siamese Structure
  • CNNs for Multi-Source Remote Sensing Data Fusion -> code for 2021 paper: Single-stream CNN with Learnable Architecture for Multi-source Remote Sensing Data
  • Deep Generative Reflectance Fusion -> Achieving Landsat-like reflectance at any date by fusing Landsat and MODIS surface reflectance with deep generative models
  • IEEE_TGRS_MDL-RS -> code for 2021 paper: More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification
  • SSRNET -> code for 2022 paper: SSR-NET: Spatial-Spectral Reconstruction Network for Hyperspectral and Multispectral Image Fusion
  • cross-view-image-matching -> code for 2019 paper: Bridging the Domain Gap for Ground-to-Aerial Image Matching
  • CoF-MSMG-PCNN -> code for 2020 paper: Remote Sensing Image Fusion via Boundary Measured Dual-Channel PCNN in Multi-Scale Morphological Gradient Domain
  • robust_matching_network_on_remote_sensing_imagery_pytorch -> code for 2019 paper: A Robust Matching Network for Gradually Estimating Geometric Transformation on Remote Sensing Imagery
  • edcstfn -> code for 2019 paper: An Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion
  • ganstfm -> code for 2021 paper: A Flexible Reference-Insensitive Spatiotemporal Fusion Model for Remote Sensing Images Using Conditional Generative Adversarial Network
  • CMAFF -> code for 2021 paper: Cross-Modality Attentive Feature Fusion for Object Detection in Multispectral Remote Sensing Imagery
  • SOLC -> code for 2022 paper: MCANet: A joint semantic segmentation framework of optical and SAR images for land use classification. Uses WHU-OPT-SAR-dataset
  • MFT -> code for 2022 paper: Multimodal Fusion Transformer for Remote Sensing Image Classification
  • ISPRS_S2FL -> code for 2021 paper: Multimodal Remote Sensing Benchmark Datasets for Land Cover Classification with A Shared and Specific Feature Learning Model

Terrain mapping, Disparity Estimation, Lidar, DEMs & NeRF

Measure surface contours & locate 3D points in space from 2D images. NeRF stands for Neural Radiance Fields and is the term used in deep learning communities to describe a model that generates views of complex 3D scenes based on a partial set of 2D images

Thermal Infrared

SAR

NDVI - vegetation index
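
NDVI is the normalised difference of the near-infrared and red bands, NDVI = (NIR - Red) / (NIR + Red); a minimal per-pixel computation:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    nir, red = nir.astype(np.float32), red.astype(np.float32)
    return (nir - red) / (nir + red + eps)   # values fall in roughly [-1, 1]
```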

General image quality

  • A convolutional autoencoder network can be employed for image denoising, read about this on the Keras blog (a minimal sketch follows this list)
  • jitter-compensation -> Remote Sensing Image Jitter Detection and Compensation Using CNN
  • DeblurGANv2 -> Deblurring (Orders-of-Magnitude) Faster and Better
  • image-quality-assessment -> CNN to predict the aesthetic and technical quality of images
  • Convolutional autoencoder for image denoising -> keras guide
  • piq -> a collection of measures and metrics for image quality assessment
  • FFA-Net -> Feature Fusion Attention Network for Single Image Dehazing
  • DeepCalib -> A Deep Learning Approach for Automatic Intrinsic Calibration of Wide Field-of-View Cameras
  • PerceptualSimilarity -> LPIPS is a perceptual metric which aims to overcome the limitations of traditional metrics such as PSNR & SSIM, to better represent the features the human eye picks up on
  • Optical-RemoteSensing-Image-Resolution -> code for 2018 paper: Deep Memory Connected Neural Network for Optical Remote Sensing Image Restoration. Two applications: Gaussian image denoising and single image super-resolution
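
As flagged in the first item of this list, a small convolutional autoencoder trained to reconstruct clean chips from noisy ones is a common denoising baseline. Below is a minimal PyTorch sketch of that idea; the linked Keras guide covers an equivalent Keras version.

```python
# Minimal sketch: convolutional autoencoder for image denoising (PyTorch).
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # H/2
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # H/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # H/2
            nn.ConvTranspose2d(32, 3, 2, stride=2), nn.Sigmoid(),  # H, outputs in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
clean = torch.rand(8, 3, 128, 128)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
loss = nn.functional.mse_loss(model(noisy), clean)   # training target: reconstruct the clean chip
```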

Neural nets in space

Processing on board a satellite allows less data to be downlinked, e.g. a super-resolved image might be generated from 8 input images, with only the single output image downlinked. Other applications include cloud detection and collision avoidance.

ML best practice

This section includes tips and ideas I have picked up from other practitioners including ai-fast-track, FraPochetti & the IceVision community

Metrics

A number of metrics are common to all model types (but can have slightly different meanings in contexts such as object detection), whilst other metrics are very specific to particular classes of model. The correct choice of metric is particularly critical for imbalanced dataset problems, e.g. object detection

  • TP = true positive, FP = false positive, TN = true negative, FN = false negative
  • Precision is the % of correct positive predictions, calculated as precision = TP/(TP+FP)
  • Recall, or true positive rate (TPR), is the % of true positives captured by the model, calculated as recall = TP/(TP+FN)
  • The F1 score (also called the F-score or the F-measure) is the harmonic mean of precision and recall, calculated as F1 = 2*(precision * recall)/(precision + recall). It conveys the balance between the precision and the recall. Ref
  • The false positive rate (FPR), calculated as FPR = FP/(FP+TN), is often plotted against recall/TPR in an ROC curve which shows how the TPR/FPR tradeoff varies with classification threshold. Lowering the classification threshold returns more true positives, but also more false positives. Note that since TN is not well-defined in object detection, ROC curves are not appropriate.
  • Precision-vs-recall curves visualise the tradeoff between making false positives and false negatives
  • Accuracy is the most commonly used metric in 'real life' but can be a highly misleading metric for imbalanced data sets.
  • IoU is an object detection specific metric, being the average intersection over union of prediction and ground truth bounding boxes for a given confidence threshold
  • mAP@0.5 is another object detection specific metric, being the mean value of the average precision for each class. @0.5 sets a threshold for how much of the predicted bounding box overlaps the ground truth bounding box, i.e. "minimum 50% overlap"
  • For more comprehensive definitions checkout Object-Detection-Metrics
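
To make these definitions concrete, here is a minimal sketch computing precision, recall and F1 from raw counts, plus the IoU of two axis-aligned boxes (using an illustrative [x1, y1, x2, y2] box convention):

```python
# Sketch: precision, recall, F1 from counts, and IoU of two axis-aligned boxes.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def iou(box_a, box_b):
    """Boxes as [x1, y1, x2, y2]; returns intersection-over-union."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(precision_recall_f1(tp=80, fp=20, fn=10))  # (0.8, 0.888..., 0.842...)
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))       # ~0.143
```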

Datasets

This section contains a short list of datasets relevant to deep learning, particularly those which come up regularly in the literature. Warning: satellite image files can be LARGE, and even a small dataset may comprise 50GB+ of imagery

Lists of datasets

Sentinel

Landsat

Maxar

Planet

UC Merced

EuroSAT

PatternNet

Million-AID

DIOR object detection dataset

Multiscene

FAIR1M object detection dataset

DOTA object detection dataset

HRSC RGB ship object detection dataset

SAR Ship Detection Dataset (SSDD)

SAR Aircraft Detection Dataset

xView Challenge Datasets for Humanitarian Assistance and Disaster Response

  • xView1 - Objects in context for overhead imagery. A fine-grained object detection dataset with 60 object classes along an ontology of 8 class types. Over 1,000,000 objects across over 1,400 km^2 of 0.3m resolution imagery. Paper available on arXiv.
  • xView2/xBD - Finding and assessing damaged buildings on pre- and post-natural disaster imagery. With over 850,000 annotated buildings across over 45,000 km^2 of 0.3m resolution imagery, this dataset provides precise segmentation masks and damage labels on a four-level spectrum. Paper available on arXiv.
  • xView3 - Detecting dark vessels engaged in illegal, unreported, and unregulated (IUU) fishing activities on synthetic aperture radar (SAR) imagery. With human and algorithm annotated instances of vessels and fixed infrastructure across 43,200,000 km^2 of Sentinel-1 imagery, this multi-modal dataset enables algorithms to detect and classify dark vessels. Paper available on arXiv.
  • All reference code, dataset processing utilities, and winning model codes + weights are available on the xView GitHub organization page: https://github.com/DIUx-xView

Vehicle Detection in Aerial Imagery (VEDAI)

Cars Overhead With Context (COWC)

AI-TOD - tiny object detection

  • https://github.com/jwwangchn/AI-TOD
  • The mean size of objects in AI-TOD is about 12.8 pixels, which is much smaller than in other datasets
  • NWD -> code for 2021 paper: A Normalized Gaussian Wasserstein Distance for Tiny Object Detection. Uses AI-TOD dataset

Counting from Sky

AIRS (Aerial Imagery for Roof Segmentation)

Inria building/not building segmentation dataset

AICrowd Mapping Challenge building segmentation dataset

  • Dataset release as part of the mapping-challenge
  • 300x300 pixel RGB images with annotations in COCO format
  • Imagery appears to be global but with significant fraction from North America
  • Winning solution published by neptune.ai here, achieved precision 0.943 and recall 0.954 using Unet with Resnet.
  • mappingchallenge -> YOLOv5 applied to the AICrowd Mapping Challenge dataset

BONAI - building footprint dataset

GID15 large scale semantic segmentation dataset

LEVIR-CD building change detection dataset

ISPRS

iSAID

SpaceNet

WorldStrat Dataset

  • https://github.com/worldstrat/worldstrat
  • Nearly 10,000 km² of free high-resolution satellite imagery of unique locations which ensure stratified representation of all types of land-use across the world: from agriculture to ice caps, from forests to multiple urbanization densities.
  • Each high-resolution image (1.5 m/pixel) comes with multiple temporally-matched low-resolution images from the freely accessible lower-resolution Sentinel-2 satellites (10 m/pixel)
  • Several super-resolution benchmark models trained on it

Tensorflow datasets

  • resisc45 -> RESISC45 dataset is a publicly available benchmark for Remote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class.
  • eurosat -> EuroSAT dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes with 27000 labeled and geo-referenced samples.
  • BigEarthNet -> a large-scale Sentinel-2 land use classification dataset, consisting of 590,326 Sentinel-2 image patches. The image patch size on the ground is 1.2 x 1.2 km with variable image size depending on the channel resolution. This is a multi-label dataset with 43 imbalanced labels. Official website includes version of the dataset with Sentinel 1 & 2 chips
  • so2sat -> a dataset consisting of co-registered synthetic aperture radar and multispectral optical image patches acquired by Sentinel 1 & 2

AWS datasets

Microsoft

Google Earth Engine (GEE)

Since there is a whole community around GEE I will not reproduce it here but list very select references. Get started at https://developers.google.com/earth-engine/

Radiant Earth

Image captioning datasets

Weather Datasets

Forest datasets

  • awesome-forests -> A curated list of ground-truth forest datasets for the machine learning and forestry community
  • ReforesTree -> A dataset for estimating tropical forest biomass based on drone and field data

Geospatial datasets

  • Resource Watch provides a wide range of geospatial datasets and a UI to visualise them

Time series & change detection datasets

  • BreizhCrops -> A Time Series Dataset for Crop Type Mapping
  • The SeCo dataset contains image patches from Sentinel-2 tiles captured at different timestamps at each geographical location. Download SeCo here
  • Onera Satellite Change Detection Dataset comprises 24 pairs of multispectral images taken from the Sentinel-2 satellites between 2015 and 2018
  • SYSU-CD -> The dataset contains 20000 pairs of 0.5-m aerial images of size 256×256 taken between the years 2007 and 2014 in Hong Kong

DEM (digital elevation maps)

  • Shuttle Radar Topography Mission, search online at usgs.gov
  • Copernicus Digital Elevation Model (DEM) on S3, represents the surface of the Earth including buildings, infrastructure and vegetation. Data is provided as Cloud Optimized GeoTIFFs. link
  • Awesome-DEM

UAV & Drone datasets

Other datasets

  • land-use-land-cover-datasets
  • EORSSD-dataset -> Extended Optical Remote Sensing Saliency Detection (EORSSD) Dataset
  • RSD46-WHU -> 46 scene classes for image classification, free for education, research and commercial use
  • RSOD-Dataset -> dataset for object detection in PASCAL VOC format. Aircraft, playgrounds, overpasses & oiltanks
  • VHR-10_dataset_coco -> Object detection and instance segmentation dataset based on NWPU VHR-10 dataset. RGB & SAR
  • HRSID -> high resolution sar images dataset for ship detection, semantic segmentation, and instance segmentation tasks
  • MAR20 -> Military Aircraft Recognition dataset
  • RSSCN7 -> Dataset of the article “Deep Learning Based Feature Selection for Remote Sensing Scene Classification”
  • Sewage-Treatment-Plant-Dataset -> object detection
  • TGRS-HRRSD-Dataset -> High Resolution Remote Sensing Detection (HRRSD)
  • MUSIC4HA -> MUltiband Satellite Imagery for object Classification (MUSIC) to detect Hot Area
  • MUSIC4GC -> MUltiband Satellite Imagery for object Classification (MUSIC) to detect Golf Course
  • MUSIC4P3 -> MUltiband Satellite Imagery for object Classification (MUSIC) to detect Photovoltaic Power Plants (solar panels)
  • ABCDdataset -> damage detection dataset to identify whether buildings have been washed-away by tsunami
  • OGST -> Oil and Gas Tank Dataset
  • LS-SSDD-v1.0-OPEN -> Large-Scale SAR Ship Detection Dataset
  • S2Looking -> A Satellite Side-Looking Dataset for Building Change Detection, paper
  • Zurich Summer Dataset -> Semantic segmentation of urban scenes
  • AISD -> Aerial Imagery dataset for Shadow Detection
  • LEVIR-Ship -> a dataset for tiny ship detection under medium-resolution remote sensing images
  • Awesome-Remote-Sensing-Relative-Radiometric-Normalization-Datasets
  • SearchAndRescueNet -> Satellite Imagery for Search And Rescue Dataset, with example Faster R-CNN model
  • geonrw -> orthorectified aerial photographs, LiDAR derived digital elevation models and segmentation maps with 10 classes. With repo
  • Thermal power plants dataset
  • University1652-Baseline -> A Multi-view Multi-source Benchmark for Drone-based Geo-localization
  • benchmark_ISPRS2021 -> A new stereo dense matching benchmark dataset for deep learning
  • WHU-SEN-City -> A paired SAR-to-optical image translation dataset which covers 34 big cities of China
  • SAR_vehicle_detection_dataset -> 104 SAR images for vehicle detection, collected from Sandia MiniSAR/FARAD SAR images and MSTAR images
  • ERA-DATASET -> A Dataset and Deep Learning Benchmark for Event Recognition in Aerial Videos
  • SSL4EO-S12 -> a large-scale dataset for self-supervised learning in Earth observation
  • UBC-dataset -> a dataset for building detection and classification from very high-resolution satellite imagery with the focus on object-level interpretation of individual buildings
  • AIR-CD -> a challenging cloud detection data set called AIR-CD, with higher spatial resolution and more representative landcover types
  • AIR-PolSAR-Seg -> a challenging PolSAR terrain segmentation dataset
  • HRC_WHU -> High-Resolution Cloud Detection Dataset comprising 150 RGB images and a resolution varying from 0.5 to 15 m in different global regions
  • AeroRIT -> A New Scene for Hyperspectral Image Analysis
  • Building_Dataset -> High-speed Rail Line Building Dataset Display
  • Haiming-Z/MtS-WH-reference-map -> a reference map for change detection based on MtS-WH
  • MtS-WH-Dataset -> Multi-temporal Scene WuHan (MtS-WH) Dataset
  • Multi-modality-image-matching -> image matching dataset including several remote sensing modalities
  • RID -> Roof Information Dataset for CV-Based Photovoltaic Potential Assessment. With paper
  • APKLOT -> A dataset for aerial parking block segmentation
  • QXS-SAROPT -> Optical and SAR pairing dataset from the paper: The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion

Kaggle

Kaggle hosts over 200 satellite image datasets, search results here. The kaggle blog is an interesting read.

Kaggle - Amazon from space - classification challenge

Kaggle - DSTL segmentation challenge

Kaggle - DeepSat land cover classification

Kaggle - Airbus ship detection challenge

Kaggle - Shipsnet classification dataset

Kaggle - Ships in Google Earth

Kaggle - Ships in San Francisco Bay

Kaggle - Swimming pool and car detection using satellite imagery

Kaggle - Planesnet classification dataset

Kaggle - CGI Planes in Satellite Imagery w/ BBoxes

Kaggle - Draper challenge to place images in order of time

Kaggle - Dubai segmentation

Kaggle - Massachusetts Roads & Buildings Datasets - segmentation

Kaggle - Deepsat classification challenge

Not satellite but airborne imagery. Each sample is a 28x28 pixel image patch with 4 bands - red, green, blue and near infrared - and the training and test labels are one-hot encoded 1x6 vectors. Data is provided in .mat Matlab format.

  • Sat4 500,000 image patches covering four broad land cover classes - barren land, trees, grassland and a class that consists of all land cover classes other than the above three
  • Sat6 405,000 image patches each of size 28x28 and covering 6 landcover classes - barren land, trees, grassland, roads, buildings and water bodies.

Kaggle - High resolution ship collections 2016 (HRSC2016)

Kaggle - SWIM-Ship Wake Imagery Mass

Kaggle - Understanding Clouds from Satellite Images

In this challenge, you will build a model to classify cloud organization patterns from satellite images.

Kaggle - 38-Cloud Cloud Segmentation

Kaggle - Airbus Aircraft Detection Dataset

Kaggle - Airbus oil storage detection dataset

Kaggle - Satellite images of hurricane damage

Kaggle - Austin Zoning Satellite Images

Kaggle - Statoil/C-CORE Iceberg Classifier Challenge

Kaggle - Land Cover Classification Dataset from DeepGlobe Challenge - segmentation

Kaggle - Next Day Wildfire Spread

A Data Set to Predict Wildfire Spreading from Remote-Sensing Data

Kaggle - Satellite Next Day Wildfire Spread

Inspired by the above dataset, using different data sources

Kaggle - Spacenet 7 Multi-Temporal Urban Change Detection

Kaggle - Satellite Images to predict poverty in Africa

Kaggle - NOAA Fisheries Steller Sea Lion Population Count

Kaggle - Arctic Sea Ice Image Masking

Kaggle - Overhead-MNIST

Kaggle - Satellite Image Classification

Kaggle - miscellaneous

Synthetic data

Training data can be hard to acquire, particularly for rare events such as change detection after disasters, or imagery of rare classes of objects. In these situations, generating synthetic training data might be the only option. This has become quite sophisticated, with 3D models being used in game engines such as Unreal.

Online platforms for analytics

  • This article discusses some of the available platforms
  • Pangeo -> There is no single software package called “pangeo”; rather, the Pangeo project serves as a coordination point between scientists, software, and computing infrastructure. Includes open source resources for parallel processing using Dask and Xarray. Pangeo recently announced their 2.0 goals: pivoting away from directly operating cloud-based JupyterHubs, and towards education and research
  • Descartes Labs -> access to EO imagery from a variety of providers via python API
  • Planet have a Jupyter notebook platform which can be deployed locally.
  • eurodatacube.com -> data & platform for EO analytics in Jupyter env, paid
  • up42 is a developer platform and marketplace, offering all the building blocks for powerful, scalable geospatial products
  • Microsoft Planetary Computer -> direct Google Earth Engine competitor in the making?
  • eofactory.ai -> supports multiple public and private data sources that can be used to analyse and extract information
  • mapflow.ai -> imagery analysis platform with instant access to the major satellite imagery providers, models for extracting building footprints etc & a QGIS plugin
  • openeo -> EO data platform by ESA
  • Adam platform -> the Advanced geospatial Data Management platform (ADAM) is a tool to access a large variety and volume of global environmental data

Free online compute

A GPU is required for training deep learning models (but not necessarily for inferencing), and this section lists a couple of free Jupyter environments with GPU available. There is a good overview of online Jupyter development environments on the fastai site. I personally use Colab Pro with data hosted on Google Drive, or Sagemaker if I have very long running training jobs.

Google Colab

  • Colaboratory notebooks with a GPU backend, free for 12 hours at a time. Note that the GPU may be shared with other users, so if you aren't getting good performance try reloading.
  • Also a pro tier for $10 a month -> https://colab.research.google.com/signup
  • Tensorflow, pytorch & fastai available but you may need to update them
  • Colab Alive is a chrome extension that keeps Colab notebooks alive.
  • colab-ssh -> lets you ssh to a colab instance like it’s an EC2 machine and install packages that require full linux functionality

Kaggle - also Google!

  • Free to use
  • GPU Kernels - may run for 1 hour
  • Tensorflow, pytorch & fastai available but you may need to update them
  • Advantage that many datasets are already available

AWS SageMaker Studio Lab

Others

State of the art engineering

  • Compute and data storage are on the cloud. Read how Planet and Airbus use the cloud
  • Traditional data formats aren't designed for processing on the cloud, so new standards are evolving such as COG and STAC
  • Google Earth Engine and Microsoft Planetary Computer are democratising access to 'planetary scale' compute
  • Google Colab and others are providing free access to GPU compute to enable training deep learning models
  • No-code platforms and auto-ml are making ML techniques more accessible than ever
  • Serverless compute (e.g. AWS Lambda) means that managing servers may become a thing of the past
  • Custom hardware is being developed for rapid training and inferencing with deep learning models, both in the datacenter and at the edge
  • Supervised ML methods typically require large annotated datasets, but approaches such as self-supervised and active learning require less or even no annotation
  • Computer vision traditionally delivered high performance image processing on a CPU by using compiled languages like C++, as used by OpenCV for example. The advent of GPUs is changing the paradigm, with alternatives optimised for GPU being created, such as Kornia
  • Whilst the combo of python and keras/tensorflow/pytorch is currently preeminent, new python libraries such as Jax and alternative languages such as Julia are showing serious promise

Cloud providers

An overview of the most relevant services provided by AWS, Google and Microsoft. Also consider one of the many smaller but more specialised platforms such as paperspace

AWS

Google Cloud

  • For storage use Cloud Storage (AWS S3 equivalent)
  • For data warehousing use BigQuery (AWS Redshift equivalent). Visualize massive spatial datasets directly in BigQuery using CARTO
  • For model training use Vertex (AWS Sagemaker equivalent)
  • For containerised apps use Cloud Run (AWS App Runner equivalent but can scale to zero)

Microsoft Azure

Deploying models

This section discusses how to get a trained machine learning & specifically deep learning model into production. For an overview on serving deep learning models checkout Practical-Deep-Learning-on-the-Cloud. There are many options if you are happy to dedicate a server, although you may want a GPU for batch processing. For serverless use AWS lambda.

Rest API on dedicated server

A common approach to serving up deep learning model inference code is to wrap it in a REST API. The API can be implemented in python (flask or FastAPI), and hosted on a dedicated server e.g. an EC2 instance. Note that making this a scalable solution will require significant experience.
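A minimal sketch of this pattern with FastAPI is below; the model loading and prediction logic are placeholders to be replaced with your own framework code.

```python
# Minimal sketch: wrap model inference in a REST API with FastAPI.
# `model` and the dummy score are placeholders for a real framework forward pass.
import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()
model = None  # e.g. load your trained model once at startup


@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Read the uploaded image chip into a numpy array
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    array = np.asarray(image)
    score = float(array.mean()) / 255.0  # dummy "prediction" - replace with model inference
    return {"score": score}
```

Serve it with e.g. `uvicorn main:app`, then POST an image chip to `/predict`.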

Framework specific model serving

If you are happy to live with some lock-in, these are good options:

Framework agnostic model serving

Using lambda functions - i.e. serverless

Using lambda functions allows inference without having to configure or manage the underlying infrastructure
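As a rough illustration, an AWS Lambda handler for inference might look like the sketch below; the event structure assumes an API Gateway trigger with a base64-encoded image body, and the prediction is a placeholder.

```python
# Sketch of a serverless inference handler for AWS Lambda.
# Assumes an API Gateway trigger delivering a base64-encoded image in the body;
# the "prediction" below is a placeholder for a real model call.
import base64
import json


def handler(event, context):
    image_bytes = base64.b64decode(event["body"])
    score = (len(image_bytes) % 100) / 100.0  # dummy value - replace with model inference
    return {
        "statusCode": 200,
        "body": json.dumps({"score": score}),
    }
```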

Models in the browser

The model is run in the browser itself on live images, ensuring processing is always with the latest model available and removing the requirement for dedicated server side inferencing

Model optimisation for deployment

The general approaches are outlined in this article from NVIDIA which discusses fine tuning a model pre-trained on synthetic data (Rareplanes) with 10% real data, then pruning the model to reduce its size, before quantizing the model to improve inference speed. There are also toolkits for optimisation, in particular ONNX which is framework agnostic.
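As a hedged example of the export step, a PyTorch model can be converted to ONNX as below; the architecture and input shape are illustrative only.

```python
# Sketch: export a PyTorch model to ONNX for framework-agnostic deployment.
# The resnet18 architecture and 224x224 input are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # substitute your trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["image"],
    output_names=["logits"],
)
# The exported model.onnx can then be pruned/quantized and served with ONNX Runtime
```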

MLOps

MLOps is a set of practices that aims to deploy and maintain machine learning models in production reliably and efficiently.

Model monitoring

Once your model is deployed you will want to monitor for data errors, broken pipelines, and model performance degradation/drift ref

Image annotation

For supervised machine learning, you will require annotated images. For example if you are performing object detection you will need to annotate images with bounding boxes. Check that your annotation tool of choice supports large image (likely geotiff) files, as not all will. Note that GeoJSON is widely used by remote sensing researchers but this annotation format is not commonly supported in general computer vision frameworks, and in practice you may have to convert the annotation format to use the data with your chosen framework. There are both closed and open source tools for creating and converting annotation formats. Some of these tools are simply for performing annotation, whilst others add features such as dataset management and versioning. Note that self-supervised and active learning approaches might circumvent the need to perform a large scale annotation exercise. Note that tiffs/geotiffs cannot be displayed by most browsers (Chrome), but CAN render in Safari.

Annotation tools with GEO features

Also check the section Image handling, manipulation & dataset creation

  • GroundWork is designed for annotating and labeling geospatial data like satellite imagery, from Azavea
  • labelbox.com -> free tier is quite generous, supports annotating Geotiffs & returning annotations with geospatial coordinates. Watch this webcast
  • diffgram describes itself as a complete training data platform for machine learning delivered as a single application, supports streaming data to pytorch & tensorflow. COGS can be annotated
  • iris -> Tool for manual image segmentation and classification of satellite imagery
  • If you are considering building an in house annotation platform read this article. Used PostGis database, GeoJson format and GIS standard in a stateless architecture
  • satellite-imagery-labeling-tool -> from Microsoft, this is a lightweight web-interface for creating and sharing vector annotations over satellite/aerial imagery scenes
  • RSLabel -> remote sensing (RS) image annotation tool for deep learning
  • encord -> supports annotating SAR

Open source annotation tools

  • awesome-data-labeling -> long list of annotation tools
  • awesome-open-data-annotation -> another long list of annotation tools
  • labelImg is the classic desktop tool, limited to bounding boxes for object detection. Also checkout roLabelImg which supports ROTATED rectangle regions, as often occurs in aerial imagery. labelImg_OBB is another fork supporting oriented bounding boxes (OBB)
  • Labelme is a very popular & simple desktop app for polygonal annotation suitable for object detection and semantic segmentation. Note it outputs annotations in a custom LabelMe JSON format which you will need to convert, e.g. using labelme2coco. Read Labelme Image Annotation for Geotiffs
  • Label Studio is a multi-type data labeling and annotation tool with standardized output format, syncing to buckets, and supports importing pre-annotations (create with a model). Checkout label-studio-converter for converting Label Studio annotations into common dataset formats
  • CVAT supports object detection, segmentation and classification via a local web app. This article on Roboflow gives a good intro to CVAT. Checkout CVAT images validator
  • VoTT -> an electron app for building end to end Object Detection Models from Images and Videos, by Microsoft
  • Create your own annotation tool using Bokeh Holoviews, tkinter, or see these dash examples for object detection and segmentation
  • Deeplabel is a cross-platform tool for annotating images with labelled bounding boxes. Deeplabel also supports running inference using state-of-the-art object detection models like Faster-RCNN and YOLOv4. With support out-of-the-box for CUDA, you can quickly label an entire dataset using an existing model.
  • Alturos.ImageAnnotation is a collaborative tool for labeling image data on S3 for yolo
  • pigeonXT -> create custom image classification annotators within Jupyter notebooks
  • ipyannotations -> Image annotations in python using Jupyter notebooks
  • Label-Detect -> a graphical image annotation tool which also lets a user train and test on large satellite images, a fork of the popular labelImg tool
  • Swipe-Labeler -> Swipe Labeler is a Graphical User Interface based tool that allows rapid labeling of image data
  • SuperAnnotate can be run locally or used via a cloud service
  • dash_doodler -> A web application built with plotly/dash for image segmentation with minimal supervision
  • remo -> A webapp and Python library that lets you explore and control your image datasets
  • TensorFlow Object Detection API provides a handy utility for object annotation within Google Colab notebooks. See usage here
  • coco-annotator -> Web-based image segmentation tool for object detection, localization, and keypoints
  • pylabel -> Python library for computer vision labeling tasks. The core functionality is to translate bounding box annotations between different formats-for example, from coco to yolo. PyLabel also includes an image labeling tool that runs in a Jupyter notebook that can annotate images manually or perform automatic labeling using a pre-trained model
  • BMW-Labeltool-Lite -> bounding box annotator
  • django-labeller -> An image labelling tool for creating segmentation data sets, for Django and Flask
  • scalabel -> supports 2D images and 3D point clouds
  • Detection-Label-Tool -> Change detection and object annotation, uses PyQt

Cloud hosted & paid annotation tools & services

Several open source tools are also available on the cloud, including CVAT, label-studio & Diffgram. In general cloud solutions will provide a lot of infrastructure and storage for you, as well as integration with outsourced annotators.

  • GroundWork is designed for annotating and labeling geospatial data like satellite imagery, from Azavea
  • labelbox.com -> free tier is quite generous, supports annotating Geotiffs & returning annotations with geospatial coordinates. Watch this webcast
  • Roboflow -> in addition to annotation this platform makes it easy to convert between annotation formats & manage datasets, as well as train and deploy custom models to private API endpoints. Read How to Train Computer Vision Models on Aerial Imagery
  • supervise.ly is one of the more fully featured platforms, decent free tier
  • AWS supports image annotation via the Rekognition Custom Labels console
  • rectlabel is a desktop app for MacOS to annotate images for bounding box object detection and segmentation, paid and free (rectlabel-lite) versions
  • hasty.ai -> supports model assisted annotation & inferencing

Annotation formats

Note there are many annotation formats, although PASCAL VOC and coco-json are the most commonly used. I recommend using geojson for storing polygons, then converting these to the required format when needed.

  • PASCAL VOC format: XML files in the format used by ImageNet
  • coco-json format: JSON in the format used by the 2015 COCO dataset
  • YOLO Darknet TXT format: contains one text file per image, used by YOLO
  • Tensorflow TFRecord: a proprietary binary file format used by the Tensorflow Object Detection API
  • Many more formats listed here
  • OBB: oriented bounding boxes are polygons representing rotated rectangles
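As a concrete example of the geojson-first approach recommended above, the sketch below converts a polygon stored in world coordinates into a COCO-style pixel bounding box [x, y, width, height]; the file names and the single-Polygon geojson structure are assumptions.

```python
# Sketch: convert a geojson Polygon (world coordinates) into a COCO-style
# pixel bounding box [x, y, width, height] using the raster's georeferencing.
# "labels.geojson" and "scene.tif" are placeholder file names, and the geojson
# is assumed to share the raster's CRS.
import json

import rasterio

with open("labels.geojson") as f:
    feature = json.load(f)["features"][0]

exterior = feature["geometry"]["coordinates"][0]  # exterior ring of the Polygon

with rasterio.open("scene.tif") as src:
    rows_cols = [src.index(x, y) for x, y in exterior]  # world (x, y) -> image (row, col)

rows = [rc[0] for rc in rows_cols]
cols = [rc[1] for rc in rows_cols]
coco_bbox = [min(cols), min(rows), max(cols) - min(cols), max(rows) - min(rows)]
print(coco_bbox)
```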

Annotation visualisation & conversion tools

Tools to visualise annotations & convert between formats. Note that most annotation software will allow you to visualise existing annotations

  • Dataset-Converters -> a conversion toolset between different object detection and instance segmentation annotation formats
  • FiftyOne -> open-source tool for building high quality datasets and computer vision models. Visualise labels, evaluate model predictions, explore scenarios of interest, identify failure modes, find annotation mistakes, and much more! Read Nearest Neighbor Embeddings Search with Qdrant and FiftyOne
  • rebox -> Easily convert between bounding box annotation formats
  • Pascal VOC BBox Viewer
  • COCO-Assistant -> Helper for dealing with MS-COCO annotations; Merge datasets, Remove specific category from dataset, Generate annotations statistics - distribution of object areas and category distribution
  • pybboxes -> Light weight toolkit for bounding boxes providing conversion between bounding box types and simple computations
  • voc2coco -> Convert VOC format XMLs to COCO format json
  • ObjectDetectionEval -> Parse all kinds of object detection databases (ImageNet, COCO, YOLO, PascalVOC, OpenImage, CVAT, LabelMe, etc.) & save to other formats
  • LabelMeYoloConverter -> Convert LabelMe Annotation Tool JSON format to YOLO text file format

Open source software

By software, I here mean desktop type apps. A note on licensing: The two general types of licenses for open source are copyleft and permissive. Copyleft requires that subsequent derived software products also carry the license forward, e.g. the GNU Public License (GNU GPLv3). For permissive licenses, options to modify and use the code as one pleases are more open, e.g. MIT & Apache 2. Checkout choosealicense.com/

General utilities

Scripts and command line applications

  • geospatial-cli -> a collection of geospatial programs with commandline interface
  • PyShp -> The Python Shapefile Library (PyShp) reads and writes Shapefiles in pure Python
  • s2p -> a Python library and command line tool that implements a stereo pipeline which produces elevation models from images taken by high resolution optical satellites such as Pléiades, WorldView, QuickBird, Spot or Ikonos
  • EarthPy -> A set of helper functions to make working with spatial data in open source tools easier. Read Exploratory Data Analysis (EDA) on Satellite Imagery Using EarthPy
  • pygeometa -> provides a lightweight and Pythonic approach for users to easily create geospatial metadata in standards-based formats using simple configuration files
  • pesto -> PESTO is designed to ease the process of packaging a Python algorithm as a processing web service into a docker image. It contains shell tools to generate all the boilerplate to build an OpenAPI processing web service compliant with the Geoprocessing-API. By Airbus Defence And Space
  • GEOS -> Google Earth Overlay Server (GEOS) is a python-based server for creating Google Earth overlays of tiled maps. You can also display maps in the web browser, measure distances and print maps as high-quality PDFs.
  • GeoDjango intends to be a world-class geographic Web framework. Its goal is to make it as easy as possible to build GIS Web applications and harness the power of spatially enabled data. Some features of GDAL are supported.
  • rasterstats -> summarize geospatial raster datasets based on vector geometries
  • turfpy -> a Python library for performing geospatial data analysis which reimplements turf.js
  • rsgislib -> Remote Sensing and GIS Software Library; python module tools for processing spatial and image data
  • eo-learn -> seamlessly access and process spatio-temporal image sequences acquired by any satellite fleet in a timely and automatic manner. See eo-learn-examples
  • RStoolbox: Tools for Remote Sensing Data Analysis in R
  • nd -> Framework for the analysis of n-dimensional, multivariate Earth Observation data, built on xarray
  • reverse-geocoder -> a fast, offline reverse geocoder in Python
  • MuseoToolBox -> a python library to simplify the use of raster/vector, especially for machine learning and remote sensing
  • py6s -> an interface to the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) atmospheric Radiative Transfer Model
  • timvt -> PostGIS based Vector Tile server built on top of the modern and fast FastAPI framework
  • titiler -> A dynamic Web Map tile server using FastAPI
  • BRAILS -> an AI-based pipeline for city-scale building information modelling (BIM)
  • color-thief-py -> Grabs the dominant color or a representative color palette from an image
  • force -> an all-in-one processing engine for medium-resolution Earth Observation image archives
  • mapwarper -> an open source map geo-rectification, warping and georeferencing application
  • sarpy -> A basic Python library to demonstrate reading, writing, display, and simple processing of complex SAR data using the NGA SICD standard
  • buzzard -> Advanced raster and geometry manipulations
  • sentinel1denoised -> Thermal noise subtraction, scalloping correction, angular correction
  • kart -> Distributed version-control for geospatial and tabular data
  • picogeojson -> a Python library for reading, writing, and working with GeoJSON
  • shareloc -> a simple remote sensing geometric library, to perform image coordinates projections between sensor and ground and vice versa
  • geoblaze -> Blazing Fast JavaScript Raster Processing Engine
  • nasa-wildfires -> Download wildfire hotspots detected by NASA satellites and the Fire Information for Resource Management System (FIRMS)
  • SSGP-toolbox -> Simple Spatial Gapfilling Processor. Toolbox for filling gaps in spatial datasets
  • imgreg2D -> 2D image registration in python, using napari
  • georust -> A collection of geospatial tools and libraries written in Rust
  • DataPillager -> Download data from Esri REST service
  • litexplore -> a Python web app that lets you explore remote SQLite databases over SSH connections
  • tifeatures -> Simple and Fast Geospatial Features API for PostGIS
  • pyroSAR -> framework for large-scale SAR satellite data processing
  • S1_NRB -> A prototype processor for the Sentinel-1 Normalised Radar Backscatter product
  • AGBench -> a Python library that benchmarks satellite-based aboveground biomass or carbon estimate maps
  • mbtiles-s3-server -> Python server to on-the-fly extract and serve vector tiles from an mbtiles file on S3
  • matico -> a set of tools and services that allow users to manage geospatial datasets, build APIs that use those datasets and full geospatial applications with little to no code
  • gmtsar -> easy and fast satellite interferometry (InSAR) processing

Low level numerical & data formats

Image processing, handling, manipulation

  • Pillow is the Python Imaging Library -> this will be your go-to package for image manipulation in python
  • opencv-python is pre-built CPU-only OpenCV packages for Python
  • kornia is a differentiable computer vision library for PyTorch, like openCV but on the GPU. Perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection that operate directly on tensors.
  • tifffile -> Read and write TIFF files
  • xtiff -> A small Python 3 library for writing multi-channel TIFF stacks
  • geotiff -> A noGDAL tool for reading and writing geotiff files
  • geolabel-maker -> combine satellite or aerial imagery with vector spatial data to create your own ground-truth dataset in the COCO format for deep-learning models
  • imagehash -> Image hashes tell whether two images look nearly identical
  • fake-geo-images -> A module to programmatically create geotiff images which can be used for unit tests
  • imagededup -> Finding duplicate images made easy! Uses perceptual hashing
  • duplicate-img-detection -> A basic duplicate image detection service using perceptual image hash functions and nearest neighbor search, implemented using faiss, fastapi, and imagehash
  • rmstripes -> Remove stripes from images with a combined wavelet/FFT approach
  • activeloopai Hub -> The fastest way to store, access & manage datasets with version-control for PyTorch/TensorFlow. Works locally or on any cloud. Scalable data pipelines.
  • sewar -> All image quality metrics you need in one package
  • Satellite imagery label tool -> provides an easy way to collect a random sample of labels over a given scene of satellite imagery
  • Missing-Pixel-Filler -> given images that may contain missing data regions (like satellite imagery with swath gaps), returns these images with the regions filled
  • color_range_filter -> a script that allows us to find range of colors in images using openCV, and then convert them into geo vectors
  • eo4ai -> easy-to-use tools for preprocessing datasets for image segmentation tasks in Earth Observation
  • rasterix -> a cross-platform utility built around the GDAL library and the Qt framework designed to process geospatial raster data
  • datumaro -> Dataset Management Framework, a Python library and a CLI tool to build, analyze and manage Computer Vision datasets
  • sentinelPot -> a python package to preprocess Sentinel 1&2 imagery
  • ImageAnalysis -> Aerial imagery analysis, processing, and presentation scripts.
  • rastertodataframe -> Convert any GDAL compatible raster to a Pandas DataFrame
  • yeoda -> provides lower and higher-level data cube classes to work with well-defined and structured earth observation data
  • tiles-to-tiff -> Python script for converting XYZ raster tiles for slippy maps to a georeferenced TIFF image
  • telluric -> a Python library to manage vector and raster geospatial data in an interactive and easy way
  • Sniffer -> A python application for sorting through geospatial imagery
  • pyjeo -> a library for image processing for geospatial data implemented in JRC Ispra, with paper
  • vpv -> Image viewer designed for image processing experts
  • arop -> Automated Registration and Orthorectification Package
  • satellite_image -> Python package to process images from Landsat satellites and return geographic information, cloud mask, numpy array, geotiff
  • large_image -> Python modules to work with large multiresolution images
  • ResizeRight -> The correct way to resize images or tensors. For Numpy or Pytorch (differentiable)
  • pysat -> a package providing a simple and flexible interface for downloading, loading, cleaning, managing, processing, and analyzing scientific measurements
  • plcompositor -> c++ tool from Planet to create seamless and cloudless image mosaics from deep stacks of satellite imagery

Image chipping/tiling & merging

Since raw images can be very large, it is usually necessary to chip/tile them into smaller images before annotation & training
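For orientation, non-overlapping chipping is only a few lines of numpy (see the sketch below); the libraries listed afterwards add overlap, padding, georeferencing and merging.

```python
# Minimal sketch: split a large image array into non-overlapping fixed-size tiles.
# Real-world tiling usually also needs overlap, padding and georeferenced output,
# which the dedicated libraries listed below provide.
import numpy as np


def chip(image: np.ndarray, size: int = 256):
    """Yield (row, col, tile) for each non-overlapping size x size tile."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield r, c, image[r : r + size, c : c + size]


tiles = list(chip(np.zeros((1024, 1024, 3), dtype=np.uint8)))
print(len(tiles))  # 16 tiles of 256x256
```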

  • image_slicer -> Split images into tiles. Join the tiles back together
  • tiler by nuno-faria -> split images into tiles and merge tiles into a large image
  • tiler by the-lay -> N-dimensional NumPy array tiling and merging with overlapping, padding and tapering
  • xbatcher -> Xbatcher is a small library for iterating xarray DataArrays in batches. The goal is to make it easy to feed xarray datasets to machine learning libraries such as Keras
  • GeoTagged_ImageChip -> A simple script to create geo tagged image chips from high resolution RS images for training deep learning models such as Unet
  • geotiff-crop-dataset -> A Pytorch Dataloader for tif image files that dynamically crops the image
  • Train-Test-Validation-Dataset-Generation -> app to crop images and create small patches of a large image e.g. Satellite/Aerial Images, which will then be used for training and testing Deep Learning models specifically semantic segmentation models
  • satproc -> Python library and CLI tools for processing geospatial imagery for ML
  • Sliding Window -> break large images into a series of smaller chunks
  • patchify -> A library that helps you split image into small, overlappable patches, and merge patches into original image
  • split-rs-data -> Divide remote sensing images and their labels into data sets of specified size
  • image-reconstructor-patches -> Reconstruct Image from Patches with a Variable Stride
  • rpc_cropper -> A small standalone tool to crop satellite images and their RPC
  • geotile -> python library for tiling the geographic raster data
  • GeoPatch -> generating patches from remote sensing data
  • ImageTilingUtils -> Minimalistic set of image reader agnostic tools to easily iterate over large images

Image dataset creation

Many datasets on kaggle & elsewhere have been created by screen-clipping Google Maps or browsing web portals. The tools below can be used to create datasets programmatically
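As one example of programmatic dataset creation, the sentinelsat usage below sketches a Sentinel-2 search and download; the credentials, AOI file and hub URL are placeholders, and since the Copernicus Open Access Hub has been superseded you should check the current data access endpoints.

```python
# Sketch: search and download Sentinel-2 scenes with sentinelsat.
# Credentials, the AOI geojson and the API URL are placeholders.
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson

api = SentinelAPI("username", "password", "https://apihub.copernicus.eu/apihub")
footprint = geojson_to_wkt(read_geojson("aoi.geojson"))

products = api.query(
    footprint,
    date=("20230101", "20230131"),
    platformname="Sentinel-2",
    cloudcoverpercentage=(0, 30),
)
api.download_all(products)
```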

  • MapTilesDownloader -> A super easy to use map tiles downloader built using Python
  • jimutmap -> get an enormous amount of high resolution satellite images from apple / google maps quickly through multi-threading
  • google-maps-downloader -> A short python script that downloads satellite imagery from Google Maps
  • ExtractSatelliteImagesFromCSV -> extract satellite images using a CSV file that contains latitude and longitude, uses mapbox
  • sentinelsat -> Search and download Copernicus Sentinel satellite images
  • SentinelDownloader -> a high level wrapper to the SentinelSat that provides an object oriented interface, asynchronous downloading, quickview & simpler searching methods
  • GEES2Downloader -> Downloader for GEE S2 bands
  • Sentinel-2 satellite tiles images downloader from Copernicus -> Minimizes data download and combines multiple tiles to return a single area of interest
  • felicette -> Satellite imagery for dummies. Generate JPEG earth imagery from coordinates/location name with publicly available satellite data
  • Easy Landsat Download
  • A simple python scraper to get satellite images of Africa, Europe and Oceania's weather using the Sat24 website
  • RGISTools -> Tools for Downloading, Customizing, and Processing Time Series of Satellite Images from Landsat, MODIS, and Sentinel
  • DeepSatData -> Automatically create machine learning datasets from satellite images
  • landsat_ingestor -> Scripts and other artifacts for landsat data ingestion into Amazon public hosting
  • satpy -> a python library for reading and manipulating meteorological remote sensing data and writing it to various image and data file formats
  • GIBS-Downloader -> a command-line tool which facilitates the downloading of NASA satellite imagery and offers different functionalities in order to prepare the images for training in a machine learning pipeline
  • eodag -> Earth Observation Data Access Gateway
  • pylandsat -> Search, download, and preprocess Landsat imagery
  • landsatxplore -> Search and download Landsat scenes from EarthExplorer
  • OpenSarToolkit -> High-level functionality for the inventory, download and pre-processing of Sentinel-1 data in the python language
  • lsru -> Query and Order Landsat Surface Reflectance data via ESPA
  • eoreader -> Remote-sensing opensource python library reading optical and SAR sensors, loading and stacking bands, clouds, DEM and index in a sensor-agnostic way
  • Export thumbnails from Earth Engine
  • deepsentinel-osm -> A repository to generate land cover labels from OpenStreetMap
  • img2dataset -> Easily turn large sets of image urls to an image dataset. Can download, resize and package 100M urls in 20h on one machine
  • ohsome2label -> Historical OpenStreetMap (OSM) Objects to Machine Learning Training Samples
  • Label Maker -> downloads OpenStreetMap QA Tile information and satellite imagery tiles and saves them as an .npz file for use in machine learning training. This should be used instead of the deprecated skynet-data
  • sentinel2tools -> downloading & basic processing of Sentinel 2 imagery. Read Sentinel2tools: simple lib for downloading Sentinel-2 satellite images
  • Aerial-Satellite-Imagery-Retrieval -> A program using Bing maps tile system to automatically download Aerial / Satellite Imagery given a lat/lon bounding box and level of detail
  • google-maps-at-88-mph -> Google Maps keeps old satellite imagery around for a while – this tool collects what's available for a user-specified region in the form of a GIF
  • srtmDownloader -> Python library (multi-threaded) for retrieving SRTM elevation map of CGIAR-CSI
  • ImageDatasetViz -> create a mosaic of images in a dataset for previewing purposes

Image augmentation packages

Image augmentation is a technique used to expand a training dataset in order to improve the ability of the model to generalise
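A small sketch with albumentations is shown below; applying the same transform to the image and its mask keeps segmentation labels aligned (the shapes are illustrative).

```python
# Sketch: a joint image + mask augmentation pipeline with albumentations.
# The zero arrays stand in for a real image chip and its segmentation mask.
import albumentations as A
import numpy as np

transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

image = np.zeros((256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)

augmented = transform(image=image, mask=mask)
aug_image, aug_mask = augmented["image"], augmented["mask"]
```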

  • AugLy -> A data augmentations library for audio, image, text, and video. By Facebook
  • albumentations -> Fast image augmentation library and an easy-to-use wrapper around other libraries
  • FoHIS -> Towards Simulating Foggy and Hazy Images and Evaluating their Authenticity
  • Kornia provides augmentation on the GPU
  • toolbox by ming71 -> various cv tools, such as label tools, data augmentation, label conversion, etc.
  • AstroAugmentations -> augmentations designed around astronomical instruments
  • Chessmix -> data augmentation method for remote sensing semantic segmentation
  • satellite_object_augmentation -> Object-based augmentation for remote sensing images segmentation via CNN
  • hypernet -> hyperspectral data augmentation

Image formats, data management and catalogues

Deep learning packages, frameworks & projects

  • TorchGeo -> a PyTorch domain library providing datasets, samplers, transforms, and pre-trained models specific to geospatial data, supported by Microsoft. Read Geospatial deep learning with TorchGeo
  • rastervision -> An open source Python framework for building computer vision models on aerial, satellite, and other large imagery sets
  • torchrs -> PyTorch implementation of popular datasets and models in remote sensing tasks
  • torchvision-enhance -> Enhance PyTorch vision for semantic segmentation, multi-channel images and TIF files
  • DeepHyperX -> A Python/pytorch tool to perform deep learning experiments on various hyperspectral datasets
  • DELTA -> Deep Earth Learning, Tools, and Analysis, by NASA is a framework for deep learning on satellite imagery, based on Tensorflow & using MLflow for tracking experiments
  • Lightly is a computer vision framework for training deep learning models using self-supervised learning
  • Icevision offers a curated collection of hundreds of high-quality pre-trained models within an easy to use framework
  • pytorch_eo -> aims to make Deep Learning for Earth Observation data easy and accessible to real-world cases and research alike
  • NGVEO -> applying convolutional neural networks (CNN) to Earth Observation (EO) data from Sentinel 1 and 2 using python and PyTorch
  • chip-n-scale-queue-arranger by developmentseed -> an orchestration pipeline for running machine learning inference at scale. Supports fastai models
  • http://spaceml.org/ -> A Machine Learning toolbox and developer community building the next generation AI applications for space science and exploration
  • TorchSat is an open-source deep learning framework for satellite imagery analysis based on PyTorch (no activity since June 2020)
  • DeepNetsForEO -> Uses SegNET for working on remote sensing images using deep learning (no activity since 2019)
  • RoboSat -> semantic segmentation on aerial and satellite imagery. Extracts features such as: buildings, parking lots, roads, water, clouds (no longer maintained)
  • DeepOSM -> Train a deep learning net with OpenStreetMap features and satellite imagery (no activity since 2017)
  • mapwith.ai -> AI assisted mapping of roads with OpenStreetMap. Part of Open-Mapping-At-Facebook
  • sahi -> A vision library for performing sliced inference on large images/small objects. Read the arxiv paper and article A practical guide to using Slicing-Aided Hyper Inference for analyzing satellite images
  • terragpu -> Python library to process and classify remote sensing imagery by means of GPUs and AI/ML
  • EOTorchLoader -> Pytorch dataloader and pytorch lightning datamodule for Earth Observation imagery
  • satellighte -> an image classification library consisting of state-of-the-art deep learning methods, using PyTorch Lightning
  • aeronetlib -> Python library to work with geospatial raster and vector data for deep learning
  • rsi-semantic-segmentation -> A unified PyTorch framework for semantic segmentation from remote sensing imagery
  • AiTLAS -> implements state-of-the-art AI methods for exploratory and predictive analysis of satellite images
  • mmsegmentation -> Semantic Segmentation Toolbox with support for many remote sensing datasets including LoveDA, Potsdam, Vaihingen & iSAID
  • ODEON landcover -> a set of command-line tools performing semantic segmentation on remote sensing images (aerial and/or satellite) with as many layers as you wish
  • aitlas-arena -> An open-source benchmark framework for evaluating state-of-the-art deep learning approaches for image classification in Earth Observation (EO)
  • PaddleRS -> remote sensing image processing development kit

Model tracking, versioning, specification & compilation

  • dvc -> a git extension to keep track of changes in data, source code, and ML models together
  • Weights and Biases -> keep track of your ML projects. Log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues
  • geo-ml-model-catalog -> provides a common metadata definition for ML models that operate on geospatial data
  • hummingbird -> a library for compiling trained traditional ML models into tensor computations, e.g. scikit learn model to pytorch for fast inference on a GPU
  • deepchecks -> Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort
  • pachyderm -> Data Versioning and Pipelines for MLOps. Read Pachyderm + Label Studio which discusses versioning and lineage of data annotations

Graphing and visualisation

  • hvplot -> A high-level plotting API for the PyData ecosystem built on HoloViews. Allows overlaying data on map tiles, see Exploring USGS Terrain Data in COG format using hvPlot
  • Pyviz examples include several interesting geospatial visualisations
  • napari -> napari is a fast, interactive, multi-dimensional image viewer for Python. It’s designed for browsing, annotating, and analyzing large multi-dimensional images. By integrating closely with the Python ecosystem, napari can be easily coupled to leading machine learning and image analysis tools. Note that to view a 3GB COG I had to install the napari-tifffile-reader plugin.
  • pixel-adjust -> Interactively select and adjust specific pixels or regions within a single-band raster. Built with rasterio, matplotlib, and panel.
  • Plotly Dash can be used for making interactive dashboards
  • folium -> a python wrapper to the excellent leaflet.js which makes it easy to visualize data that’s been manipulated in Python on an interactive leaflet map. Also checkout the streamlit-folium component for adding folium maps to your streamlit apps
  • ipyearth -> An IPython Widget for Earth Maps
  • geopandas-view -> Interactive exploration of GeoPandas GeoDataFrames
  • geogif -> Turn xarray timestacks into GIFs
  • leafmap -> geospatial analysis and interactive mapping with minimal coding in a Jupyter environment
  • xmovie -> A simple way of creating movies from xarray objects
  • acquisition-time -> Drawing (Satellite) acquisition dates in a timeline
  • splot -> Lightweight plotting for geospatial analysis in PySAL
  • prettymaps -> A small set of Python functions to draw pretty maps from OpenStreetMap data
  • Tools to Design or Visualize Architecture of Neural Network
  • AstronomicAL -> An interactive dashboard for visualisation, integration and classification of data using Active Learning
  • pyodi -> A simple tool for explore your object detection dataset
  • Interactive-TSNE -> a tool that provides a way to visually view a PyTorch model's feature representation for better embedding space interpretability
  • fastgradio -> Build fast gradio demos of fastai learners
  • pysheds -> Simple and fast watershed delineation in python
  • mapboxgl-jupyter -> Use Mapbox GL JS to visualize data in a Python Jupyter notebook
  • cartoframes -> integrate CARTO maps, analysis, and data services into data science workflows
  • datashader -> create meaningful representations of large datasets quickly and flexibly. Read the Maxar blog post Creating Visual Narratives from Geospatial Data Using Open-Source Technology
  • Kaleido -> Fast static image export for web-based visualization libraries with zero dependencies
  • Embedding Projector in Wandb -> allows users to plot multi-dimensional embeddings on a 2D plane using common dimension reduction algorithms like PCA, UMAP, and t-SNE
  • PlotNeuralNet -> Latex code for making neural networks diagrams
  • Damage Assessment Visualizer -> leverages satellite imagery from a disaster region to visualize conditions of buildings and structures before and after a disaster
  • NN-SVG -> is a tool for creating Neural Network (NN) architecture drawings parametrically rather than manually
  • bbox-visualizer -> Make drawing and labeling bounding boxes easy as cake
  • jupyter-bbox-widget -> A Jupyter widget for annotating images with bounding boxes
  • EOmaps -> A library to create interactive maps of geographical datasets
  • H3-Pandas -> Integrates H3 with GeoPandas and Pandas
  • gmplot -> a matplotlib-like interface to render all the data you'd like on top of Google Maps
  • NPYViewer -> a simple GUI tool that provides multiple ways to view .npy files containing 2D NumPy Arrays
  • pyGEOVis -> Visualize geo-tiff/json based on folium
  • bokeh-tiler -> Tile large geospatial images for use in Bokeh. Read Serving up SpaceNet Imagery for Bokeh
  • torchshow -> Visualize PyTorch tensor in one-line of code
  • pixels -> Mapping and charting pixels from remote sensing Earth observation data with JavaScript
  • MulimgViewer -> a multi-image viewer that can open multiple images in one interface
  • cnn-explainer -> Learning Convolutional Neural Networks with Interactive Visualization
  • Overlay-GeoTiff-Raster-with-nodata-On-Interactive-Map
  • shapefile2gif -> Given a shapefile with time-annotated vector objects (e.g., building footprints + construction year), this script will automatically create an animated GIF illustrating the dynamics for a user-specified period of time
  • insat3d_imagen -> Processes INSAT HDF file and generates satellite images
  • pygieons -> A simple package to visualize and keep track of GIS and Earth Observation libraries in Python

Algorithms

  • WaterDetect -> an end-to-end algorithm to generate open water cover mask, specially conceived for L2A Sentinel 2 imagery. It can also be used for Landsat 8 images and for other multispectral clustering/segmentation tasks.
  • GatorSense Hyperspectral Image Analysis Toolkit -> This repo contains algorithms for Anomaly Detectors, Classifiers, Dimensionality Reduction, Endmember Extraction, Signature Detectors, Spectral Indices
  • detectree -> Tree detection from aerial imagery
  • pylandstats -> compute landscape metrics
  • dg-calibration -> Coefficients and functions for calibrating DigitalGlobe imagery
  • python-fmask -> Implementation in Python of the cloud and shadow algorithms known collectively as Fmask
  • pyshepseg -> Python implementation of image segmentation algorithm of Shepherd et al (2019) Operational Large-Scale Segmentation of Imagery Based on Iterative Elimination.
  • Shadow-Detection-Algorithm-for-Aerial-and-Satellite-Images -> shadow detection and correction algorithm
  • faiss -> A library for efficient similarity search and clustering of dense vectors, e.g. image embeddings
  • awesome-spectral-indices -> A ready-to-use curated list of Spectral Indices for Remote Sensing applications
  • urban-footprinter -> A convolution-based approach to detect urban extents from raster datasets
  • ocean_color -> Tools and algorithms for drone and satellite based ocean color science
  • poliastro -> pure Python library for interactive Astrodynamics and Orbital Mechanics, with a focus on ease of use, speed, and quick visualization
  • acolite -> generic atmospheric correction module
  • pmapper -> a super-resolution and deconvolution toolkit for python. PMAP stands for Poisson Maximum A-Posteriori, a highly flexible and adaptable algorithm for these problems
  • pylandtemp -> Algorithms for computing global land surface temperature and emissivity from NASA's Landsat satellite images with Python
  • sarsen -> Algorithms and utilities for Synthetic Aperture Radar (SAR) sensors
  • sun-position -> code for computing sun position
  • simple_ortho -> Fast and simple orthorectification of images with known DEM and camera model
  • imageResolution -> Simple spatial resolution calculator for nadir & oblique aerial imagery
  • Spectral-Clustering -> normalized and unnormalized spectral clustering algorithms
  • Fogpy -> nowcasting of fog and low stratus clouds
  • orthorectification -> Orthorectification in Python. Note that all of this functionality already exists in libraries like GDAL and others. The goal of this codebase was to present and deep dive into these subroutines
  • Flood-Severity-Estimation -> estimate the height of the water in geo-referenced photos that depict floods using DEMs from JAXA
  • coastline-extraction -> Methods to identify and extract coastline from remote sensed data
  • Near real-time shadow detection and removal in remote sensing imagery application
  • image-registration -> using Point Feature Detection, Normalized DLT, RANSAC & Image Warping
  • pyTSEB -> A python Two Source Energy Balance model for estimation of evapotranspiration with remote sensing data
  • libpredict -> satellite orbit prediction library
  • GOTCHA -> Command line implementation of the GOTCHA stereo matching algorithm
  • SREM -> A Simplified and Robust Surface Reflectance Estimation Method for Satellite Imagery
  • kaizen -> A library to map match and help tackle the problem of overlapping/intersecting road and building footprint that arises in the process of map making

GDAL & Rasterio

So important this pair gets their own section. GDAL is THE command line tool for reading and writing raster and vector geospatial data formats. If you are using python you will probably want to use Rasterio, which provides a pythonic wrapper for GDAL
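A minimal Rasterio sketch is below: read two bands, compute a simple index and write a new GeoTIFF that preserves the georeferencing; the file names and band order are assumptions that depend on your sensor.

```python
# Sketch: read bands with Rasterio, compute NDVI and write a georeferenced output.
# "scene.tif", "ndvi.tif" and the red/NIR band indices are placeholders.
import rasterio

with rasterio.open("scene.tif") as src:
    red = src.read(3).astype("float32")  # band order depends on the sensor/product
    nir = src.read(4).astype("float32")
    profile = src.profile

ndvi = (nir - red) / (nir + red + 1e-6)

profile.update(count=1, dtype="float32")
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)
```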

  • GDAL and on twitter
  • GDAL is a dependency of Rasterio and can be difficult to build and install. I recommend using conda, brew (on OSX) or docker in these situations
  • GDAL docker quickstart: `docker pull osgeo/gdal` then `docker run --rm -v $(pwd):/data/ osgeo/gdal gdalinfo /data/cog.tiff`
  • Even Rouault maintains GDAL, please consider sponsoring him
  • Rasterio -> reads and writes GeoTIFF and other raster formats and provides a Python API based on Numpy N-dimensional arrays and GeoJSON. There are a variety of plugins that extend Rasterio functionality.
  • rio-cogeo -> Cloud Optimized GeoTIFF (COG) creation and validation plugin for Rasterio.
  • rioxarray -> geospatial xarray extension powered by rasterio
  • aws-lambda-docker-rasterio -> AWS Lambda Container Image with Python Rasterio for querying Cloud Optimised GeoTiffs. See this presentation
  • godal -> golang wrapper for GDAL
  • Write rasterio to xarray
  • Loam: A Client-Side GDAL Wrapper for Javascript
  • Short list of useful GDAL commands while working in data science for remote sensing
  • gdal-segment -> implements various segmentation algorithms over raster images
  • aws-gdal-robot -> A proof of concept implementation of running GDAL based jobs using AWS S3/Lambda/Batch
  • gdal2tiles -> A python library for generating map tiles based on gdal2tiles.py from GDAL project
  • gdal3.js -> Convert raster and vector geospatial data to various formats and coordinate systems entirely in the browser

Cloud Optimised GeoTiff (COG)

A Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF that supports HTTP range requests, enabling downloading of specific tiles rather than the full file. COGs generally work normally in GIS software such as QGIS, but are larger than regular GeoTIFFs
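The practical benefit is easy to demonstrate with Rasterio, which uses GDAL's HTTP range requests under the hood: the sketch below reads a single 512x512 window from a remote COG without downloading the whole file (the URL is a placeholder).

```python
# Sketch: read one window of a remote COG via HTTP range requests.
# The URL is a placeholder for any publicly accessible COG.
import rasterio
from rasterio.windows import Window

url = "https://example.com/path/to/cog.tif"

with rasterio.open(url) as src:
    window = Window(col_off=0, row_off=0, width=512, height=512)
    chip = src.read(1, window=window)

print(chip.shape)  # (512, 512)
```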

SpatioTemporal Asset Catalog specification (STAC)

The STAC specification provides a common metadata specification, API, and catalog format to describe geospatial assets, so they can be more easily indexed and discovered.

OpenStreetMap

OpenStreetMap (OSM) is a map of the world, created by people like you and free to use under an open license. Quite a few publications use OSM data for annotations & ground truth. Note that the data is created by volunteers and the quality can be variable

QGIS

A popular open source alternative to ArcGIS; a desktop application which can be extended with Python plugins

Parallel processing with Dask

Dask provides advanced parallelism and distributed out-of-core computation with a dask.dataframe module designed to scale pandas.
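A tiny illustration of the pandas-like, out-of-core workflow is below; the CSV glob and column name are placeholders.

```python
# Sketch: lazy, partitioned computation with dask.dataframe.
# "observations-*.csv" and the "cloud_cover" column are placeholders.
import dask.dataframe as dd

df = dd.read_csv("observations-*.csv")           # lazy - nothing is read yet
mean_cloud = df["cloud_cover"].mean().compute()  # triggers parallel execution
print(mean_cloud)
```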

Web apps

Flask is often used to serve up a simple web app based on templated HTML files

Jupyter

The Jupyter Notebook is a web-based interactive computing platform. There are many extensions which make it a powerful environment for analysing satellite imagery

  • jupyterlite -> JupyterLite is a JupyterLab distribution that runs entirely in the browser
  • jupyter_compare_view -> Blend Between Multiple Images
  • folium -> display interactive maps in Jupyter notebooks
  • ipyannotations -> Image annotations in python using jupyter notebooks
  • pigeonXT -> create custom image classification annotators within Jupyter notebooks
  • jupyter-innotater -> Inline data annotator for Jupyter notebooks
  • jupyter-bbox-widget -> A Jupyter widget for annotating images with bounding boxes
  • mapboxgl-jupyter -> Use Mapbox GL JS to visualize data in a Python Jupyter notebook
  • pylabel -> includes an image labeling tool that runs in a Jupyter notebook that can annotate images manually or perform automatic labeling using a pre-trained model
  • jupyterlab-s3-browser -> extension for browsing S3-compatible object storage
  • papermill -> Parameterize, execute, and analyze notebooks
  • pretty-jupyter -> Creates dynamic html report from jupyter notebook

Streamlit

Streamlit is an awesome framework for creating apps in python. Additionally Streamlit will host the apps free of charge. Here I list resources which are EO related. Note that a component is an addon which extends Streamlit's basic functionality
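A minimal EO-flavoured Streamlit app might look like the sketch below (run with `streamlit run app.py`); the prediction shown is a placeholder for a real model call.

```python
# Sketch: a minimal Streamlit app for uploading an image chip and showing a prediction.
# The returned class/confidence is a placeholder - wire in your own model.
import streamlit as st
from PIL import Image

st.title("Scene classifier demo")

uploaded = st.file_uploader("Upload a satellite image chip", type=["png", "jpg", "tif"])
if uploaded is not None:
    image = Image.open(uploaded)
    st.image(image, caption="Input chip")
    st.write({"predicted_class": "forest", "confidence": 0.87})  # dummy output
```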

Julia language

Julia looks and feels a lot like Python, but can be much faster. Julia can call Python, C, and Fortran libraries and is capable of C/Fortran speeds. Julia can be used in the familiar Jupyterlab notebook environment

Movers and shakers on Github

Companies & organisations on Github

For a full list of companies, on and off Github, checkout awesome-geospatial-companies. The following lists companies with interesting Github profiles

Courses

Books

Podcasts

Newsletters

Online communities

Jobs

Sign up for the geospatial-jobs-newsletter, and the Pangeo discourse also lists multiple jobs globally. A list of job portals is below: