/satellite-image-deep-learning

Resources for deep learning with satellite & aerial imagery

Apache License 2.0

Introduction

This document lists resources for performing deep learning (DL) on satellite imagery. To a lesser extent, classical machine learning (ML, e.g. random forests) and classical image processing techniques are also discussed. Note there is a huge volume of academic literature published on these topics, and this repo does not seek to index them all but rather list approachable resources with published code that will benefit both the research and developer communities.

Table of contents

Techniques

This section explores the different deep learning and machine learning (ML) techniques applied to common problems in satellite imagery analysis. Good background reading is Deep learning in remote sensing applications: A meta-analysis and review.

Classification

The classic cats vs dogs image classification task, which in the remote sensing domain is used to assign a label to an image, e.g. 'this is an image of a forest'. The more complex case is applying multiple labels to an image. This approach of image level classification is not to be confused with pixel-level classification, which is called semantic segmentation. In general, aerial images cover large geographical areas that include multiple classes of land, so treating this as a classification problem is less common than using semantic segmentation.
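
As a minimal sketch (not from any particular repo listed here), fine-tuning a pretrained ResNet with torchvision for scene classification might look like the following. The class count, learning rate and random tensors are placeholders; in practice images would come from a DataLoader over labelled chips (e.g. torchvision.datasets.ImageFolder), and older torchvision versions use pretrained=True rather than the weights argument:

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a pretrained ResNet for scene classification (torchvision >= 0.13;
# older versions use models.resnet18(pretrained=True))
num_classes = 10  # e.g. the 10 EuroSAT land cover classes
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Random tensors keep the sketch runnable; replace with a DataLoader of real chips
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

model.train()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```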

Segmentation

Segmentation will assign a class label to each pixel in an image. Segmentation is typically grouped into semantic or instance segmentation. In semantic segmentation objects of the same class are assigned the same label, whilst in instance segmentation each object is assigned a unique label. Read this beginner's guide to segmentation. Single class models are often trained for road or building segmentation, with multi class models for land use/crop type classification. Image annotation can take longer than for classification/object detection since every pixel must be annotated. Note that many articles which refer to 'hyperspectral land classification' are actually describing semantic segmentation.

Semantic segmentation

Almost always performed using U-Net. For multi/hyper-spectral imagery more classical techniques may be used (e.g. k-means).
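
For the classical route, a minimal unsupervised sketch might cluster pixels by their spectral signature with scikit-learn's k-means. The synthetic 4-band array below stands in for imagery you would normally read with rasterio, and the number of clusters is a user choice:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 4-band chip standing in for a multispectral image read with rasterio
image = np.random.rand(4, 128, 128)        # (bands, height, width)
bands, height, width = image.shape

# Treat every pixel as a sample with one feature per band
pixels = image.reshape(bands, -1).T        # (height * width, bands)

# Cluster pixels into k spectral classes; k is a user choice
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
class_map = kmeans.labels_.reshape(height, width)
print(np.unique(class_map))
```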

Semantic segmentation - multiclass classification

Semantic segmentation - buildings, rooftops & solar panels

Semantic segmentation - roads

Semantic segmentation - vegetation & crop boundaries

Semantic segmentation - water & floods

Semantic segmentation - fire & burn areas

Semantic segmentation - glaciers

  • HED-UNet -> a model for simultaneous semantic segmentation and edge detection, examples provided are glacier fronts and building footprints using the Inria Aerial Image Labeling dataset
  • glacier_mapping -> Mapping glaciers in the Hindu Kush Himalaya, Landsat 7 images, Shapefile labels of the glaciers, Unet with dropout

Instance segmentation

In instance segmentation, each individual 'instance' of a segmented area is given a unique label. For detection of very small objects this may be a good approach, but it can struggle to separate individual objects that are closely spaced.
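
As a rough sketch, torchvision's Mask R-CNN (pretrained on COCO, not on aerial imagery) returns a box, label, score and pixel mask per detected instance; in practice you would fine-tune it on chips from your own dataset. The weights="DEFAULT" argument assumes torchvision >= 0.13:

```python
import torch
import torchvision

# Load a Mask R-CNN pretrained on COCO (older torchvision uses pretrained=True)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A dummy RGB tile; in practice this would be a chip cut from an aerial scene
image = torch.rand(3, 512, 512)

with torch.no_grad():
    predictions = model([image])[0]

# Each detected instance gets its own box, label, score and soft pixel mask
boxes = predictions["boxes"]   # (N, 4) bounding boxes
masks = predictions["masks"]   # (N, 1, H, W) per-instance masks
keep = predictions["scores"] > 0.5
print(f"{int(keep.sum())} instances above the 0.5 score threshold")
```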

Object detection

Several different techniques can be used to count the number of objects in an image. The returned data can be an object count (regression), a bounding box around individual objects in an image (typically using Yolo or Faster R-CNN architectures), a pixel mask for each object (instance segmentation), key points for an object (such as wing tips, nose and tail of an aircraft), or simply a classification for a sliding tile over an image. A good introduction to the challenge of performing object detection on aerial imagery is given in this paper. In summary, images are large and objects may comprise only a few pixels, easily confused with random features in the background. For the same reason, object detection datasets are inherently imbalanced, since the area of background typically dominates over the area of the objects to be detected. In general object detection performs well on large objects, and gets increasingly difficult as the objects get smaller & more densely packed. Model accuracy falls off rapidly as image resolution degrades, so it is common for object detection to use very high resolution imagery, e.g. 30cm RGB. A particular characteristic of aerial images is that objects can be oriented in any direction, so using rotated bounding boxes which align with the object can be crucial for extracting metrics such as the length and width of an object.
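
Because scenes are far larger than the input size of most detectors, a common pre-processing step is to slice them into overlapping tiles and later merge detections (e.g. with non-max suppression). A minimal, framework-agnostic sketch:

```python
import numpy as np

def tile_image(image, tile_size=1024, overlap=128):
    """Split a large (H, W, C) array into overlapping tiles for object detection.

    Overlap reduces the chance that an object is cut in half at a tile boundary;
    detections from adjacent tiles are typically merged afterwards with NMS.
    """
    stride = tile_size - overlap
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            tile = image[y:y + tile_size, x:x + tile_size]
            tiles.append(((x, y), tile))  # keep the offset to map detections back
    return tiles

# Example with a synthetic 5000 x 5000 pixel RGB scene
scene = np.zeros((5000, 5000, 3), dtype=np.uint8)
tiles = tile_image(scene)
print(f"{len(tiles)} tiles of up to 1024x1024 pixels")
```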

Object detection enhanced by super resolution

Object detection with rotated bounding boxes

  • OBBDetection -> an oriented object detection library, which is based on MMdetection
  • rotate-yolov3 -> Rotation object detection implemented with yolov3. Also see yolov3-polygon
  • DRBox -> for detection tasks where the objects are orientated arbitrarily, e.g. vehicles, ships and airplanes
  • s2anet -> Official code of the paper 'Align Deep Features for Oriented Object Detection'
  • CFC-Net -> Official implementation of "CFC-Net: A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images"
  • ReDet -> Official code of the paper "ReDet: A Rotation-equivariant Detector for Aerial Object Detection"
  • BBAVectors-Oriented-Object-Detection -> Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors

Object detection - buildings, rooftops & solar panels

Object detection - ships & boats

Object detection - vehicles & trains

Object detection - planes & aircraft

Object detection - animals

  • cownter_strike -> counting cows, located with point-annotations, two models: CSRNet (a density-based method) & LCFCN (a detection-based method)

Counting trees

Oil storage tank detection & oil spills

Oil is stored in tanks at many points between extraction and sale, and the volume of oil in storage is an important economic indicator.

Cloud detection & removal

Generally treated as a semantic segmentation problem.

Change detection & time-series

Monitor water levels, coastlines, size of urban areas, wildfire damage. Note, clouds change often too!

Wealth and economic activity

The goal is to predict economic activity from satellite imagery rather than conducting labour intensive ground surveys

Super-resolution

Super-resolution attempts to enhance the resolution of an imaging system, and can be applied as a pre-processing step to improve the detection of small objects. For an introduction to this topic read this excellent article. Note that super resolution techniques are generally grouped into single image super resolution (SISR) or multi image super resolution (MISR), the latter typically applied to video frames.
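
As an illustrative SISR sketch (loosely SRCNN-style, not a production model), a few convolution layers refine a bicubically upsampled chip; the scale factor and layer sizes below are arbitrary choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNLike(nn.Module):
    """A tiny SRCNN-style network: upsample with bicubic interpolation,
    then let a few conv layers restore high-frequency detail."""

    def __init__(self, scale=4, channels=3):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return self.body(x)

model = SRCNNLike(scale=4)
low_res = torch.rand(1, 3, 64, 64)   # e.g. a 64x64 chip
high_res = model(low_res)            # 256x256 output
print(high_res.shape)
```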

Single image super resolution (SISR)

Multi image super resolution (MISR)

Note that nearly all the MISR publications resulted from the PROBA-V Super Resolution competition

  • deepsum -> Deep neural network for Super-resolution of Unregistered Multitemporal images (ESA PROBA-V challenge)
  • 3DWDSRNet -> code to reproduce Satellite Image Multi-Frame Super Resolution (MISR) Using 3D Wide-Activation Neural Networks
  • RAMS -> Official TensorFlow code for paper Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks
  • TR-MISR -> Transformer-based MISR framework for the PROBA-V super-resolution challenge
  • HighRes-net -> Pytorch implementation of HighRes-net, a neural network for multi-frame super-resolution, trained and tested on the European Space Agency’s Kelvin competition
  • ProbaVref -> Repurposing the Proba-V challenge for reference-aware super resolution
  • The missing ingredient in deep multi-temporal satellite image super-resolution -> Permutation invariance harnesses the power of ensembles in a single model, with repo piunet

Image-to-image translation

Translate images e.g. from SAR to RGB.

GANS

Autoencoders, dimensionality reduction, image embeddings & similarity search

Few/one/zero/low shot learning

This is a class of techniques which attempt to make predictions for classes with few, one or even zero examples provided during training. In zero shot learning (ZSL) the model is assisted by the provision of auxiliary information which typically consists of descriptions/semantic attributes/word embeddings for both the seen and unseen classes at train time (ref). These approaches are particularly relevant to remote sensing, where there may be many examples of common classes, but few or even zero examples for other classes of interest.

Self/semi/un-supervised & contrastive learning

The terms self-supervised, semi-supervised, un-supervised, contrastive learning & SSL describe techniques using un-labelled data. In general, the more classical techniques such as k-means classification or PCA are referred to as unsupervised, whilst newer techniques using CNN feature extraction or autoencoders are referred to as self-supervised. Yann LeCun has described self-supervised/unsupervised learning as the 'base of the cake': If we think of our brain as a cake, then the cake base is unsupervised learning. The machine predicts any part of its input for any observed part, all without the use of labelled data. Supervised learning forms the icing on the cake, and reinforcement learning is the cherry on top.

Active learning

Supervised deep learning techniques typically require a huge number of annotated/labelled examples to provide a training dataset. However labelling at scale takes significant time, expertise and resources. Active learning techniques aim to reduce the total amount of annotation that needs to be performed by selecting the most useful images to label from a large pool of unlabelled examples, thus reducing the time to generate training datasets. These processes may be referred to as Human-in-the-Loop Machine Learning.
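
A simple and common selection strategy is uncertainty sampling: score the unlabelled pool with the current model and send the highest-entropy images for annotation. A minimal sketch with synthetic probabilities:

```python
import numpy as np

def select_for_annotation(probabilities, n_select=100):
    """Rank unlabelled images by predictive entropy and return the most
    uncertain ones, which are then sent to a human annotator.

    probabilities: (n_images, n_classes) softmax outputs from the current model.
    """
    eps = 1e-12
    entropy = -np.sum(probabilities * np.log(probabilities + eps), axis=1)
    return np.argsort(entropy)[::-1][:n_select]

# Example: 1000 unlabelled images scored by a 5-class model
probs = np.random.dirichlet(np.ones(5), size=1000)
most_uncertain = select_for_annotation(probs, n_select=10)
print(most_uncertain)
```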

Mixed data learning

These techniques combine multiple data types, e.g. imagery and text data.

Image Captioning

Pansharpening

Image fusion of low res multispectral with high res pan band.
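
One of the simplest pansharpening methods is the Brovey transform, sketched below with synthetic arrays; real pipelines typically handle resampling, band weighting and radiometric effects more carefully:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Simple Brovey-transform pansharpening.

    ms:  (3, H, W) multispectral bands already resampled to the pan resolution
    pan: (H, W) panchromatic band
    Each band is scaled by the ratio of the pan band to the band sum, injecting
    high resolution spatial detail into the multispectral colours.
    """
    eps = 1e-12
    band_sum = ms.sum(axis=0) + eps
    return ms * (pan / band_sum)

# Synthetic example with 3 bands at pan resolution
ms = np.random.rand(3, 256, 256)
pan = np.random.rand(256, 256)
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)
```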

NDVI - vegetation index
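
NDVI is computed as (NIR - Red) / (NIR + Red). A minimal sketch, using synthetic arrays in place of bands read from a raster (for Sentinel-2, NIR is band 8 and red is band 4):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense healthy vegetation, values near 0 bare soil,
    and negative values water, snow or cloud."""
    nir = nir.astype("float32")
    red = red.astype("float32")
    return (nir - red) / (nir + red + 1e-12)

# In practice read the NIR and red bands with rasterio
nir = np.random.rand(256, 256)
red = np.random.rand(256, 256)
print(ndvi(nir, red).mean())
```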

General image quality

Image registration

Image registration is the process of transforming different sets of data into one coordinate system. Typical use is overlapping images taken at different times or with different cameras.
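
A classical, non-deep-learning approach for estimating a simple translation between two images is phase correlation, available in scikit-image. A minimal sketch with a synthetically shifted image (sign conventions follow skimage):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Reference image and a copy circularly shifted by a known offset
reference = np.random.rand(256, 256)
moving = np.roll(reference, shift=(12, -7), axis=(0, 1))

# Estimate the translation that registers `moving` onto `reference`
shift, error, _ = phase_cross_correlation(reference, moving)
print(shift)   # expect pixel magnitudes of 12 and 7 (sign per skimage's convention)
```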

Multi-sensor/multi-modal fusion

Object tracking

Terrain mapping, Lidar & DEMs

Measure surface contours.

Thermal Infrared

SAR

Neural nets in space

Processing on board a satellite allows less data to be downlinked, e.g. a super-resolution image might be generated from 8 raw images, with only the single enhanced image downlinked. Other applications include cloud detection and collision avoidance.

ML best practice

This section includes tips and ideas I have picked up from other practitioners including ai-fast-track, FraPochetti & the IceVision community

ML metrics

A number of metrics are common to all model types (but can have slightly different meanings in contexts such as object detection), whilst other metrics are very specific to particular classes of model. The correct choice of metric is particularly critical for imbalanced dataset problems, e.g. object detection. A minimal sketch computing several of these metrics follows the list below.

  • TP = true positive, FP = false positive, TN = true negative, FN = false negative
  • Precision is the % of correct positive predictions, calculated as precision = TP/(TP+FP)
  • Recall, or true positive rate (TPR), is the % of true positives captured by the model, calculated as recall = TP/(TP+FN)
  • The F1 score (also called the F-score or the F-measure) is the harmonic mean of precision and recall, calculated as F1 = 2*(precision * recall)/(precision + recall). It conveys the balance between the precision and the recall. Ref
  • The false positive rate (FPR), calculated as FPR = FP/(FP+TN) is often plotted against recall/TPR in an ROC curve which shows how the TPR/FPR tradeoff varies with classification threshold. Lowering the classification threshold returns more true positives, but also more false positives. Note that since TN is not meaningful in object detection (the background is not explicitly enumerated), ROC curves are not appropriate.
  • Precision-vs-recall curves visualise the tradeoff between making false positives and false negatives
  • Accuracy is the most commonly used metric in 'real life' but can be a highly misleading metric for imbalanced data sets.
  • IoU is an object detection specific metric, being the average intersect over union of prediction and ground truth bounding boxes for a given confidence threshold
  • mAP@0.5 is another object detection specific metric, being the mean value of the average precision for each class. @0.5 sets a threshold for how much of the predicted bounding box overlaps the ground truth bounding box, i.e. "minimum 50% overlap"
  • For more comprehensive definitions checkout Object-Detection-Metrics
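
A minimal sketch computing precision, recall, F1 and box IoU from first principles (the counts and boxes below are made-up examples):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from raw true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def iou(box_a, box_b):
    """Intersection over union of two [xmin, ymin, xmax, ymax] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(precision_recall_f1(tp=80, fp=10, fn=20))   # (0.888..., 0.8, 0.842...)
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))        # 0.142...
```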

Datasets

This section contains a short list of datasets relevant to deep learning, particularly those which come up regularly in the literature. For a more comprehensive list of datasets checkout awesome-satellite-imagery-datasets and review the long list of satellite missions with example imagery

Warning: satellite image files can be LARGE, even a small dataset may comprise 50 GB of imagery.

Sentinel

Landsat

Maxar

Planet

UC Merced

EuroSAT

PatternNet

FAIR1M object detection dataset

DOTA object detection dataset

xView object detection dataset

AIRS (Aerial Imagery for Roof Segmentation)

  • https://www.airs-dataset.com
  • Public dataset for roof segmentation from very-high-resolution aerial imagery (7.5cm)
  • AIRS dataset covers almost the full area of Christchurch, the largest city in the South Island of New Zealand.
  • Also on Kaggle

Inria building/not building segmentation dataset

AICrowd building segmentation dataset

  • Dataset released as part of the mapping-challenge
  • 300x300 pixel RGB images with annotations in COCO format
  • Imagery appears to be global but with significant fraction from North America
  • Winning solution published by neptune.ai here, achieved precision 0.943 and recall 0.954 using Unet with Resnet.

Kaggle

Kaggle hosts over 200 satellite image datasets, search results here. The kaggle blog is an interesting read.

Kaggle - Amazon from space - classification challenge

Kaggle - DSTL segmentation challenge

Kaggle - Airbus ship detection Challenge

Kaggle - Shipsnet classification dataset

Kaggle - Ships in Google Earth

Kaggle - Swimming pool and car detection using satellite imagery

Kaggle - Planesnet classification dataset

Kaggle - Draper challenge to place images in order of time

Kaggle - Dubai segmentation

Kaggle - Deepsat classification challenge

Not satellite but airborne imagery. Each sample image patch is size normalized to 28x28 pixels and consists of 4 bands - red, green, blue and near infrared. The training and test labels are one-hot encoded 1x6 vectors. Data is in .mat Matlab format.

  • Sat4 500,000 image patches covering four broad land cover classes - barren land, trees, grassland and a class that consists of all land cover classes other than the above three
  • Sat6 405,000 image patches each of size 28x28 and covering 6 landcover classes - barren land, trees, grassland, roads, buildings and water bodies.
  • Deep Gradient Boosted Learning article

Kaggle - High resolution ship collections 2016 (HRSC2016)

Kaggle - Understanding Clouds from Satellite Images

In this challenge, you will build a model to classify cloud organization patterns from satellite images.

Kaggle - Airbus Aircraft Detection Dataset

Kaggle - Airbus oil storage detection dataset

Kaggle - Satellite images of hurricane damage

Kaggle - Austin Zoning Satellite Images

Kaggle - Statoil/C-CORE Iceberg Classifier Challenge

Kaggle - miscellaneous

SpaceNet

Tensorflow datasets

  • resisc45 -> RESISC45 dataset is a publicly available benchmark for Remote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class.
  • eurosat -> EuroSAT dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes with 27000 labeled and geo-referenced samples.
  • BigEarthNet -> a large-scale Sentinel-2 land use classification dataset, consisting of 590,326 Sentinel-2 image patches. The image patch size on the ground is 1.2 x 1.2 km with variable image size depending on the channel resolution. This is a multi-label dataset with 43 imbalanced labels. Official website includes version of the dataset with Sentinel 1 & 2 chips
  • so2sat -> a dataset consisting of co-registered synthetic aperture radar and multispectral optical image patches acquired by Sentinel 1 & 2

AWS datasets

Microsoft

Google Earth Engine (GEE)

Since there is a whole community around GEE I will not reproduce it here, but only list a few select references. Get started at https://developers.google.com/earth-engine/

Radiant Earth

DEM (digital elevation maps)

  • Shuttle Radar Topography Mission, search online at usgs.gov
  • Copernicus Digital Elevation Model (DEM) on S3 represents the surface of the Earth including buildings, infrastructure and vegetation. Data is provided as Cloud Optimized GeoTIFFs. link

Weather Datasets

Time series & change detection datasets

  • BreizhCrops -> A Time Series Dataset for Crop Type Mapping
  • The SeCo dataset contains image patches from Sentinel-2 tiles captured at different timestamps at each geographical location. Download SeCo here
  • Onera Satellite Change Detection Dataset comprises 24 pairs of multispectral images taken from the Sentinel-2 satellites between 2015 and 2018
  • SYSU-CD -> The dataset contains 20000 pairs of 0.5-m aerial images of size 256×256 taken between the years 2007 and 2014 in Hong Kong

UAV & Drone datasets

Synthetic data

Online platforms for analytics

  • This article discusses some of the available platforms
  • Pangeo -> There is no single software package called “pangeo”; rather, the Pangeo project serves as a coordination point between scientists, software, and computing infrastructure. Includes open source resources for parallel processing using Dask and Xarray. Pangeo recently announced their 2.0 goals: pivoting away from directly operating cloud-based JupyterHubs, and towards education and research
  • Airbus Sandbox -> will provide access to imagery
  • Descartes Labs -> access to EO imagery from a variety of providers via python API
  • DigitalGlobe have a cloud hosted Jupyter notebook platform called GBDX. Cloud hosting means they can guarantee the infrastructure supports their algorithms, and they appear to be close/closer to deploying DL.
  • Planet have a Jupyter notebook platform which can be deployed locally.
  • eurodatacube.com -> data & platform for EO analytics in Jupyter env, paid
  • up42 is a developer platform and marketplace, offering all the building blocks for powerful, scalable geospatial products
  • Microsoft Planetary Computer -> direct Google Earth Engine competitor in the making?
  • eofactory.ai -> supports multi public and private data sources that can be used to analyse and extract information
  • mapflow.ai -> imagery analysis platform with its instant access to the major satellite imagery providers, models for extract building footprints etc & QGIS plugin
  • openeo by ESA data platform
  • Adam platform -> the Advanced geospatial Data Management platform (ADAM) is a tool to access a large variety and volume of global environmental data

Free online compute

A GPU is required for training deep learning models (but not necessarily for inferencing), and this section lists a couple of free Jupyter environments with GPU available. There is a good overview of online Jupyter development environments on the fastai site. I personally use Colab Pro with data hosted on Google Drive, or Sagemaker if I have very long running training jobs.

Google Colab

  • Colaboratory notebooks with GPU as a backend, free for up to 12 hours at a time. Note that the GPU may be shared with other users, so if you aren't getting good performance try reloading.
  • Also a pro tier for $10 a month -> https://colab.research.google.com/signup
  • Tensorflow, pytorch & fastai available but you may need to update them
  • Colab Alive is a chrome extension that keeps Colab notebooks alive.
  • colab-ssh -> lets you ssh to a colab instance like it’s an EC2 machine and install packages that require full linux functionality

Kaggle - also Google!

  • Free to use
  • GPU Kernels - may run for 1 hour
  • Tensorflow, pytorch & fastai available but you may need to update them
  • Advantage that many datasets are already available

AWS SageMaker Studio Lab

Others

State of the art engineering

  • Compute and data storage are moving to the cloud. Read how Planet and Airbus use the cloud
  • Google Earth Engine and Microsoft Planetary Computer are democratising access to massive compute platforms
  • No-code platforms and auto-ml are making ML techniques more accessible than ever
  • Custom hardware is being developed for rapid training and inferencing with deep learning models, both in the datacenter and at the edge
  • Supervised ML methods typically require large annotated datasets, but approaches such as self-supervised and active learning are offering alternative pathways
  • Traditional data formats aren't designed for processing on the cloud, so new standards are evolving such as COGs and STAC
  • Computer vision traditionally delivered high performance image processing on a CPU by using compiled languages like C++, as used by OpenCV for example. The advent of GPUs is changing the paradigm, with alternatives optimised for GPU being created, such as Kornia
  • Whilst the combo of python and keras/tensorflow/pytorch are currently preeminent, new python libraries such as Jax and alternative languages such as Julia are showing serious promise

Cloud providers

An overview of the most relevant services provided by AWS, Google and Microsoft. Also consider one of the many smaller but more specialised platforms such as spell.ml or paperspace.

AWS

Google cloud

  • For storage use Cloud Storage (AWS S3 equivalent)
  • For data warehousing use BigQuery (AWS Redshift equivalent). Visualize massive spatial datasets directly in BigQuery using CARTO
  • For model training use Vertex (AWS Sagemaker equivalent)
  • For containerised apps use Cloud Run (AWS App Runner equivalent but can scale to zero)

Microsoft Azure

  • Azure Orbital -> Satellite ground station and scheduling services for fast downlinking of data

Deploying models

This section discusses how to get a trained machine learning, and specifically deep learning, model into production. For an overview on serving deep learning models checkout Practical-Deep-Learning-on-the-Cloud. There are many options if you are happy to dedicate a server, although you may want a GPU for batch processing. For serverless use AWS Lambda.

Rest API on dedicated server

A common approach to serving up deep learning model inference code is to wrap it in a REST API. The API can be implemented in python (flask or FastAPI), and hosted on a dedicated server, e.g. an EC2 instance. Note that making this a scalable solution will require significant experience.
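
A minimal FastAPI sketch is below; the prediction function is a placeholder where a loaded model would run, and the endpoint name and returned fields are arbitrary:

```python
# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()

def predict(image: np.ndarray) -> dict:
    # Placeholder for real model inference, e.g. a loaded pytorch model
    return {"class": "forest", "confidence": 0.97}

@app.post("/predict")
async def predict_endpoint(file: UploadFile = File(...)):
    contents = await file.read()
    image = np.array(Image.open(io.BytesIO(contents)))
    return predict(image)
```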

Framework/provider specific model serving options

If you are happy to live with some lock-in, these are good options:

NVIDIA Triton server

Models in the browser

The model is run in the browser itself on live images, ensuring processing is always with the latest model available and removing the requirement for dedicated server side inferencing

Model optimisation for deployment

The general approaches are outlined in this article from NVIDIA which discusses fine tuning a model pre-trained on synthetic data (Rareplanes) with 10% real data, then pruning the model to reduce its size, before quantizing the model to improve inference speed. Training notebook here

Model monitoring

Once your model is deployed you will want to monitor for data errors, broken pipelines, and model performance degradation/drift ref

Image formats, data management and catalogues

Cloud Optimised GeoTiff (COG)

A Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF that supports HTTP range requests, enabling downloading of specific tiles rather than the full file. COGs generally work normally in GIS software such as QGIS, but are larger than regular GeoTIFFs.
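
A minimal rasterio sketch reading a single window from a (hypothetical) remotely hosted COG; under the hood GDAL fetches only the byte ranges needed for that window:

```python
import rasterio
from rasterio.windows import Window

# Hypothetical URL of a COG on cloud storage
url = "https://example.com/path/to/image_cog.tif"

with rasterio.open(url) as src:
    # Read just a 512x512 pixel window rather than the whole file
    window = Window(col_off=0, row_off=0, width=512, height=512)
    chip = src.read(window=window)          # shape (bands, 512, 512)
    print(src.profile["crs"], chip.shape)
```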

SpatioTemporal Asset Catalog specification (STAC)

The STAC specification provides a common metadata specification, API, and catalog format to describe geospatial assets, so they can be more easily indexed and discovered.
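
A minimal sketch searching a public STAC API with pystac-client; the Earth Search endpoint, collection name and query fields below are assumptions and may differ for other catalogs:

```python
from pystac_client import Client

# Search a public STAC API for Sentinel-2 scenes over a bounding box
catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-0.2, 51.4, 0.1, 51.6],          # lon/lat around London
    datetime="2023-06-01/2023-06-30",
    query={"eo:cloud_cover": {"lt": 10}},  # only scenes with < 10% cloud
)
for item in search.items():
    print(item.id, item.properties["eo:cloud_cover"])
```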

Image annotation

For supervised machine learning, you will require annotated images. For example if you are performing object detection you will need to annotate images with bounding boxes. Check that your annotation tool of choice supports large image (likely geotiff) files, as not all will. Note that GeoJSON is widely used by remote sensing researchers but this annotation format is not commonly supported in general computer vision frameworks, and in practice you may have to convert the annotation format to use the data with your chosen framework. There are both closed and open source tools for creating and converting annotation formats. Some of these tools are simply for performing annotation, whilst others add features such as dataset management and versioning. Note that self-supervised and active learning approaches might circumvent the need to perform a large scale annotation exercise.
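
As a sketch of that conversion step, the snippet below maps GeoJSON polygon labels into pixel-space bounding boxes for a GeoTIFF using rasterio; the filenames and the 'class' property are hypothetical, and it assumes the labels share the raster's CRS:

```python
import json

import rasterio
from shapely.geometry import shape

# Hypothetical filenames; the GeoJSON must be in the same CRS as the raster
with rasterio.open("scene.tif") as src, open("labels.geojson") as f:
    labels = json.load(f)
    for feature in labels["features"]:
        geom = shape(feature["geometry"])
        minx, miny, maxx, maxy = geom.bounds
        # Convert geographic corner coordinates to (row, col) pixel indices
        row_min, col_min = src.index(minx, maxy)   # top-left corner
        row_max, col_max = src.index(maxx, miny)   # bottom-right corner
        # 'class' is a hypothetical property name on each feature
        print(feature["properties"].get("class"), (col_min, row_min, col_max, row_max))
```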

General purpose annotation tools

  • awesome-data-labeling -> long list of annotation tools
  • labelImg is the classic desktop tool, limited to bounding boxes for object detection. Also checkout roLabelImg which supports ROTATED rectangle regions, as often occurs in aerial imagery.
  • Labelme is a simple desktop app for polygonal annotation, but note it outputs annotations in a custom LabelMe JSON format which you will need to convert. Read Labelme Image Annotation for Geotiffs
  • Label Studio is a multi-type data labeling and annotation tool with standardized output format, syncing to buckets, and supports importing pre-annotations (create with a model). Checkout label-studio-converter for converting Label Studio annotations into common dataset formats
  • CVAT supports object detection, segmentation and classification via a local web app. There is an open issue to support large TIFF files. This article on Roboflow gives a good intro to CVAT.
  • Create your own annotation tool using Bokeh Holoviews
  • VoTT -> an electron app for building end to end Object Detection Models from Images and Videos, by Microsoft
  • Deeplabel is a cross-platform tool for annotating images with labelled bounding boxes. Deeplabel also supports running inference using state-of-the-art object detection models like Faster-RCNN and YOLOv4. With support out-of-the-box for CUDA, you can quickly label an entire dataset using an existing model.
  • Alturos.ImageAnnotation is a collaborative tool for labeling image data on S3 for yolo
  • rectlabel is a desktop app for MacOS to annotate images for bounding box object detection and segmentation, paid and free (rectlabel-lite) versions
  • pigeonXT can be used to create custom image classification annotators within Jupyter notebooks
  • ipyannotations -> Image annotations in python using jupyter notebooks
  • Label-Detect -> a graphical image annotation tool which also lets the user train and test on large satellite images; a fork of the popular labelImg tool
  • Swipe-Labeler -> Swipe Labeler is a Graphical User Interface based tool that allows rapid labeling of image data
  • SuperAnnotate can be run locally or used via a cloud service
  • dash_doodler -> A web application built with plotly/dash for image segmentation with minimal supervision
  • remo -> A webapp and Python library that lets you explore and control your image datasets
  • Roboflow can be used to convert between annotation formats & manage datasets, as well as train and deploy custom models. Free tier quite useful
  • supervise.ly is one of the more fully featured platforms, decent free tier
  • AWS supports image annotation via the Rekognition Custom Labels console
  • diffgram describes itself as a complete training data platform for machine learning delivered as a single application. Open source or available as hosted service, supports streaming data to pytorch & tensorflow
  • hasty.ai -> supports model assisted annotation & inferencing
  • TensorFlow Object Detection API provides a handy utility for object annotation within Google Colab notebooks. See usage here
  • coco-annotator
  • pylabel -> Python library for computer vision labeling tasks. The core functionality is to translate bounding box annotations between different formats, for example from coco to yolo. PyLabel also includes an image labeling tool that runs in a Jupyter notebook that can annotate images manually or perform automatic labeling using a pre-trained model

Annotation tools with GEO features

Also check the section Image handling, manipulation & dataset creation

  • GroundWork is designed for annotating and labeling geospatial data like satellite imagery, from Azavea
  • labelbox.com -> free tier is quite generous, supports annotating Geotiffs & returning annotations with geospatial coordinates. Watch this webcast
  • iris -> Tool for manual image segmentation and classification of satellite imagery
  • If you are considering building an in house annotation platform read this article. Used PostGis database, GeoJson format and GIS standard in a stateless architecture

Annotation formats

Note there are many annotation formats, although PASCAL VOC and coco-json are the most commonly used. A sketch converting between two of these formats follows the list below.

  • PASCAL VOC format: XML files in the format used by ImageNet
  • coco-json format: JSON in the format used by the 2015 COCO dataset
  • YOLO Darknet TXT format: contains one text file per image, used by YOLO
  • Tensorflow TFRecord: a proprietary binary file format used by the Tensorflow Object Detection API
  • Many more formats listed here
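
As a sketch of such a conversion, the functions below translate between PASCAL VOC corner coordinates and normalised YOLO Darknet boxes (pure Python, no dependencies):

```python
def voc_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a PASCAL VOC style box (absolute corner pixels) to YOLO Darknet
    format (normalised centre x, centre y, width, height)."""
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h

def yolo_to_voc(x_c, y_c, w, h, img_w, img_h):
    """Convert a YOLO box back to absolute VOC corner coordinates."""
    xmin = (x_c - w / 2) * img_w
    ymin = (y_c - h / 2) * img_h
    xmax = (x_c + w / 2) * img_w
    ymax = (y_c + h / 2) * img_h
    return xmin, ymin, xmax, ymax

print(voc_to_yolo(100, 200, 300, 400, img_w=1000, img_h=1000))  # (0.2, 0.3, 0.2, 0.2)
```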

Paid software

Many of these companies & products predate the open source software boom, and offer functionality which can be found in open source alternatives. However it is important to consider the licensing and support aspects before adopting an open source stack.

  • ENVI -> image processing and analysis
  • ERDAS IMAGINE -> remote sensing, photogrammetry, LiDAR analysis, basic vector analysis, and radar processing into a single product
  • Spacemetric Keystone -> transform unprocessed sensor data into quality geospatial imagery ready for analysis
  • microimages TNTgis -> advanced GIS, image processing, and geospatial analysis at an affordable price

ArcGIS

Arguably the most significant paid software for working with maps and geographic information

Open source software

A note on licensing: The two general types of licenses for open source are copyleft and permissive. Copyleft requires that subsequent derived software products also carry the license forward, e.g. the GNU Public License (GNU GPLv3). For permissive, options to modify and use the code as one please are more open, e.g. MIT & Apache 2. Checkout choosealicense.com/

QGIS

A popular open source alternative to ArcGIS, a desktop application written in python and extended with plugins

GDAL & Rasterio

So important this pair gets their own section. GDAL is THE command line tool for reading and writing raster and vector geospatial data formats. If you are using python you will probably want to use Rasterio which provides a pythonic wrapper for GDAL.

General utilities

  • PyShp -> The Python Shapefile Library (PyShp) reads and writes ESRI Shapefiles in pure Python
  • s2p -> a Python library and command line tool that implements a stereo pipeline which produces elevation models from images taken by high resolution optical satellites such as Pléiades, WorldView, QuickBird, Spot or Ikonos
  • EarthPy -> A set of helper functions to make working with spatial data in open source tools easier. Read Exploratory Data Analysis (EDA) on Satellite Imagery Using EarthPy
  • pygeometa -> provides a lightweight and Pythonic approach for users to easily create geospatial metadata in standards-based formats using simple configuration files
  • pesto -> PESTO is designed to ease the process of packaging a Python algorithm as a processing web service into a docker image. It contains shell tools to generate all the boiler plate to build an OpenAPI processing web service compliant with the Geoprocessing-API. By Airbus Defence And Space
  • GEOS -> Google Earth Overlay Server (GEOS) is a python-based server for creating Google Earth overlays of tiled maps. You can also display maps in the web browser, measure distances and print maps as high-quality PDF’s.
  • GeoDjango intends to be a world-class geographic Web framework. Its goal is to make it as easy as possible to build GIS Web applications and harness the power of spatially enabled data. Some features of GDAL are supported.
  • rasterstats -> summarize geospatial raster datasets based on vector geometries
  • turfpy -> a Python library for performing geospatial data analysis which reimplements turf.js
  • image-similarity-measures -> Implementation of eight evaluation metrics to access the similarity between two images. Blog post here
  • rsgislib -> Remote Sensing and GIS Software Library; python module tools for processing spatial and image data.
  • eo-learn is a collection of open source Python packages that have been developed to seamlessly access and process spatio-temporal image sequences acquired by any satellite fleet in a timely and automatic manner
  • RStoolbox: Tools for Remote Sensing Data Analysis in R
  • nd -> Framework for the analysis of n-dimensional, multivariate Earth Observation data, built on xarray
  • reverse-geocoder -> a fast, offline reverse geocoder in Python
  • MuseoToolBox -> a python library to simplify the use of raster/vector, especially for machine learning and remote sensing
  • py6s -> an interface to the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) atmospheric Radiative Transfer Model
  • timvt -> PostGIS based Vector Tile server built on top of the modern and fast FastAPI framework
  • titiler -> A dynamic Web Map tile server using FastAPI
  • BRAILS -> an AI-based pipeline for city-scale building information modelling (BIM)
  • color-thief-py -> Grabs the dominant color or a representative color palette from an image

Low level numerical & data formats

  • xarray -> N-D labeled arrays and datasets. Read Handling multi-temporal satellite images with Xarray. Checkout xarray_leaflet for tiled map plotting
  • xarray-spatial -> Fast, Accurate Python library for Raster Operations. Implements algorithms using Numba and Dask, free of GDAL
  • xarray-beam -> Distributed Xarray with Apache Beam by Google
  • Geowombat -> geo-utilities applied to air- and space-borne imagery, uses Rasterio, Xarray and Dask for I/O and distributed computing with named coordinates
  • NumpyTiles -> a specification for providing multiband full-bit depth raster data in the browser
  • Zarr -> Zarr is a format for the storage of chunked, compressed, N-dimensional arrays. Zarr depends on NumPy

Image processing, handling, manipulation & dataset creation

  • Pillow is the Python Imaging Library -> this will be your go-to package for image manipulation in python
  • opencv-python is pre-built CPU-only OpenCV packages for Python
  • kornia is a differentiable computer vision library for PyTorch, like openCV but on the GPU. Perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection that operate directly on tensors.
  • tifffile -> Read and write TIFF files
  • xtiff -> A small Python 3 library for writing multi-channel TIFF stacks
  • geotiff -> A noGDAL tool for reading and writing geotiff files
  • image_slicer -> Split images into tiles. Join the tiles back together.
  • tiler -> split images into tiles and merge tiles into a large image
  • geolabel-maker -> combine satellite or aerial imagery with vector spatial data to create your own ground-truth dataset in the COCO format for deep-learning models
  • felicette -> Satellite imagery for dummies. Generate JPEG earth imagery from coordinates/location name with publicly available satellite data.
  • imagehash -> Image hashes tell whether two images look nearly identical.
  • xbatcher -> Xbatcher is a small library for iterating xarray DataArrays in batches. The goal is to make it easy to feed xarray datasets to machine learning libraries such as Keras.
  • fake-geo-images -> A module to programmatically create geotiff images which can be used for unit tests
  • imagededup -> Finding duplicate images made easy! Uses perceptual hashing
  • rmstripes -> Remove stripes from images with a combined wavelet/FFT approach
  • activeloopai Hub -> The fastest way to store, access & manage datasets with version-control for PyTorch/TensorFlow. Works locally or on any cloud. Scalable data pipelines.
  • sewar -> All image quality metrics you need in one package
  • fiftyone -> open-source tool for building high quality datasets and computer vision models. Visualise labels, evaluate model predictions, explore scenarios of interest, identify failure modes, find annotation mistakes, and much more!
  • GeoTagged_ImageChip -> A simple script to create geo tagged image chips from high resolution RS images for training deep learning models such as Unet.
  • Satellite imagery label tool -> provides an easy way to collect a random sample of labels over a given scene of satellite imagery
  • DeepSatData -> Automatically create machine learning datasets from satellite images
  • image-reconstructor-patches -> Reconstruct Image from Patches with a Variable Stride
  • geotiff-crop-dataset -> A Pytorch Dataloader for tif image files that dynamically crops the image
  • Missing-Pixel-Filler -> given images that may contain missing data regions (like satellite imagery with swath gaps), returns these images with the regions filled
  • deepsentinel-osm -> A repository to generate land cover labels from OpenStreetMap
  • img2dataset -> Easily turn large sets of image urls to an image dataset. Can download, resize and package 100M urls in 20h on one machine
  • satproc -> Python library and CLI tools for processing geospatial imagery for ML
  • Sliding Window -> break large images into a series of smaller chunks
  • color_range_filter -> a script that allows us to find range of colors in images using openCV, and then convert them into geo vectors
  • eo4ai -> easy-to-use tools for preprocessing datasets for image segmentation tasks in Earth Observation
  • Train-Test-Validation-Dataset-Generation -> app to crop images and create small patches of a large image e.g. Satellite/Aerial Images, which will then be used for training and testing Deep Learning models specifically semantic segmentation models
  • rasterix -> a cross-platform utility built around the GDAL library and the Qt framework designed to process geospatial raster data
  • jimutmap -> get enormous amount of high resolution satellite images from apple / google maps quickly through multi-threading
  • Export thumbnails from Earth Engine
  • datumaro -> Dataset Management Framework, a Python library and a CLI tool to build, analyze and manage Computer Vision datasets
  • patchify -> A library that helps you split image into small, overlappable patches, and merge patches into original image
  • ohsome2label -> Historical OpenStreetMap (OSM) Objects to Machine Learning Training Samples
  • Label Maker -> downloads OpenStreetMap QA Tile information and satellite imagery tiles and saves them as an .npz file for use in machine learning training. This should be used instead of the deprecated skynet-data
  • sentinelPot -> a python package to preprocess sentinel-1&2 imagery
  • ImageAnalysis -> Aerial imagery analysis, processing, and presentation scripts.

Image augmentation packages

Image augmentation is a technique used to expand a training dataset in order to improve the ability of the model to generalise

  • AugLy -> A data augmentations library for audio, image, text, and video. By Facebook
  • albumentations -> Fast image augmentation library and an easy-to-use wrapper around other libraries (see the sketch after this list)
  • FoHIS -> Towards Simulating Foggy and Hazy Images and Evaluating their Authenticity
  • Kornia provides augmentation on the GPU
  • toolbox by ming71 -> various cv tools, such as label tools, data augmentation, label conversion, etc.
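
A minimal albumentations sketch for aerial chips is below; applying the same spatial transforms to image and mask keeps segmentation labels aligned, and the transform choices are illustrative:

```python
import albumentations as A
import numpy as np

# A typical augmentation pipeline for aerial image chips
transform = A.Compose([
    A.RandomRotate90(p=0.5),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

# Synthetic chip and binary mask standing in for real training data
image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
mask = np.random.randint(0, 2, size=(256, 256), dtype=np.uint8)

augmented = transform(image=image, mask=mask)
aug_image, aug_mask = augmented["image"], augmented["mask"]
print(aug_image.shape, aug_mask.shape)
```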

Model tracking, versioning, specification & compilation

  • dvc -> a git extension to keep track of changes in data, source code, and ML models together
  • Weights and Biases -> keep track of your ML projects. Log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues
  • geo-ml-model-catalog -> provides a common metadata definition for ML models that operate on geospatial data
  • hummingbird -> a library for compiling trained traditional ML models into tensor computations, e.g. scikit learn model to pytorch for fast inference on a GPU

Deep learning packages, frameworks & projects

  • rastervision -> An open source Python framework for building computer vision models on aerial, satellite, and other large imagery sets
  • torchrs -> PyTorch implementation of popular datasets and models in remote sensing tasks
  • enhance -> Enhance PyTorch vision for semantic segmentation, multi-channel images and TIF files
  • torchgeo -> popular datasets and model architectures for working with geospatial data
  • DeepHyperX -> A Python/pytorch tool to perform deep learning experiments on various hyperspectral datasets
  • DELTA -> Deep Earth Learning, Tools, and Analysis, by NASA is a framework for deep learning on satellite imagery, based on Tensorflow & using MLflow for tracking experiments
  • Lightly is a computer vision framework for training deep learning models using self-supervised learning
  • Icevision offers a curated collection of hundreds of high-quality pre-trained models within an easy to use framework
  • pytorch_eo -> aims to make Deep Learning for Earth Observation data easy and accessible to real-world cases and research alike
  • NGVEO -> applying convolutional neural networks (CNN) to Earth Observation (EO) data from Sentinel 1 and 2 using python and PyTorch
  • chip-n-scale-queue-arranger by developmentseed -> an orchestration pipeline for running machine learning inference at scale. Supports fastai models
  • http://spaceml.org/ -> A Machine Learning toolbox and developer community building the next generation AI applications for space science and exploration
  • TorchSat is an open-source deep learning framework for satellite imagery analysis based on PyTorch (no activity since June 2020)
  • DeepNetsForEO -> Uses SegNET for working on remote sensing images using deep learning (no activity since 2019)
  • RoboSat -> semantic segmentation on aerial and satellite imagery. Extracts features such as: buildings, parking lots, roads, water, clouds (no longer maintained)
  • DeepOSM -> Train a deep learning net with OpenStreetMap features and satellite imagery (no activity since 2017)
  • mapwith.ai -> AI assisted mapping of roads with OpenStreetMap. Part of Open-Mapping-At-Facebook
  • sahi -> A vision library for performing sliced inference on large images/small objects
  • terragpu -> Python library to process and classify remote sensing imagery by means of GPUs and AI/ML

Data discovery and ingestion

  • landsat_ingestor -> Scripts and other artifacts for landsat data ingestion into Amazon public hosting
  • satpy -> a python library for reading and manipulating meteorological remote sensing data and writing it to various image and data file formats
  • GIBS-Downloader -> a command-line tool which facilitates the downloading of NASA satellite imagery and offers different functionalities in order to prepare the images for training in a machine learning pipeline
  • eodag -> Earth Observation Data Access Gateway
  • pylandsat -> Search, download, and preprocess Landsat imagery
  • sentinelsat -> Search and download Copernicus Sentinel satellite images
  • landsatxplore -> Search and download Landsat scenes from EarthExplorer
  • OpenSarToolkit -> High-level functionality for the inventory, download and pre-processing of Sentinel-1 data in the python language
  • lsru -> Query and Order Landsat Surface Reflectance data via ESPA

OpenStreetMap

OpenStreetMap (OSM) is a map of the world, created by people like you and free to use under an open license. Quite a few publications use OSM data for annotations & ground truth. Note that the data is created by volunteers and the quality can be variable

Graphing and visualisation

  • hvplot -> A high-level plotting API for the PyData ecosystem built on HoloViews. Allows overlaying data on map tiles, see Exploring USGS Terrain Data in COG format using hvPlot
  • Pyviz examples include several interesting geospatial visualisations
  • napari -> napari is a fast, interactive, multi-dimensional image viewer for Python. It’s designed for browsing, annotating, and analyzing large multi-dimensional images. By integrating closely with the Python ecosystem, napari can be easily coupled to leading machine learning and image analysis tools. Note that to view a 3GB COG I had to install the napari-tifffile-reader plugin.
  • pixel-adjust -> Interactively select and adjust specific pixels or regions within a single-band raster. Built with rasterio, matplotlib, and panel.
  • Plotly Dash can be used for making interactive dashboards
  • folium -> a python wrapper to the excellent leaflet.js which makes it easy to visualize data that’s been manipulated in Python on an interactive leaflet map. Also checkout the streamlit-folium component for adding folium maps to your streamlit apps
  • ipyearth -> An IPython Widget for Earth Maps
  • geopandas-view -> Interactive exploration of GeoPandas GeoDataFrames
  • geogif -> Turn xarray timestacks into GIFs
  • leafmap -> geospatial analysis and interactive mapping with minimal coding in a Jupyter environment
  • xmovie -> A simple way of creating movies from xarray objects
  • acquisition-time -> Drawing (Satellite) acquisition dates in a timeline
  • splot -> Lightweight plotting for geospatial analysis in PySAL
  • prettymaps -> A small set of Python functions to draw pretty maps from OpenStreetMap data
  • Tools to Design or Visualize Architecture of Neural Network
  • AstronomicAL -> An interactive dashboard for visualisation, integration and classification of data using Active Learning
  • pyodi -> A simple tool for explore your object detection dataset
  • Interactive-TSNE -> a tool that provides a way to visually view a PyTorch model's feature representation for better embedding space interpretability
  • fastgradio -> Build fast gradio demos of fastai learners
  • pysheds -> Simple and fast watershed delineation in python
  • mapboxgl-jupyter -> Use Mapbox GL JS to visualize data in a Python Jupyter notebook
  • cartoframes -> integrate CARTO maps, analysis, and data services into data science workflows
  • datashader -> create meaningful representations of large datasets quickly and flexibly. Read Creating Visual Narratives from Geospatial Data Using Open-Source Technology Maxar blog post
  • Kaleido -> Fast static image export for web-based visualization libraries with zero dependencies
  • flask-vector-tiles -> A simple Flask/leaflet based webapp for rendering vector tiles from PostGIS
  • Embedding Projector in Wandb -> allows users to plot multi-dimensional embeddings on a 2D plane using common dimension reduction algorithms like PCA, UMAP, and t-SNE

Streamlit

Streamlit is an awesome python framework for creating apps. Additionally they will host the apps free of charge. Here I list resources which are EO related. Note that a component is an addon which extends Streamlit's basic functionality

Cluster computing with Dask

Algorithms

  • WaterDetect -> an end-to-end algorithm to generate open water cover mask, specially conceived for L2A Sentinel 2 imagery. It can also be used for Landsat 8 images and for other multispectral clustering/segmentation tasks.
  • GatorSense Hyperspectral Image Analysis Toolkit -> This repo contains algorithms for Anomaly Detectors, Classifiers, Dimensionality Reduction, Endmember Extraction, Signature Detectors, Spectral Indices
  • detectree -> Tree detection from aerial imagery
  • pylandstats -> compute landscape metrics
  • dg-calibration -> Coefficients and functions for calibrating DigitalGlobe imagery
  • python-fmask -> Implementation in Python of the cloud and shadow algorithms known collectively as Fmask
  • pyshepseg -> Python implementation of image segmentation algorithm of Shepherd et al (2019) Operational Large-Scale Segmentation of Imagery Based on Iterative Elimination.
  • Shadow-Detection-Algorithm-for-Aerial-and-Satellite-Images -> shadow detection and correction algorithm
  • faiss -> A library for efficient similarity search and clustering of dense vectors, e.g. image embeddings
  • awesome-spectral-indices -> A ready-to-use curated list of Spectral Indices for Remote Sensing applications
  • urban-footprinter -> A convolution-based approach to detect urban extents from raster datasets
  • ocean_color -> Tools and algorithms for drone and satellite based ocean color science
  • poliastro -> pure Python library for interactive Astrodynamics and Orbital Mechanics, with a focus on ease of use, speed, and quick visualization
  • acolite -> generic atmospheric correction module

Julia language

Julia looks and feels a lot like Python, but can be much faster. Julia can call Python, C, and Fortran libraries and is capable of C/Fortran speeds. Julia can be used in the familiar Jupyterlab notebook environment

Movers and shakers on Github

Companies & organisations on Github

For a full list of companies, on and off Github, checkout awesome-geospatial-companies. The following lists companies with interesting Github profiles.

Courses

Books

Podcasts

Online communities

Jobs

Sign up for the geospatial-jobs-newsletter, and the Pangeo discourse lists multiple jobs globally. A list of company job portals is below:

About the author

My background is in optical physics, and I hold a PhD from Cambridge on the topic of localised surface plasmons. Since academia I have held a variety of roles, including doing research at Sharp Labs Europe, developing optical systems at Surrey Satellites (SSTL), and working at an IOT startup. It was whilst at SSTL that I started this repository as a personal resource. Over time I have steadily gravitated towards data analytics and software engineering with python, and I now work as a senior data scientist at Satellite Vu. Please feel free to connect with me on Twitter & LinkedIn, and please do let me know if this repository is useful to your work.

Linkedin: robmarkcole