Published in: 2021 2nd Global Conference for Advancement in Technology (GCAT)
Date of Conference: 1-3 Oct. 2021
Date Added to IEEE Xplore: 13 November 2021
ISBN Information:
Electronic ISBN: 978-1-6654-1836-2
Print on Demand (PoD) ISBN: 978-1-6654-3070-8
INSPEC Accession Number: 21297225
DOI: 10.1109/GCAT52182.2021.9587677
Publisher: IEEE
Conference Location: Bangalore, India
The Indian Space Research Organization (ISRO) is currently carrying out various research projects on the planet Mars. In 2013 it launched a Mars orbiter to collect data that can be used for in-depth study of the planet’s topography, temperature, atmosphere, and more.
One research area that ISRO is currently focusing on is estimating the depths of valleys and canyon systems on Mars. At present, one common method for estimating depth relies on LiDAR technology.
We propose a method that uses a Generative Adversarial Network to estimate such depth maps from a single input image, without hardware such as stereo cameras or LiDAR. The resulting depth map can then be used to build 3D models or simulations of the surface for research and for planning future missions.
No dataset currently exists that pairs Martian satellite images with corresponding depth maps, so we created our own for this project. The dataset consists of two types of images:
- Satellite images taken by ISRO’s Mars orbiter under different lighting conditions.
- Depth images obtained from NASA’s MOLA map.
- We gathered 1:1 identical pairs of images from the ISRO satellite imagery and NASA’s MOLA depth map, respectively, along the Valles Marineris canyon system at a map scale of 53 km.
- A total of 38 images were collected, captured by MOM at different points in its orbits.
- Each image was augmented into a set of 10 images, giving a total of 380 images.
- Different augmentation techniques were used, such as:
  - Resize
  - Rotate
  - Shift-scale
  - Center crop
  - Horizontal flip
  - Vertical flip
  - Blur
  - Brightness
  - Contrast
  - Hue/saturation
- These images were randomly partitioned into training and testing sets.
- Each training image is of size 256×256 pixels.
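The augmentation and train/test split described above can be sketched as follows. This is a minimal numpy illustration, not our actual pipeline: only flips, 90° rotations, and brightness shifts are shown, and the function names (`augment`, `train_test_split`) are placeholders. Note that geometric transforms must be applied identically to a satellite image and its paired depth map to keep the pair aligned.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, n_variants=10):
    """Produce n_variants augmented copies of one image (sketch only).

    The full pipeline in the text also uses resizing, shift-scale,
    center crop, blur, contrast, and hue/saturation changes.
    """
    variants = []
    for _ in range(n_variants):
        out = image.copy()
        if rng.random() < 0.5:
            out = out[:, ::-1]                         # horizontal flip
        if rng.random() < 0.5:
            out = out[::-1, :]                         # vertical flip
        out = np.rot90(out, k=int(rng.integers(0, 4)))  # random 90° rotation
        out = np.clip(out * rng.uniform(0.8, 1.2), 0, 255)  # brightness jitter
        variants.append(out.astype(image.dtype))
    return variants

def train_test_split(pairs, train_frac=0.8):
    """Randomly partition (satellite, depth) pairs into train/test sets."""
    idx = rng.permutation(len(pairs))
    cut = int(train_frac * len(pairs))
    return [pairs[i] for i in idx[:cut]], [pairs[i] for i in idx[cut:]]
```

Applied to 38 source images with 10 variants each, this yields the 380-image dataset before partitioning.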
We used the Pix2Pix model to solve this task.
We tested the model with two loss functions: the vanilla GAN loss and the LSGAN loss.
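The difference between the two discriminator objectives can be sketched as follows. This is a minimal numpy illustration of the standard formulas, not our training code; the function names are placeholders.

```python
import numpy as np

def vanilla_gan_d_loss(d_real, d_fake):
    """Binary cross-entropy discriminator loss of the original GAN:
    -log D(x) - log(1 - D(G(z))), where D outputs probabilities in (0, 1)."""
    eps = 1e-12  # avoid log(0)
    return float(np.mean(-np.log(d_real + eps) - np.log(1.0 - d_fake + eps)))

def lsgan_d_loss(d_real, d_fake):
    """Least-squares (LSGAN) discriminator loss:
    (D(x) - 1)^2 + D(G(z))^2, where D outputs unbounded scores."""
    return float(np.mean((d_real - 1.0) ** 2 + d_fake ** 2))
```

The LSGAN variant replaces the sigmoid cross-entropy with a least-squares penalty, which keeps gradients non-vanishing for samples the discriminator classifies confidently but incorrectly.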
@inproceedings{isola2017image,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
booktitle={Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on},
year={2017}
}