Denoise Autoencoder For 3D Cameras
Generating accurate depth frames from infrared images is challenging due to environmental noise created by waves (e.g. electromagnetic, sound, heat). Knowing the exact depth of objects seen by 3D cameras is critical for avoiding drone and robot crashes. In this project I trained conventional, U-Net, and GAN autoencoder networks on depth and infrared frames to remove unwanted noise and predict the exact placement of objects in each frame.
The goal is to train the neural network so that the denoised frames look as similar to the ground-truth frames as possible.
Use the package manager pip to install TensorFlow and Keras.
pip install tensorflow-gpu
pip install keras
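
With TensorFlow and Keras installed, the sketch below shows a minimal convolutional denoising autoencoder in the spirit of the networks described above: it learns to map a noisy frame to its ground-truth counterpart. The input resolution, filter counts, and MSE loss here are illustrative assumptions, not the exact conventional, U-Net, or GAN architectures trained in this project.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_denoising_autoencoder(input_shape=(128, 128, 1)):
    # Assumed single-channel frames at 128x128; adjust to the real frame size.
    inputs = layers.Input(shape=input_shape)

    # Encoder: compress the noisy frame into a smaller representation.
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(2, padding="same")(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2, padding="same")(x)

    # Decoder: reconstruct a clean, ground-truth-like frame.
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

    model = Model(inputs, outputs, name="denoising_autoencoder")
    # Pixel-wise MSE between the denoised frame and the ground-truth frame.
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_denoising_autoencoder()
model.summary()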
Project directory structure:

.
├───cropped_images
│   ├───ir
│   ├───noisy
│   └───pure
├───cropped_tests
│   ├───depth
│   │   └───res-*
│   └───ir
│       └───left-*
├───denoised
├───diff_compare
│   ├───colored_diff_denoised
│   ├───colored_diff_tested
│   ├───diff_denoised
│   ├───diff_tested
│   └───logs
├───normalized
├───real_scenes_png
├───real_scenes_raw
├───tests
│   ├───depth
│   ├───ir
│   ├───masked depth
│   └───pure
└───train
    ├───ir
    ├───masked_noisy
    ├───masked_pure
    ├───noisy
    └───pure
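
As a rough sketch of how training pairs from the tree above might be loaded, the example below reads matching frames from train/noisy and train/pure. The grayscale image format, the 128x128 resize, and the sorted-filename pairing are illustrative assumptions rather than details confirmed by this repository.

import os
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def load_frames(folder, target_size=(128, 128)):
    # Assumes frames are stored as grayscale images with matching sorted names.
    frames = []
    for name in sorted(os.listdir(folder)):
        img = load_img(os.path.join(folder, name),
                       color_mode="grayscale", target_size=target_size)
        frames.append(img_to_array(img) / 255.0)  # scale pixel values to [0, 1]
    return np.stack(frames)

noisy = load_frames("train/noisy")  # network inputs
pure = load_frames("train/pure")    # ground-truth targets

# Fit the autoencoder from the sketch in the installation section above.
model.fit(noisy, pure, epochs=50, batch_size=16, validation_split=0.1)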
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.