This project demonstrates that deep learning can be used to compress images to very low bitrates while retaining high quality. It was developed as an academic project for the B.Tech degree by Abhishek Jha, Avik Banik, Soumitra Maity and Md. Akram Zaki of Kalyani Government Engineering College. The repository contains two pre-trained encoder and decoder models, which we trained on Kaggle. Currently it works only on PNG images.
After cloning the git repo, use the conda package manager to install the required dependencies:
conda env create -f environment.yml
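Once the environment has been created, activate it before running any of the scripts. The environment name below is only a placeholder; use the name declared in environment.yml.

conda activate <env-name-from-environment.yml>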
Run encode.py. This script compresses all the images in the input directory and stores the compressed versions in the output directory.
usage: encode.py [-h] [--model [MODEL]] [--image [IMAGE]] [--out [OUT]]

optional arguments:
  -h, --help       show this help message and exit
  --model [MODEL]  Path for model checkpoint file [default: ./out/main.tar]
  --image [IMAGE]  Directory which holds the images to be compressed [default: ./dataset/]
  --out [OUT]      Directory which will hold the compressed images [default: ./out/compressed/]
sample:
python encode.py --image ../../someFolderContainingImages --out ../someFolder
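As a rough illustration of what the encoding step does conceptually (load a trained encoder, map each PNG to a compact latent tensor, and write that latent to disk as the compressed file), here is a minimal PyTorch sketch. The Encoder architecture, checkpoint keys and compressed-file format below are assumptions made for illustration; they are not taken from the repository's actual encode.py.

```python
# Illustrative sketch only -- the real encoder architecture, checkpoint format
# and compressed-file format live in this repository's source code.
import os
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

class Encoder(nn.Module):
    """Hypothetical convolutional encoder that downsamples an RGB image to a small latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 16, 4, stride=2, padding=1),  # 16-channel latent at 1/8 resolution
        )

    def forward(self, x):
        return self.net(x)

def compress_folder(image_dir="./dataset/", out_dir="./out/compressed/"):
    os.makedirs(out_dir, exist_ok=True)
    encoder = Encoder()
    # In the real script the weights would come from a checkpoint, e.g.:
    # encoder.load_state_dict(torch.load("./out/main.tar")["encoder"])  # key name is an assumption
    encoder.eval()
    to_tensor = transforms.ToTensor()
    for name in os.listdir(image_dir):
        if not name.lower().endswith(".png"):
            continue  # the project currently supports PNG images only
        img = Image.open(os.path.join(image_dir, name)).convert("RGB")
        with torch.no_grad():
            latent = encoder(to_tensor(img).unsqueeze(0))
        # Store the latent tensor as the "compressed" representation.
        torch.save(latent, os.path.join(out_dir, name + ".pt"))

if __name__ == "__main__":
    compress_folder()
```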
Run decode.py with the directory containing the compressed files as a parameter. The decoded files are saved to out/decompressed.
usage: decode.py [-h] [--model [MODEL]] [--compressed [COMPRESSED]]
                 [--out [OUT]]

optional arguments:
  -h, --help            show this help message and exit
  --model [MODEL]       Path for model checkpoint file [default: ./out/main.tar]
  --compressed [COMPRESSED]
                        Directory which holds the compressed files [default: ./out/compressed/]
  --out [OUT]           Directory which will hold the decompressed images [default: ./out/decompressed/]
sample:
python decode.py --compressed ../../someFolderContainingFiles
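Conceptually, decoding reverses the step above: load the trained decoder, read each compressed latent, and reconstruct an image. The sketch below mirrors the hypothetical encoder sketch; the Decoder architecture and file naming are likewise assumptions, not the repository's actual decode.py.

```python
# Illustrative sketch only -- pairs with the hypothetical encoder sketch above.
# For simplicity it assumes image dimensions divisible by 8.
import os
import torch
import torch.nn as nn
from torchvision.utils import save_image

class Decoder(nn.Module):
    """Hypothetical convolutional decoder that upsamples a latent back to an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(16, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def decompress_folder(compressed_dir="./out/compressed/", out_dir="./out/decompressed/"):
    os.makedirs(out_dir, exist_ok=True)
    decoder = Decoder()
    # In the real script the weights would come from a checkpoint, e.g.:
    # decoder.load_state_dict(torch.load("./out/main.tar")["decoder"])  # key name is an assumption
    decoder.eval()
    for name in os.listdir(compressed_dir):
        if not name.endswith(".pt"):
            continue
        latent = torch.load(os.path.join(compressed_dir, name))
        with torch.no_grad():
            img = decoder(latent)
        # Write the reconstruction back out as a PNG.
        save_image(img, os.path.join(out_dir, name[:-3]))  # strip the ".pt" suffix

if __name__ == "__main__":
    decompress_folder()
```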
To train a model, run train.py. Keep all the training images in the dataset folder; the script saves the training state for future training. To resume training, pass the checkpoint path parameter.
usage: train.py [-h] [--dataset-path [DATASET_PATH]]
                [--checkpoint-path [CHECKPOINT_PATH]] [--stop-at [STOP_AT]]
                [--save-at [SAVE_AT]]

optional arguments:
  -h, --help            show this help message and exit
  --dataset-path [DATASET_PATH]
                        Root directory of images
  --checkpoint-path [CHECKPOINT_PATH]
                        Use to resume training from the last checkpoint
  --stop-at [STOP_AT]   Epoch after which you want to end training
  --save-at [SAVE_AT]   Directory where the training state will be saved
sample:
python train.py --dataset-path ../input/ --stop-at 30 --save-at ./
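For a sense of how the --checkpoint-path, --stop-at and --save-at options typically fit together, here is a minimal, hypothetical training loop with checkpointing. The loss function, checkpoint keys and file name are assumptions and are not taken from the repository's train.py.

```python
# Illustrative checkpointing loop -- not the project's actual train.py.
import os
import torch

def train(model, optimizer, data_loader, checkpoint_path=None, stop_at=30, save_at="./"):
    start_epoch = 0
    if checkpoint_path is not None:
        # Resume from a previously saved training state.
        state = torch.load(checkpoint_path)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_epoch = state["epoch"] + 1

    loss_fn = torch.nn.MSELoss()  # reconstruction loss (assumed)
    for epoch in range(start_epoch, stop_at):
        for images in data_loader:
            optimizer.zero_grad()
            reconstruction = model(images)
            loss = loss_fn(reconstruction, images)
            loss.backward()
            optimizer.step()

        # Save the full training state so training can be resumed later.
        torch.save(
            {"model": model.state_dict(),
             "optimizer": optimizer.state_dict(),
             "epoch": epoch},
            os.path.join(save_at, "main.tar"),
        )
```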
We have created two models offering different levels of compression. You can use git to switch between them:
git checkout model2   # to use model2
git checkout master   # to use model1
Run test.py to compare the original images with the decompressed ones:

python test.py

Before running test.py, please make sure the original images are in the dataset folder and the decompressed images are in out/decompressed. Alternatively, keep some images in dataset and run encode.py followed by decode.py with no parameters.
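As an example of the kind of comparison such a test can perform, the snippet below computes PSNR between each original image and its decompressed counterpart. The metric and directory layout are assumptions; test.py may report something different.

```python
# Hedged example: compare originals with reconstructions using PSNR.
# Assumes each decompressed image has the same name and size as its original.
import os
import numpy as np
from PIL import Image

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

original_dir = "./dataset/"
decompressed_dir = "./out/decompressed/"

for name in sorted(os.listdir(original_dir)):
    if not name.lower().endswith(".png"):
        continue
    original = np.array(Image.open(os.path.join(original_dir, name)).convert("RGB"))
    reconstructed = np.array(Image.open(os.path.join(decompressed_dir, name)).convert("RGB"))
    print(f"{name}: PSNR = {psnr(original, reconstructed):.2f} dB")
```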
Contributions are welcome.