This repository implements a convolutional autoencoder with SetNet, trained on the Stanford Cars Dataset.
- Python 3.5
- PyTorch 0.4
We use the Cars Dataset, which contains 16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, with each class divided roughly 50-50 between the two.
You can download it from the Cars Dataset page:
$ cd Autoencoder/data
$ wget http://imagenet.stanford.edu/internal/car196/cars_train.tgz
$ wget http://imagenet.stanford.edu/internal/car196/cars_test.tgz
$ wget --no-check-certificate https://ai.stanford.edu/~jkrause/cars/car_devkit.tgz
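The downloaded archives then need to be unpacked. A minimal sketch in Python is shown below; it assumes it is run inside `Autoencoder/data`, where the `.tgz` files were saved (the repository's own scripts may handle this step differently).

```python
import tarfile

# Extract the three downloaded archives in place
# (assumes this script is run inside Autoencoder/data).
for name in ['cars_train.tgz', 'cars_test.tgz', 'car_devkit.tgz']:
    with tarfile.open(name) as tar:
        tar.extractall('.')
```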
Extract the 8,144 training images and split them 80:20 into training and validation sets (6,515 for training, 1,629 for validation):
$ python pre_process.py
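For orientation, an 80:20 split could be done as in the sketch below. This is only an illustration of the idea: the actual `pre_process.py` may differ (for example, it may also crop images using the devkit annotations), and the output folder layout here is an assumption chosen so the later sketches can read it.

```python
import os
import random
import shutil

# Minimal sketch of the 80:20 split, run from the repository root.
src = 'data/cars_train'
images = sorted(os.listdir(src))
random.seed(0)
random.shuffle(images)

num_train = int(len(images) * 0.8)             # 6,515 of the 8,144 images
splits = {'train': images[:num_train], 'valid': images[num_train:]}

for split, names in splits.items():
    dst = os.path.join('data', split, 'cars')  # extra level so ImageFolder can read it later
    os.makedirs(dst, exist_ok=True)
    for name in names:
        shutil.copy(os.path.join(src, name), os.path.join(dst, name))
```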
Then train the autoencoder:
$ python train.py
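For reference, a training loop for a convolutional autoencoder in PyTorch could look like the sketch below. The tiny encoder/decoder here is only a stand-in for the SetNet-based model that `train.py` actually builds, and the folder layout follows the pre-processing sketch above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Toy encoder/decoder pair; the real model in train.py (SetNet-based) is larger.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
)

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Assumes the split above produced data/train/cars/... and data/valid/cars/...
train_set = datasets.ImageFolder('data/train', transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for images, _ in loader:
        recon = model(images)
        loss = criterion(recon, images)   # reconstruct the input image
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print('epoch {}: loss {:.4f}'.format(epoch, loss.item()))
```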
Download the pre-trained model weights into the "models" folder, then run:
$ python demo.py
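Roughly, the demo step loads the downloaded weights, reconstructs a handful of validation images, and saves each input/output pair into the "images" folder. The sketch below illustrates this; the stand-in model and the checkpoint filename are assumptions, not the repository's actual names.

```python
import os
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

# Same toy autoencoder as in the training sketch; demo.py builds the real
# SetNet-based model and loads the downloaded weights from "models".
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
)
model.load_state_dict(torch.load('models/autoencoder.pt'))  # hypothetical checkpoint name
model.eval()

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
os.makedirs('images', exist_ok=True)

valid_dir = 'data/valid/cars'
with torch.no_grad():
    for i, name in enumerate(sorted(os.listdir(valid_dir))[:10]):
        img = transform(Image.open(os.path.join(valid_dir, name)).convert('RGB'))
        out = model(img.unsqueeze(0)).squeeze(0)
        save_image(img, 'images/{}_image.png'.format(i))  # input
        save_image(out, 'images/{}_out.png'.format(i))    # reconstruction
```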
Then check the results in the "images" folder; they should look something like this:
Input | Output |
---|---|