This is the course project of EECS 442 Computer Vision (2018 Winter), University of Michigan.
- Shengyi Qian (@JasonQSY)
- Linyi Jin (@jinlinyi)
- Yichen Yang (@yangych29)
The left image is our network input, a gray-scale synthetic image. The right image is the network output; the colors follow the normal-mapping convention described at https://en.wikipedia.org/wiki/Normal_mapping
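The linked convention maps each component of a unit normal from [-1, 1] into a color channel in [0, 255]. A minimal sketch of that mapping (the helper name `normal_to_rgb` is ours, not from the codebase):

```python
import numpy as np

def normal_to_rgb(normals):
    """Map unit surface normals in [-1, 1] to RGB colors in [0, 255].

    normals: array of shape (H, W, 3) with unit-length vectors.
    Each component is rescaled from [-1, 1] to [0, 255], so a normal
    pointing straight at the camera, (0, 0, 1), becomes the familiar
    light-blue color of flat regions in a normal map.
    """
    return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)

# A flat surface facing the camera:
flat = np.zeros((2, 2, 3))
flat[..., 2] = 1.0
print(normal_to_rgb(flat)[0, 0])  # -> [127 127 255]
```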
The code is tested with Python 3.6. Required packages include:
- pytorch
- opencv
- numpy
- scipy
- imageio
- tqdm
It has only been tested on Ubuntu 16.04 LTS with CUDA, but it should run on any Unix-like platform.
To set up, create the experiment directory:

```bash
mkdir exp
```
To start training a new model:

```bash
python train.py -e sn_full -t sn
```
To continue training the model `sn_full`:

```bash
python train.py -c sn_full -e sn_full -t sn
```
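The exact meaning of the flags lives in `train.py`; judging from the commands above, `-e` names the experiment, `-c` names an existing experiment to continue from, and `-t` selects the task. A hypothetical argparse sketch under those assumptions:

```python
import argparse

# Hypothetical sketch of the CLI used above; the real definitions
# are in train.py and may differ in flag names and defaults.
parser = argparse.ArgumentParser()
parser.add_argument("-e", "--exp",
                    help="experiment name; snapshots go under exp/")
parser.add_argument("-c", "--continue_exp",
                    help="existing experiment to continue training from")
parser.add_argument("-t", "--task",
                    help="task name, e.g. sn for surface normals")

args = parser.parse_args(["-c", "sn_full", "-e", "sn_full", "-t", "sn"])
print(args.exp, args.continue_exp, args.task)  # -> sn_full sn_full sn
```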
The training code automatically saves snapshots named `${model}_${epoch}` under `exp`. For example, if we train a model `sn_full` for 10 epochs, there will be `sn_full_1`, `sn_full_2`, and so on under `exp`. These snapshots are used for validation.
To generate the predictions, run:

```bash
rm -rf save
mkdir save
python generate.py -c sn_full -e sn_full -t sn
```
Besides training and evaluation, we submit an ensemble of ConvNets to improve performance. This can be done with:

```bash
python ensemble.py
```
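`ensemble.py`'s exact strategy is not shown here; one common scheme for normal prediction is to average the per-pixel predictions of several models and renormalize to unit length. A sketch under that assumption (`ensemble_normals` is a hypothetical name):

```python
import numpy as np

def ensemble_normals(predictions):
    """Combine per-pixel normal predictions from several models.

    predictions: list of (H, W, 3) arrays of unit normals, one per model.
    We average the vectors and renormalize each pixel back to unit
    length. This is one plausible ensembling scheme; the actual
    ensemble.py may differ in detail.
    """
    mean = np.mean(predictions, axis=0)
    norm = np.linalg.norm(mean, axis=-1, keepdims=True)
    # Guard against zero-length averages (opposing predictions).
    return mean / np.clip(norm, 1e-8, None)
```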
Much of our code is adapted from `umich-vl/pose-ae-train`.