# AlexNet Implementation with Theano
Demonstration of training AlexNet in Python with Theano. Please see (and kindly cite, if you find it helpful :) this technical report (link coming soon) for a high-level description. The theano_multi_gpu directory provides a toy example of how to use 2 GPUs to train an MLP on the MNIST data.
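For orientation, below is a drastically simplified, hypothetical sketch of the two-process pattern such a toy example can follow, assuming the old theano.sandbox.cuda backend. The real theano_multi_gpu example uses PyCUDA for inter-process communication (see the acknowledgements); here the two workers merely average their weights through multiprocessing queues, and a tiny softmax model on random data stands in for an MLP on MNIST:

```python
# Hypothetical 2-GPU sketch: two processes, each bound to one GPU via the
# old CUDA backend, periodically averaging their parameters. The real
# theano_multi_gpu example uses PyCUDA IPC instead of queues.
import multiprocessing as mp
import numpy as np

def worker(gpu_name, in_q, out_q):
    import theano.sandbox.cuda
    theano.sandbox.cuda.use(gpu_name)          # bind this process to one GPU
    import theano
    import theano.tensor as T

    rng = np.random.RandomState(0)
    # A one-layer softmax classifier on random data stands in for the MLP.
    W = theano.shared(np.zeros((784, 10), dtype='float32'))
    x = T.matrix('x', dtype='float32')
    y = T.ivector('y')
    p = T.nnet.softmax(T.dot(x, W))
    loss = -T.mean(T.log(p)[T.arange(y.shape[0]), y])
    step = theano.function([x, y], loss,
                           updates=[(W, W - 0.1 * T.grad(loss, W))])

    for _ in range(10):
        xb = rng.randn(64, 784).astype('float32')
        yb = rng.randint(0, 10, size=64).astype('int32')
        step(xb, yb)
        out_q.put(W.get_value())               # send local weights to the peer
        W.set_value((W.get_value() + in_q.get()) / 2.0)  # average with peer

if __name__ == '__main__':
    q01, q10 = mp.Queue(), mp.Queue()
    p0 = mp.Process(target=worker, args=('gpu0', q10, q01))
    p1 = mp.Process(target=worker, args=('gpu1', q01, q10))
    p0.start(); p1.start()
    p0.join(); p1.join()
```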
## Preprocessing

Download the ImageNet dataset and unzip the image files.

Preprocessing involves shuffling the training images, generating data batches, computing the mean image, and generating label files (a hedged sketch of the mean-image step is given after the list below).
- Set paths in preprocessing/paths.yaml. Each path is described in that file.
- Run preprocessing/generate_data.sh, which calls 3 Python scripts that perform all of the steps above. It runs for about 1 to 2 days. For a quick trial of the code, run preprocessing/generate_toy_data.sh instead, which takes about 10 minutes.
preprocessing/lists.txt is a static file listing the files that running generate_data.sh should create.
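To illustrate the mean-image computation mentioned above, here is a hedged sketch that assumes batches are stored as .npy arrays with the batch dimension last; the actual file format and layout are determined by the preprocessing scripts, and the paths here (train_batches/, img_mean.npy) are hypothetical:

```python
# Illustrative sketch of the mean-image step. Assumes batches of shape
# (channels, height, width, batch_size); paths and formats are made up.
import glob
import numpy as np

batch_files = sorted(glob.glob('train_batches/*.npy'))  # hypothetical path
mean_sum = None
n_images = 0
for path in batch_files:
    batch = np.load(path).astype('float64')
    s = batch.sum(axis=-1)                    # sum over the batch dimension
    mean_sum = s if mean_sum is None else mean_sum + s
    n_images += batch.shape[-1]

img_mean = (mean_sum / n_images).astype('float32')
np.save('img_mean.npy', img_mean)             # hypothetical output name
```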
## Training

config.yaml contains configurations common to both the 1-GPU and 2-GPU versions. spec_1gpu.yaml and spec_2gpu.yaml contain configurations specific to the 1-GPU and 2-GPU versions, respectively. If you change preprocessing/paths.yaml, make sure to update the corresponding paths in config.yaml, spec_1gpu.yaml, and spec_2gpu.yaml.
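As a hedged illustration of how the shared and version-specific configs might be combined at startup (assuming PyYAML; the actual loading code in train.py may differ, and the override behavior shown here is an assumption):

```python
# Sketch: merge the shared config with the version-specific spec.
# Key semantics are assumptions, not the repo's guaranteed behavior.
import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)
with open('spec_1gpu.yaml') as f:
    config.update(yaml.safe_load(f))  # spec values take precedence

print(config)
```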
For the 1-GPU version, run:

```
THEANO_FLAGS=mode=FAST_RUN,floatX=float32 python train.py
```

For the 2-GPU version, run:

```
THEANO_FLAGS=mode=FAST_RUN,floatX=float32 python train_2gpu.py
```
Validation error and loss values are stored in weights_dir/val_record.npy.
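For example, the record can be inspected with NumPy; the exact array layout (which columns hold error vs. loss) is an assumption to verify against train.py:

```python
# Load the saved validation record; 'weights_dir' stands for whatever
# directory the config files point to. The column layout is an
# assumption -- check train.py for the exact format.
import numpy as np

val_record = np.load('weights_dir/val_record.npy')
print(val_record.shape)   # one row per validation pass (assumed)
print(val_record)
```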
Note that we do not set the device to gpu in THEANO_FLAGS; instead, users control which GPU(s) to use through spec_1gpu.yaml and spec_2gpu.yaml.
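A minimal sketch of how such spec-driven device selection can work with the old Theano CUDA backend (the same call pattern as in the multi-GPU sketch near the top); the 'gpu' key is an assumed field name, not necessarily the one the spec files actually use:

```python
# Hypothetical sketch: bind the current process to the GPU named in the
# spec file before any Theano graph is built. The 'gpu' key is assumed.
import yaml

with open('spec_1gpu.yaml') as f:
    spec = yaml.safe_load(f)

import theano.sandbox.cuda
theano.sandbox.cuda.use(spec['gpu'])  # e.g. 'gpu0'
```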
## Acknowledgements

- Frédéric Bastien, for providing the example of Using Multiple GPUs
- Lev Givon, for help with inter-process communication between 2 GPUs with PyCUDA (Lev's original script: https://gist.github.com/lebedov/6408165)
- Guangyu Sun, for help with debugging the code