Face Orientation Classifier

A trained classifier that detects face orientation. The model uses transfer learning from a pre-trained ResNet. It detects up and down very well but struggles slightly with left and right, and could be improved further with more careful fine-tuning.

| | Up | Down | Left | Right | Total |
|---|---|---|---|---|---|
| Train Accuracy | 100% | 100% | 96.8% | 90% | 96.8% |
| Validation Accuracy | 96.8% | 100% | 93.8% | 81.2% | 93% |
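
The per-class and total accuracies above can be reproduced from the model's predictions with a small helper along these lines. This is only a sketch: the integer label order (`up`, `down`, `left`, `right`) is an assumption, and "Total" is computed here as overall accuracy, which matches the per-class average when the classes are balanced.

```python
# Sketch: per-class and overall accuracy from integer predictions and labels.
import numpy as np

LABELS = ["up", "down", "left", "right"]  # assumed label order

def accuracy_report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    report = {}
    for idx, name in enumerate(LABELS):
        mask = y_true == idx
        report[name] = float((y_pred[mask] == idx).mean()) if mask.any() else float("nan")
    report["total"] = float((y_pred == y_true).mean())
    return report

# Example:
# accuracy_report(np.array([0, 0, 1, 2, 3]), np.array([0, 0, 1, 2, 2]))
# -> {'up': 1.0, 'down': 1.0, 'left': 1.0, 'right': 0.0, 'total': 0.8}
```
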
- Download the CelebA dataset and save it as "img_align_celeba".
- Run ImageProcessor.py to resize the images and sort them into the correct folders. Note that you will need to create the "processedImages/" folder tree first: a train and a validate folder, each containing a folder per label "up", "down", "left", and "right". See Command #1 for help; a rough sketch of this preprocessing step is given after the command list.
- Clone the GitHub repo for TensorFlow Models and follow the instructions in the official folder README. Clone it into the same folder as this repo.
- Run build_image_data.py using Command #2; it also needs a labels.txt file (see the note after the command list). The "tfrecords/" folder will also have to be created. With the latest TensorFlow binaries you will need to add the following two lines at lines 77-78 of build_image_data.py:
  `import tensorflow.compat.v1 as tf`
  `tf.disable_eager_execution()`
- Download the latest pre-trained model linked in the official/r1/resnet/ README and rename it to "PreResNet/".
- Finally, you can run the code using Command #3. You will first need to make the following changes:
  - Line 714 of resnet_run_loop.py: change it to `classifier.export_saved_model("results/", input_receiver_fn)`
  - The final line of imagenet_preprocessing.py: change it to
    `image = tf.image.decode_image(image_buffer, channels=num_channels, expand_animations=False)`
    `image.set_shape([output_height, output_width, num_channels])`
- Explore the data using the Predict.ipynb notebook; a minimal sketch of loading the exported model is given after the command list.
- Command #1: `mkdir -p processedImages/train/up && mkdir -p processedImages/train/down && mkdir -p processedImages/train/left && mkdir -p processedImages/train/right && mkdir -p processedImages/validate/up && mkdir -p processedImages/validate/down && mkdir -p processedImages/validate/left && mkdir -p processedImages/validate/right`
- Command #2: `python models/research/inception/inception/data/build_image_data.py --train_directory processedImages/train/ --validation_directory processedImages/validate/ --output_directory tfrecords/ --labels_file labels.txt --train_shards 1024 --validation_shards 128`
- Command #3: `python models/official/r1/resnet/imagenet_main.py --data_dir tfrecords/ --pretrained_model_checkpoint_path PreResNet --fine_tune False --rv 2 --export_dir results/ --train_epoch 10`
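
For reference, the preprocessing that ImageProcessor.py performs (resizing CelebA crops and sorting rotated copies into the folder tree created by Command #1) could look roughly like the sketch below. This is not the actual script: the 224x224 size, the rotation-to-label mapping, the 80/20 split, and generating all four orientations per image are all assumptions.

```python
# Hypothetical sketch of the preprocessing step (the real ImageProcessor.py may differ):
# resize each aligned CelebA crop, create the four orientation classes by rotation,
# and split the results into train/ and validate/ folders.
import os
import random
from PIL import Image

SRC = "img_align_celeba"   # downloaded CelebA crops
DST = "processedImages"    # folder tree created by Command #1
SIZE = (224, 224)          # assumed network input size
ROTATIONS = {"up": 0, "left": 90, "down": 180, "right": 270}  # assumed label convention

files = sorted(f for f in os.listdir(SRC) if f.endswith(".jpg"))
random.seed(0)
random.shuffle(files)
split = int(0.8 * len(files))  # assumed 80/20 train/validate split

for i, name in enumerate(files):
    subset = "train" if i < split else "validate"
    img = Image.open(os.path.join(SRC, name)).convert("RGB").resize(SIZE)
    for label, angle in ROTATIONS.items():
        img.rotate(angle, expand=True).save(os.path.join(DST, subset, label, name))
```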
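Command #2 passes `--labels_file labels.txt`. build_image_data.py expects a plain-text file with one label per line, matching the class folder names; for this project it would contain something like the following (the ordering shown is an assumption):

```
up
down
left
right
```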
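After training, Command #3 together with the export_saved_model change above writes a SavedModel under results/. A minimal sketch of how Predict.ipynb might load it and inspect its serving signature is shown below; the "serving_default" key, input name, and input shape depend on the input_receiver_fn used at export, so treat them as assumptions and check the printed signature.

```python
# Sketch: load the SavedModel exported to results/ and inspect its serving signature.
import os
import tensorflow as tf

export_root = "results"
# export_saved_model() writes a timestamped subdirectory; pick the newest one.
latest = max(os.path.join(export_root, d) for d in os.listdir(export_root))
model = tf.saved_model.load(latest)

infer = model.signatures["serving_default"]   # assumed signature key
print(infer.structured_input_signature)       # expected input name(s), shape, dtype
print(infer.structured_outputs)               # prediction outputs (classes/probabilities)

# Once the input name and shape are known from the printout, a call looks like:
#   preds = infer(<input_name>=tf.zeros([1, 224, 224, 3], tf.float32))
```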