Anime-Face-Detector

A Faster-RCNN based anime face detector.

This detector is trained on 6000 training samples and 641 testing samples, randomly selected from a dataset crawled from the top 100 Pixiv daily ranking.

Thanks to nagadomi's OpenCV-based anime face detector, which helped with labelling the data.

The original TensorFlow implementation of Faster-RCNN can be found here

Dependencies

  • Python 3.6 or 3.7
  • tensorflow < 2.0
  • opencv-python
  • cython (optional; can be skipped by passing the additional -nms-type PY_NMS argument)
  • Pre-trained ResNet101 model
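As a quick, optional sanity check (just a sketch, not part of the repository), the installed environment can be verified like this:

# Optional environment sanity check (illustrative only, not part of the repo).
import sys

import cv2
import tensorflow as tf

assert sys.version_info[:2] in ((3, 6), (3, 7)), "Python 3.6 or 3.7 is expected"
assert int(tf.__version__.split('.')[0]) < 2, "TensorFlow 1.x is required"
print("TensorFlow", tf.__version__, "| OpenCV", cv2.__version__)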

Usage

  1. Clone this repository
    git clone https://github.com/qhgz2013/anime-face-detector.git
  2. Download the pre-trained model (Google Drive: here; Baidu Netdisk: here)
  3. Unzip the model file into the model directory
  4. Install python & pipenv
    pyenv install
    pipenv install
  5. Build the CPU NMS module (skip this step if using PY_NMS via the -nms-type PY_NMS argument)
    pipenv run setup
    If you are using Windows PowerShell, type cmd /C make.bat to run the build script.
  6. Run the demo as needed:
    • Visualize the result (without output path):
      pipenv run main -i /path/to/image.jpg
    • Save results to a JSON file:
      pipenv run main -i /path/to/image.jpg -o /path/to/output.json
      Format: {"image_path": [{"score": predicted_probability, "bbox": [min_x, min_y, max_x, max_y]}, ...], ...} (see the parsing sketch after this list). Sample output file:
      {"/path/to/image.jpg": [{"score": 0.9999708, "bbox": [551.3375, 314.50253, 729.2599, 485.25674]}]}
    • Detect a whole directory recursively:
      pipenv run main -i /path/to/dir -o /path/to/output.json
    • Customize thresholds:
      pipenv run main -i /path/to/image.jpg -nms 0.3 -conf 0.8
    • Customize model path
      pipenv run main -i /path/to/image.jpg -model /path/to/model.ckpt
    • Customize the NMS type (CPU_NMS and PY_NMS are supported; GPU_NMS is not supported because of the complicated build process on the Windows platform):
      pipenv run main -i /path/to/image.jpg -nms-type PY_NMS
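A minimal sketch (not part of the repository) of reading the output JSON described above and drawing the detected boxes with OpenCV; the paths are placeholders:

# Read the detector's JSON output and draw the boxes with OpenCV (illustration only).
# Layout: {"image_path": [{"score": ..., "bbox": [min_x, min_y, max_x, max_y]}, ...], ...}
import json
import cv2

with open('/path/to/output.json') as f:
    detections = json.load(f)

for image_path, faces in detections.items():
    image = cv2.imread(image_path)
    if image is None:
        continue
    for face in faces:
        x1, y1, x2, y2 = (int(round(v)) for v in face['bbox'])
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.putText(image, f"{face['score']:.3f}", (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.imwrite(image_path + '.detected.jpg', image)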

Results

Mean AP for this model: 0.9086

Sample detection results (images omitted). Artwork credits:

  • 東方まとめ by 羽々斬
  • 【C94】桜と刀 by 幻像黒兎
  • アイドルマスター シンデレラガールズ by 我美蘭@1日目 東A-40a

About training

This model is trained directly with the Faster-RCNN training script, using the following arguments:

python tools/trainval_net.py --weight data/imagenet_weights/res101.ckpt --imdb voc_2007_trainval --imdbval voc_2007_test --iters 60000 --cfg experiments/cfgs/res101.yml --net res101 --set ANCHOR_SCALES "[4,8,16,32]" ANCHOR_RATIOS "[1]" TRAIN.STEPSIZE "[50000]"
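The ANCHOR_SCALES and ANCHOR_RATIOS settings control the anchor boxes generated by the region proposal network. Below is a rough, self-contained sketch of how standard Faster-RCNN anchor generation interprets these values (an illustration, not the repository's actual code); presumably the single 1:1 ratio was chosen because face bounding boxes are roughly square.

# Illustrative sketch of standard Faster-RCNN anchor generation (not the repo's code).
import numpy as np

def make_anchors(base_size=16, ratios=(1.0,), scales=(4, 8, 16, 32)):
    # Anchors are centred on one feature-map cell of size base_size x base_size.
    cx = cy = (base_size - 1) / 2.0
    anchors = []
    for ratio in ratios:
        # Keep the base area roughly constant while changing the aspect ratio.
        w = base_size / np.sqrt(ratio)
        h = base_size * np.sqrt(ratio)
        for scale in scales:
            ws, hs = w * scale, h * scale
            anchors.append([cx - (ws - 1) / 2.0, cy - (hs - 1) / 2.0,
                            cx + (ws - 1) / 2.0, cy + (hs - 1) / 2.0])
    return np.array(anchors)

# With ratio 1 and scales [4, 8, 16, 32] this yields square anchors of
# roughly 64, 128, 256 and 512 pixels per side.
print(make_anchors())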

Dataset

We've uploaded the dataset to Google Drive here. The dataset structure is similar to VOC2007 (as used in the original Faster-RCNN implementation).
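As a rough sketch, assuming the annotations follow the standard VOC2007 XML schema (the exact field names in this dataset may differ), a single annotation file can be read like this:

# Sketch: read bounding boxes from a VOC2007-style annotation file.
# Assumes the standard VOC fields; verify against the actual dataset.
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall('object'):
        label = obj.find('name').text
        bndbox = obj.find('bndbox')
        coords = [int(float(bndbox.find(tag).text))
                  for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
        boxes.append((label, coords))
    return boxes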

Citation and declaration

Feel free to cite this repository and the dataset. This work is not related to my research team or lab; it is purely a personal-interest project.