
AffordanceNet - Multiclass Instance Segmentation Framework - ICRA 2018


By Thanh-Toan Do*, Anh Nguyen*, Ian Reid (* equal contribution)


Contents

  1. Requirements
  2. Installation
  3. Demo
  4. Training
  5. Notes

Requirements

  1. Caffe

    • Caffe must be built with support for Python layers.
  2. Hardware

    • To train a full AffordanceNet, you'll need a GPU with ~11GB of memory (e.g. Titan, K20, K40, Tesla, ...).
    • To test a full AffordanceNet, you'll need a GPU with ~6GB of memory.
  3. [Optional] Robot Demo - This option is from the original repo. Since this repo is now used only as a submodule of the rail_part_affordance_detection repo, check out that repo for a more up-to-date robot demo.

Pre-installation Actions

This package has been tested on Ubuntu 18.04 with CUDA 9.2 and CUDA 10.1. Using one of these CUDA versions should ensure a smooth installation.

Installation

Caffe needs to be built from source since this package uses customized Caffe functions.

  1. Clone the AffordanceNet repository into your $AffordanceNet_ROOT folder if you haven't already done so.

  2. Build Caffe and pycaffe from source:

    • cd $AffordanceNet_ROOT/caffe-affordance-net
    • Now follow the Caffe installation instructions to make sure you have all of the requirements installed.
      • Check out the step-by-step instructions for Ubuntu installation and make sure the dependencies under General dependencies, BLAS, and Python for Ubuntu (< 17.04) are installed. For BLAS, just use ATLAS.
      • Go back to the main page and follow the instructions to install necessary python packages using pip.
    • Copy an existing Makefile.config from $AffordanceNet_ROOT/caffe-affordance-net/makefile_config_template to $AffordanceNet_ROOT/caffe-affordance-net. Make sure WITH_PYTHON_LAYER := 1 is set, since this repo requires Caffe's Python layer support (see Requirements).
    • make -j8
    • If you run into problems, this webpage usually provides solutions.
    • make pycaffe
    • export PYTHONPATH=$AffordanceNet_ROOT/caffe-affordance-net/python:$PYTHONPATH
  3. Build the Cython modules:

    • cd $AffordanceNet_ROOT/lib
    • make
  4. Download the pretrained weights (Google Drive, One Drive). These weights were trained on the training set of the IIT-AFF dataset:

    • Extract the file you downloaded to $AffordanceNet_ROOT
    • Make sure the caffemodel file ends up at $AffordanceNet_ROOT/pretrained/AffordanceNet_200K.caffemodel. A quick sanity check covering steps 2-4 is sketched after this list.
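
The following snippet is a quick sanity check for the steps above. It is not part of the repo: it assumes you have exported AffordanceNet_ROOT as an environment variable, and the module name utils.cython_bbox follows the py-faster-rcnn layout this project is based on.

# sanity_check.py -- a quick post-install check (not part of the repo).
# Assumes the AffordanceNet_ROOT environment variable is set.
import os
import sys

root = os.environ['AffordanceNet_ROOT']
sys.path.insert(0, os.path.join(root, 'caffe-affordance-net', 'python'))
sys.path.insert(0, os.path.join(root, 'lib'))

import caffe                                  # fails if pycaffe was not built (step 2)
caffe.set_mode_gpu()

from utils.cython_bbox import bbox_overlaps   # fails if the Cython modules were not built (step 3)

weights = os.path.join(root, 'pretrained', 'AffordanceNet_200K.caffemodel')
assert os.path.isfile(weights), 'pretrained weights not found (step 4)'
print('Installation looks OK.')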

Demo

After successfully completing installation, you'll be ready to run the demo.

  1. Demo on static images:

    • cd $AffordanceNet_ROOT/tools
    • python demo_img.py
    • You should see the detected objects and their affordances. (A sketch of the underlying forward pass follows this list.)
  2. (Optional) Demo on depth camera (such as Asus Xtion):

    • With AffordanceNet and a depth camera, you can select the object of interest and its affordances for robotic applications such as grasping, pouring, etc.
    • First, launch your depth camera with ROS, OpenNI, etc.
    • cd $AffordanceNet_ROOT/tools
    • python demo_asus.py
    • You may want to change the object id and/or affordance id (lines 380 and 381 in demo_asus.py). Currently, we select the bottle and its grasp affordance.
    • The 3D grasp pose can be visualized with RViz. [figure: affordance-net-asus]
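
For reference, the core of the static-image demo is a py-faster-rcnn-style forward pass. The sketch below is illustrative only: the prototxt path, the input image path, and the exact return values of im_detect are assumptions (this repo modifies the test code so that it also produces the affordance masks); see tools/demo_img.py and lib/fast_rcnn/test.py for the real code.

# Illustrative sketch of what demo_img.py does; paths and the im_detect
# return signature are assumptions -- see tools/demo_img.py for the real code.
import cv2
import caffe
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect  # modified in this repo to also output masks

cfg.TEST.HAS_RPN = True  # proposals come from the RPN, not precomputed boxes

# Assumed locations, relative to $AffordanceNet_ROOT:
prototxt = 'models/pascal_voc/VGG16/faster_rcnn_end2end/test.prototxt'
caffemodel = 'pretrained/AffordanceNet_200K.caffemodel'

caffe.set_mode_gpu()
net = caffe.Net(prototxt, caffemodel, caffe.TEST)

im = cv2.imread('tools/img/example.jpg')  # hypothetical input image
# In plain py-faster-rcnn, im_detect(net, im) returns (scores, boxes);
# AffordanceNet's modified version additionally returns the affordance masks.
outputs = im_detect(net, im)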

Training

  1. We train AffordanceNet on the IIT-AFF dataset

    • The IIT-AFF dataset must be formatted in the Pascal VOC style for training.
    • For your convenience, we have already done this: just download this file (Google Drive, One Drive) and extract it into your $AffordanceNet_ROOT folder.
    • The extracted folder should contain three sub-folders: $AffordanceNet_ROOT/data/cache, $AffordanceNet_ROOT/data/imagenet_models, and $AffordanceNet_ROOT/data/VOCdevkit2012.
  2. Train AffordanceNet:

    • cd $AffordanceNet_ROOT
    • ./experiments/scripts/faster_rcnn_end2end.sh [GPU_ID] [NET] [DATASET] [--set ...]
    • e.g.: ./experiments/scripts/faster_rcnn_end2end.sh 0 VGG16 pascal_voc
    • We use the pascal_voc dataset alias even though we are actually training on IIT-AFF, since the data was formatted in the Pascal VOC style above. (The --set mechanism is sketched below.)
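
The optional --set KEY VALUE pairs are forwarded to the py-faster-rcnn config system this repo builds on. Purely as a sketch of the same mechanism from Python (the key used here is just an example):

# Hypothetical config override, equivalent in spirit to --set on the command line.
from fast_rcnn.config import cfg, cfg_from_list

cfg_from_list(['TRAIN.SNAPSHOT_ITERS', '10000'])  # alternating KEY, VALUE strings
print(cfg.TRAIN.SNAPSHOT_ITERS)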

Notes

  1. AffordanceNet vs. Mask-RCNN: AffordanceNet can be considered a generalization of Mask-RCNN to the case where each instance contains multiple classes.
  2. The current network architecture is slightly different from the paper, but it achieves the same accuracy.
  3. Train AffordanceNet on your data:
    • Format your images as in Pascal-VOC dataset (as in $AffordanceNet_ROOT/data/VOCdevkit2012 folder).
    • Prepare the affordance masks (as in the $AffordanceNet_ROOT/data/cache folder): for each object in the image, create a mask and save it as a .sm file. See $AffordanceNet_ROOT/utils for details; a minimal sketch of the idea follows this list.
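
The authoritative mask format and file naming are defined by the scripts in $AffordanceNet_ROOT/utils. The sketch below only illustrates the general idea, assuming each mask is a per-object label map (one affordance id per pixel) serialized with cPickle; the file name here is hypothetical.

# Illustrative only: the real format/naming is defined in $AffordanceNet_ROOT/utils.
import cPickle  # this codebase is Python 2 / Caffe era
import numpy as np

# Hypothetical mask for one object: an HxW label map, one affordance id
# per pixel (0 = background).
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 150:300] = 1  # e.g. id 1 = the 'grasp' region of this object

with open('00001_1_segmask.sm', 'wb') as f:  # hypothetical file name
    cPickle.dump(mask, f, cPickle.HIGHEST_PROTOCOL)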

Debug

  1. If ImportError: No module named Image occurs when running the demo, change the import in $AffordanceNet_ROOT/lib/fast_rcnn/test.py from import Image to from PIL import Image, as shown below:
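
# In $AffordanceNet_ROOT/lib/fast_rcnn/test.py, replace:
#     import Image
# with:
from PIL import Image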

Citing AffordanceNet

If you find AffordanceNet useful in your research, please consider citing:

@inproceedings{AffordanceNet18,
  title={AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection},
  author={Do, Thanh-Toan and Nguyen, Anh and Reid, Ian},
  booktitle={International Conference on Robotics and Automation (ICRA)},
  year={2018}
}

If you use IIT-AFF dataset, please consider citing:

@inproceedings{Nguyen17,
  title={Object-Based Affordances Detection with Convolutional Neural Networks and Dense Conditional Random Fields},
  author={Nguyen, Anh and Kanoulas, Dimitrios and Caldwell, Darwin G and Tsagarakis, Nikos G},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2017}
}

License

MIT License

Acknowledgement

This repo reuses a lot of source code from Faster-RCNN.

Contact

If you have any questions or comments, please send us an email: thanh-toan.do@adelaide.edu.au and anh.nguyen@iit.it