OakInk

A Large-scale Knowledge Repository for Understanding Hand-Object Interaction

CVPR, 2022
Lixin Yang* · Kailin Li* · Xinyu Zhan* · Fei Wu · Anran Xu · Liu Liu · Cewu Lu
* = equal contribution

Paper PDF · ArXiv PDF · Project Page · YouTube Video


This repo contains the OakInk Toolkit oikit, a Python package that provides data loading and visualization tools for the OakInk-Image and OakInk-Shape datasets.

Installation

We have tested the installation on Ubuntu with Python and PyTorch.

First, clone the repo:

$ git clone https://github.com/lixiny/OakInk.git
$ cd OakInk

There are two different ways to use oikit in your project: stand-alone and import-as-package.

stand-alone

As good practice for Python packages, we recommend working inside a conda environment.
The stand-alone mode creates an isolated conda env called oakink:

$ conda env create -f environment.yaml
$ pip install -r requirements.txt
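
Then activate the oakink env before working with oikit (standard conda usage):

$ conda activate oakink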

import-as-package (recommended)

In most cases, you will want to use oikit inside another conda env.
To be able to import oikit there, you need to:

  1. activate the destination env (we assume python, cudatoolkit, and pytorch are already installed)
  2. go to your OakInk directory and run:
$ pip install .

To verify that the installation is complete, run:

$ python -c "from oikit.oi_image import OakInkImage"

If this raises no error, you can now use oikit in this env.
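
Beyond the import check, a minimal loading sketch is shown below. The constructor call is an assumption inferred from scripts/viz_oakink_image.py (which loads data_split: all); treat it as a sketch and check that script for the exact signature:

import os

from oikit.oi_image import OakInkImage

# OakInkImage reads the dataset from $OAKINK_DIR (see "Download Dataset" below).
assert "OAKINK_DIR" in os.environ, "export OAKINK_DIR=/path/to/oakink_data first"

# Hypothetical call: data_split="all" mirrors the usage described for
# scripts/viz_oakink_image.py; verify the argument name in that script.
dataset = OakInkImage(data_split="all")

# Assuming the dataset object supports len(); check the oikit source.
print("num samples:", len(dataset))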

Download Dataset

  1. Download the OakInk dataset (containing the Image and Shape subsets) from the project site. Arrange all zip files into a folder /path/to/oakink_data/ as follows:

     .
     ├── image
     │   ├── anno.zip
     │   ├── obj.zip
     │   └── stream_zipped
     │       ├── oakink_image_v2.z01
     │       ├── ...
     │       ├── oakink_image_v2.z10
     │       └── oakink_image_v2.zip
     └── shape
         ├── metaV2.zip
         ├── OakInkObjectsV2.zip
         ├── oakink_shape_v2.zip
         └── OakInkVirtualObjectsV2.zip
    
  2. Extract the files.

  • For image/anno.zip, image/obj.zip and the shape/*.zip files, simply unzip each archive at the same level as the .zip file:
    $ unzip obj.zip
  • For the 11 split zip files in image/stream_zipped, cd into the image/ directory and run:
    $ zip -F ./stream_zipped/oakink_image_v2.zip --out single-archive.zip
    This combines the split zip files into a single .zip at image/single-archive.zip. Finally, unzip the combined archive:
    $ unzip single-archive.zip
    After all extractions are finished, your /path/to/oakink_data/ should have the following structure:
    .
    ├── image
    │   ├── anno
    │   ├── obj
    │   └── stream_release_v2
    │       ├── A01001_0001_0000
    │       ├── A01001_0001_0001
    │       ├── A01001_0001_0002
    │       ├── ....
    │
    └── shape
        ├── metaV2
        ├── OakInkObjectsV2
        ├── oakink_shape_v2
        └── OakInkVirtualObjectsV2
    
  3. Set the environment variable $OAKINK_DIR to your dataset folder:

    $ export OAKINK_DIR=/path/to/oakink_data
  4. Download mano_v1_2.zip from the MANO website, unzip it, and create a symlink in the assets folder (a quick layout check follows these steps):

    $ mkdir assets
    $ ln -s path/to/mano_v1_2 assets/
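
With these steps done, you can sanity-check the layout with a short Python sketch. The directory names are taken from the tree above; this is just a convenience check, not part of oikit:

import os

# Directory names taken from the structure shown above.
root = os.environ["OAKINK_DIR"]
expected = [
    "image/anno",
    "image/obj",
    "image/stream_release_v2",
    "shape/metaV2",
    "shape/OakInkObjectsV2",
    "shape/oakink_shape_v2",
    "shape/OakInkVirtualObjectsV2",
]
for rel in expected:
    status = "ok" if os.path.isdir(os.path.join(root, rel)) else "MISSING"
    print(status, rel)

# The MANO models live under the repo's assets/ folder (step 4), not $OAKINK_DIR.
print("mano_v1_2 present:", os.path.isdir("assets/mano_v1_2"))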

Load Dataset and Visualize

We provide three scripts that demonstrate basic data loading and visualization:

  1. Visualize the OakInk-Image set at the sequence level:

    $ python scripts/viz_oakink_image_seq.py (--help)
  2. Use OakInkImage to load data_split: all and visualize:

    $ python scripts/viz_oakink_image.py (--help)
  3. Visualize the OakInk-Shape set by object category and subject intent (a programmatic loading sketch follows this list):

    $ python scripts/viz_oakink_shape.py --categories teapot --intent_mode use (--help)
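
If you would rather load OakInk-Shape programmatically than through the script, a sketch might look like the one below. The module path oikit.oi_shape and the class name OakInkShape are assumptions by analogy with oikit.oi_image.OakInkImage, and the argument names are guessed from the script's --categories and --intent_mode flags; confirm all of them against scripts/viz_oakink_shape.py:

# Hypothetical API, named by analogy with oikit.oi_image.OakInkImage;
# verify against scripts/viz_oakink_shape.py before relying on it.
from oikit.oi_shape import OakInkShape

# Argument names guessed from the script's --categories / --intent_mode flags.
grasps = OakInkShape(category="teapot", intent_mode="use")
print("num grasps:", len(grasps))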

Citing OakInk Toolkit

If you find the OakInk dataset and oikit useful for your research, please consider citing us:

@InProceedings{YangCVPR2022OakInk,
  author    = {Yang, Lixin and Li, Kailin and Zhan, Xinyu and Wu, Fei and Xu, Anran and Liu, Liu and Lu, Cewu},
  title     = {{OakInk}: A Large-Scale Knowledge Repository for Understanding Hand-Object Interaction},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
}