Official code for the paper "Pose2Seg: Detection Free Human Instance Segmentation" [ProjectPage] [arXiv] @ CVPR2019.
The OCHuman dataset proposed in our paper is released here.
pip install cython matplotlib tqdm opencv-python scipy pyyaml numpy
pip install torchvision torch
Then build pycocotools from your local clone of the COCO API (https://github.com/cocodataset/cocoapi); adjust the path below to match your clone:
cd ~/github-public/cocoapi/PythonAPI/
python setup.py build_ext install
cd -
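As an optional sanity check (a minimal sketch, not part of the repo), you can confirm that PyTorch and the freshly built pycocotools import correctly and whether CUDA is visible:

```python
# Optional sanity check: both packages import and CUDA is visible.
import torch
from pycocotools.coco import COCO  # noqa: F401  (import check only)

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```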
Download the following datasets:
- COCO 2017
- OCHuman
Note: person_keypoints_(train/val)2017_pose2seg.json is a subset of person_keypoints_(train/val)2017.json (from the COCO 2017 Train/Val annotations). We kept only those instances that have both keypoint and segmentation annotations for our experiments.
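For reference, a subset like this could be rebuilt from the full COCO annotations along the following lines. This is only a minimal sketch using pycocotools: it assumes the full person_keypoints_val2017.json sits under data/coco2017/annotations, and the exact filtering criteria of the released files may differ, so prefer downloading the provided JSONs.

```python
# Minimal sketch: keep only person instances that carry both keypoint and
# segmentation annotations. The released files may use different criteria.
import json
from pycocotools.coco import COCO

coco = COCO("data/coco2017/annotations/person_keypoints_val2017.json")
keep = [
    ann for ann in coco.dataset["annotations"]
    if ann.get("num_keypoints", 0) > 0 and ann.get("segmentation")
]
subset = dict(coco.dataset, annotations=keep)

with open("person_keypoints_val2017_pose2seg_rebuilt.json", "w") as f:
    json.dump(subset, f)
```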
The data folder should be structured like this:
data
├── coco2017
│ ├── annotations
│ │ ├── person_keypoints_train2017_pose2seg.json
│ │ ├── person_keypoints_val2017_pose2seg.json
│ ├── train2017
│ │ ├── ####.jpg
│ ├── val2017
│ │ ├── ####.jpg
├── OCHuman
│ ├── annotations
│ │ ├── ochuman_coco_format_test_range_0.00_1.00.json
│ │ ├── ochuman_coco_format_val_range_0.00_1.00.json
│ ├── images
│ │ ├── ####.jpg
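Once the files are in place, a quick way to sanity-check the layout is to load one of the annotation files with pycocotools (a minimal sketch; the paths are the ones from the tree above):

```python
# Minimal sketch: verify the data layout by loading the OCHuman val
# annotations (COCO format) and printing basic counts.
from pycocotools.coco import COCO

ochuman = COCO("data/OCHuman/annotations/ochuman_coco_format_val_range_0.00_1.00.json")
print(len(ochuman.getImgIds()), "images,", len(ochuman.getAnnIds()), "annotations")
```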
python train.py
Note: Currently we only support single-GPU training.
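If you train on a machine with several GPUs, one common workaround (a sketch only, not part of the repo) is to make a single device visible before launching the script, either by setting CUDA_VISIBLE_DEVICES in the shell or at the very top of train.py; the device index below is just an example:

```python
# Sketch only: pin training to a single GPU (index "0" is an example).
# This must run before any CUDA context is created, e.g. at the top of train.py.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```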
This allows you to test the model on (1) the COCOPersons val set and (2) the OCHuman val & test sets.
python test.py --weights last.pkl --coco --OCHuman
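For reference, COCO-style segmentation AP on these datasets can be computed with pycocotools along the lines below. This is a hedged sketch: my_results.json is a hypothetical detections file in COCO result format, and test.py has its own evaluation path.

```python
# Sketch: COCO-style segmentation AP with pycocotools.
# "my_results.json" is a hypothetical results file in COCO result format.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("data/OCHuman/annotations/ochuman_coco_format_val_range_0.00_1.00.json")
dt = gt.loadRes("my_results.json")

evaluator = COCOeval(gt, dt, iouType="segm")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
```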
We retrained our model using this repo and obtained results similar to those reported in our paper. The final weights can be downloaded here.
This repo already contains the template file modeling/templates.json used in our paper, but you are free to explore different cluster parameters as discussed in the paper. See visualize_cluster.ipynb for an example.
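If you want to inspect the shipped templates before experimenting with your own clustering, a minimal sketch is to load the JSON and look at its structure; the schema is defined by the repo, so only generic inspection is shown here:

```python
# Sketch: inspect modeling/templates.json without assuming its schema.
import json

with open("modeling/templates.json") as f:
    templates = json.load(f)

if isinstance(templates, dict):
    print("top-level keys:", list(templates.keys()))
else:
    print("number of templates:", len(templates))
```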