This repository contains the data and code for the ECCV 2020 paper *Segmenting Transparent Objects in the Wild*.
To download the data, please refer to the Trans10K Website.
- python 3
- torch == 1.1.0 (versions newer than 1.1.0 cause a performance drop; we have not found the reason)
- torchvision
- pyyaml
- Pillow
- numpy
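The torch version pin above matters: releases newer than 1.1.0 were observed to cause a performance drop. The helper below is a small sketch we add for convenience (not part of this repo) that checks an installed version string against the pin:

```python
def is_pinned_torch(version: str) -> bool:
    """Return True if `version` is the pinned 1.1.0 release.

    Local build tags such as '1.1.0+cu100' or '1.1.0.post2' still count,
    since only the base release number is pinned.
    """
    base = version.split("+")[0]  # drop a CUDA/local build suffix
    return base.split(".")[:3] == ["1", "1", "0"]


try:
    import torch
    if not is_pinned_torch(torch.__version__):
        print("Warning: torch %s found; 1.1.0 is required to "
              "reproduce the reported numbers." % torch.__version__)
except ImportError:
    print("torch is not installed; install it with: pip install torch==1.1.0")
```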
python setup.py develop
We provide the trained models and logs for TransLab on Google Drive.
- Put the input images in './demo/imgs'.
- Download the trained model from Google Drive and save it as './demo/16.pth'.
- Run this script:
CUDA_VISIBLE_DEVICES=0 python -u ./tools/test_demo.py --config-file configs/trans10K/translab.yaml TEST.TEST_MODEL_PATH ./demo/16.pth DEMO_DIR ./demo/imgs
- The results are generated in './demo/results'.
- Create the directory './datasets/Trans10K'.
- Download the data from the Trans10K Website.
- Put the train/validation/test data under './datasets/Trans10K'. The data structure is shown below.
Trans10K/
├── test
│ ├── easy
│ └── hard
├── train
│ ├── images
│ └── masks
└── validation
├── easy
└── hard
Pretrained backbone models are downloaded automatically to the PyTorch default directory ('~/.cache/torch/checkpoints/').
Our experiments run on one machine with 8 V100 GPUs (32 GB memory each). If you encounter out-of-memory errors, try the 'batchsize=4' version.
bash tools/dist_train.sh configs/trans10K/translab.yaml 8 TRAIN.MODEL_SAVE_DIR workdirs/translab_bs8
bash tools/dist_train.sh configs/trans10K/translab_bs4.yaml 8 TRAIN.MODEL_SAVE_DIR workdirs/translab_bs4
For example (batchsize=8):
CUDA_VISIBLE_DEVICES=0 python -u ./tools/test_translab.py --config-file configs/trans10K/translab.yaml TEST.TEST_MODEL_PATH workdirs/translab_bs8/16.pth
For academic use, this project is licensed under the Apache License; see the LICENSE file for details. For commercial use, please contact the authors.
Please consider citing our paper in your publications if this project helps your research. The BibTeX reference is as follows.
@article{xie2020segmenting,
  title={Segmenting Transparent Objects in the Wild},
  author={Xie, Enze and Wang, Wenjia and Wang, Wenhai and Ding, Mingyu and Shen, Chunhua and Luo, Ping},
  journal={arXiv preprint arXiv:2003.13948},
  year={2020}
}