This repository contains the code for our foreground-aware stylization (FgSty) and consensus pseudo-labeling (CPL), as well as the synthesized dataset used in our experiments, ObMan-Ego (see DATASET.md). If you have any requests or questions, please contact the first author.
Foreground-Aware Stylization and Consensus Pseudo-Labeling for Domain Adaptation of First-Person Hand Segmentation
Takehiko Ohkawa, Takuma Yagi, Atsushi Hashimoto, Yoshitaka Ushiku, and Yoichi Sato
IEEE Access, 2021
Project page: https://tkhkaeio.github.io/projects/21_FgSty-CPL/
Python 3.7
PyTorch 1.6.0
Data directory structure should be:

- root / source-dataset (e.g., EGTEA, Ego2Hands, ObMan-Ego)
  - train
  - trainannot (segmentation masks)
  - test
  - testannot (segmentation masks)
- root / target-datasets (e.g., GTEA, EDSH12, EDSH1K, UTG, YHG)
  - train
  - trainannot (segmentation masks)
  - test
  - testannot (segmentation masks)
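Before training, it can help to verify that each dataset root follows this layout. A minimal check (the helper name is ours, not part of the repo):

```python
from pathlib import Path

EXPECTED_SUBDIRS = ["train", "trainannot", "test", "testannot"]

def missing_subdirs(dataset_root):
    """Return the expected subdirectories that are absent under dataset_root."""
    root = Path(dataset_root)
    return [d for d in EXPECTED_SUBDIRS if not (root / d).is_dir()]
```

An empty return value means the dataset directory is complete.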
To stylize a single image, move to the stylization directory

cd FgSty

and run

python test.py --model /path/to/your/model \
  --content_image_path /path/to/your/content-image \
  --content_seg_path /path/to/your/content-mask \
  --style_image_path /path/to/your/style-image \
  --style_seg_path /path/to/your/style-mask \
  --output_image_path /path/to/your/output
(To stylize the foreground only, see FgSty-Only.md.)
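Conceptually, a foreground-only result can be obtained by compositing the stylized image over the original frame using the hand segmentation mask. A NumPy sketch of that compositing step (the function name and blending rule are our illustration, not the repo's code):

```python
import numpy as np

def composite_foreground(stylized, original, mask):
    """Keep stylized pixels inside the hand mask, original pixels elsewhere.

    stylized, original: float arrays of shape (H, W, 3)
    mask: binary array of shape (H, W), 1 on the hand region
    """
    m = mask.astype(np.float32)[..., None]  # broadcast mask over RGB channels
    return stylized * m + original * (1.0 - m)
```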
For stylizing in batches:

- cd FgSty and specify your data root directory in make_script_rand.py.
- Run python make_script_rand.py to create files with arguments for stylization.
- Run ./scripts/EGTEA_v1_test_part00x.sh
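The idea behind make_script_rand.py is to pair each content frame with a randomly chosen style image and write the resulting test.py commands into shell scripts. A simplified sketch of that idea (the file patterns, output paths, and script layout are assumptions; the actual script may differ):

```python
import random
from pathlib import Path

def write_stylize_script(content_dir, style_dir, out_path, seed=0):
    """Emit one 'python test.py ...' line per content image, each paired
    with a randomly chosen style image."""
    rng = random.Random(seed)
    contents = sorted(Path(content_dir).glob("*.png"))
    styles = sorted(Path(style_dir).glob("*.png"))
    lines = ["#!/bin/bash"]
    for c in contents:
        s = rng.choice(styles)
        lines.append(
            f"python test.py --content_image_path {c} --style_image_path {s} "
            f"--output_image_path outputs/{c.name}"
        )
    Path(out_path).write_text("\n".join(lines) + "\n")
```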
- Download the pretrained RefineNet models from [here] and place them in CPL/pretrained_models.
- cd CPL and run

python train_refinenet.py --dataset /path/to/your/dataset

for naive training on a single dataset, or run

python train_refinenet_CPL.py --dataset /path/to/your/style-adapted-dataset \
  --src_dataset /path/to/your/source-dataset \
  --trg_dataset /path/to/your/target-dataset \
  --src_model_path /path/to/your/pretrained-source-model \
  --eta 0

for adaptation training based on the consensus scheme without adversarial adaptation.

Note: CPL training requires two GPUs.
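The consensus scheme keeps a pseudo-label only at pixels where two models agree, and marks disagreeing pixels so the loss can ignore them. A minimal NumPy sketch of that selection rule (the shapes and the ignore-index convention are our assumptions, not the repo's exact implementation):

```python
import numpy as np

def consensus_pseudo_labels(probs_a, probs_b, ignore_index=255):
    """Per-pixel pseudo-labels where two models' argmax predictions agree.

    probs_a, probs_b: class probabilities of shape (H, W, C)
    Disagreeing pixels receive ignore_index so the training loss skips them.
    """
    labels_a = probs_a.argmax(axis=-1)
    labels_b = probs_b.argmax(axis=-1)
    return np.where(labels_a == labels_b, labels_a, ignore_index)
```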
- cd CPL and specify your data root directory in test_refinenet.py.
- Run

python test_refinenet.py --dataset /path/to/your/target-dataset \
  --model_path /path/to/your/test-model
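Hand-segmentation quality on the target set is commonly reported as intersection-over-union between the predicted and ground-truth masks. A self-contained sketch of that metric (our illustration, not the repo's evaluation code):

```python
import numpy as np

def hand_iou(pred_mask, gt_mask):
    """IoU between two binary masks; defined as 1.0 when both are empty."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```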
FastPhotoStyle: https://github.com/NVIDIA/FastPhotoStyle
RefineNet: https://github.com/DrSleep/refinenet-pytorch
UMA: https://github.com/cai-mj/UMA