PS-VAEs


Partially-Shared Variational Auto Encoders

This is the PyTorch implementation of Partially-Shared Variational Auto-Encoders (PS-VAEs) for pose estimation and digit classification. The code is written by Ryuhei Takahashi and Atsushi Hashimoto. The work was accepted to ECCV 2020 as a poster.

Pose Estimation Examples

What is Target Shift, and Why Is It a Problem?

Distribution deformation caused by adversarial training under different label distributions between the domains.

Target shift, also known as prior distribution shift, is a shift in the label distribution p(y) between the source and target domains.

In general unsupervised domain adaptation (UDA) settings, the shape of the label distribution in the target domain is unknown because the labels are inaccessible. Hence, it cannot be guaranteed that the label distributions of the source and target domains are identical. Despite this fact, many UDA methods rely on adversarial training with a simple domain discriminator. Such methods try to match the shapes of the two feature distributions. Because they use a common classifier/regressor between domains, such a method always forces the model to reproduce the source label distribution on the target domain dataset. In other words, it implicitly assumes identical label distributions between the two domains.
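The problem above can be illustrated with a toy example (a hypothetical sketch, not part of this repository): the class-conditional feature distribution p(x|y) is identical across domains, but because the label prior p(y) differs, the marginal feature distributions p(x) differ too, so naively aligning them with a domain discriminator distorts the label-conditional structure rather than closing the domain gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Different label priors p(y) between the domains (this is target shift).
p_y_source = np.array([0.5, 0.3, 0.2])
p_y_target = np.array([0.1, 0.3, 0.6])

n = 10_000
y_src = rng.choice(3, size=n, p=p_y_source)
y_tgt = rng.choice(3, size=n, p=p_y_target)

# The class-conditional p(x|y) = N(mu_y, 1) is shared by both domains.
mus = np.array([-2.0, 0.0, 2.0])
x_src = rng.normal(mus[y_src], 1.0)
x_tgt = rng.normal(mus[y_tgt], 1.0)

# Empirical label priors differ, so the feature marginals p(x) differ as well,
# even though nothing about p(x|y) is domain-specific.
emp_src = np.bincount(y_src, minlength=3) / n
emp_tgt = np.bincount(y_tgt, minlength=3) / n
```

Forcing `x_src` and `x_tgt` to have matching distributions would therefore move samples across class boundaries, which is exactly the deformation shown in the figure above.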

Other methods do not rely on such adversarial training, but often assume the existence of category boundaries or use entropy-based importance-weight calculation; these techniques are applicable only to classification problems, not to regression.

The proposed method, PS-VAEs, is the only method that is applicable to both classification and regression problems under target shift without relying on any prior knowledge of a domain-invariant sample similarity metric.

Network Architecture and Algorithm

The entire network architecture.

Partially-shared encoder and decoder for label-preserving domain conversion.
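The core structural idea (a minimal NumPy sketch under assumed dimensions, not the authors' implementation) is that most encoder weights are shared between domains while each domain keeps a small private head, so the shared part of the latent code carries label information across domains and the private part absorbs domain-specific appearance:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID, D_SHARED, D_PRIVATE = 8, 16, 4, 2  # toy sizes, chosen arbitrarily

# Shared encoder trunk, used by both domains.
W_shared = rng.normal(size=(D_IN, D_HID))
# Shared projection to the label-relevant latent part.
W_label = rng.normal(size=(D_HID, D_SHARED))
# Domain-private projection heads.
W_private = {
    "source": rng.normal(size=(D_HID, D_PRIVATE)),
    "target": rng.normal(size=(D_HID, D_PRIVATE)),
}

def encode(x, domain):
    h = np.tanh(x @ W_shared)          # shared computation
    z_shared = h @ W_label             # domain-invariant, label-preserving part
    z_private = h @ W_private[domain]  # domain-specific part
    return z_shared, z_private

x = rng.normal(size=(5, D_IN))
zs_src, zp_src = encode(x, "source")
zs_tgt, zp_tgt = encode(x, "target")
# For the same input, the shared latent is identical across domains;
# only the private part differs.
```

A mirrored decoder with the same sharing pattern allows label-preserving domain conversion: decode the shared part with the other domain's private head.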

Experimental Results

Pose Estimation

Accuracy in pixels.

Joint-wise accuracy with a threshold of 10 pixels.
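The metric in the figure can be computed as follows (a hypothetical helper, assuming predictions and ground truth are given as pixel coordinates; the function name and shapes are our own):

```python
import numpy as np

def joint_accuracy(pred, gt, thresh=10.0):
    """Per-joint accuracy: a predicted joint counts as correct when it lies
    within `thresh` pixels of the ground truth.

    pred, gt: arrays of shape (n_samples, n_joints, 2), in pixels.
    Returns an array of shape (n_joints,).
    """
    dists = np.linalg.norm(pred - gt, axis=-1)  # (n_samples, n_joints)
    return (dists <= thresh).mean(axis=0)

# Toy check: 4 samples, 3 joints; one joint-0 prediction is 30 px off.
gt = np.zeros((4, 3, 2))
pred = gt.copy()
pred[0, 0] = [30.0, 0.0]
acc = joint_accuracy(pred, gt)  # joint 0: 3/4 correct, joints 1-2: all correct
```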

Digit Classification

Quantitative evaluation on digit UDA tasks. Note that X% represents the strength of the target shift (larger is stronger).
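One way such a shift of strength X% could be induced on an otherwise balanced digit dataset (a hypothetical sketch; the actual protocol used in the paper may differ) is to keep all samples of some classes and drop X% of the samples of the remaining classes, skewing p(y) in one domain:

```python
import numpy as np

def apply_target_shift(labels, shift_pct, rng):
    """Return a boolean mask keeping all samples of classes 0-4 and only
    (100 - shift_pct)% of the samples of classes 5-9 (illustrative choice)."""
    keep = np.ones(len(labels), dtype=bool)
    for c in range(5, 10):
        idx = np.flatnonzero(labels == c)
        n_drop = int(len(idx) * shift_pct / 100)
        keep[rng.choice(idx, size=n_drop, replace=False)] = False
    return keep

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)     # balanced toy label set
keep = apply_target_shift(labels, 60, rng)  # 60% shift
counts = np.bincount(labels[keep], minlength=10)
# classes 0-4 keep 100 samples each; classes 5-9 keep 40 each
```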

Get Started (digit classification task)

dataset creation

% cd PS-VAEs/datasets
% bash make_digit_datasets.sh

train and test

% cd PS-VAEs
% bash digit_classification_task.sh
  • Edit digit_classification_task.sh directly to train models on different source-target pairs.

Get Started (pose estimation task)

dataset preparation

  1. download the source dataset (syn)

  2. download 171204_pose in the CMU Panoptic Dataset and put it in PS-VAEs/datasets/syn/

  3. divide 171204_pose into train/test sets with the following script

    % cd ./PS-VAEs/datasets
    % mkdir rea
    % bash make_pose_datasets.sh {panoptic_dataset_dir}/171204_pose1 rea

  4. train and test

    % cd ./PS-VAEs
    % bash pose_estimation.sh

Pose Dataset

Paper and Citation

  • The paper on arXiv
  • Citation Info (If you use this code for your research, please cite our paper.)
@InProceedings{takahashi2020partially,
  title={Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift},
  author={Takahashi, Ryuhei and Hashimoto, Atsushi and Sonogashira, Motoharu and Iiyama, Masaaki},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  year={2020}
}

License

The code in this repository is published under the MIT License.