This approach predicts samples together with their labels from unseen, future domains. Other approaches (e.g., from unsupervised domain adaptation) can only label samples from (unseen) domains. The method learns a domain transformer that maps samples between target domains.
Paper: 'Domain Transformer: Predicting Samples of Unseen, Future Domains' by Johannes Schneider, accepted at IJCNN, 2022.
PDF: https://arxiv.org/abs/2106.06057
License: Use it however you like, but please cite the paper :-)
The source code is written in PyTorch. Computation takes a while. Run "runExp.py" to reproduce the experiments.