
Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation

ECCV

Authors

Xingchao Peng, Yichen Li, Kate Saenko

Published on

08/28/2020

Categories

Computer Vision, ECCV

Conventional unsupervised domain adaptation (UDA) studies knowledge transfer between a limited number of domains, neglecting the more practical scenario in which real-world data are distributed across numerous different domains. A technique for measuring domain similarity is therefore critical for domain adaptation performance. To describe and learn relations between different domains, we propose a novel Domain2Vec model that provides vectorial representations of visual domains based on joint learning of feature disentanglement and the Gram matrix. To evaluate the effectiveness of our Domain2Vec model, we create two large-scale cross-domain benchmarks. The first, TinyDA, contains 54 domains and about one million MNIST-style images. The second, DomainBank, is collected from 56 existing vision datasets. We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains. Extensive experiments demonstrate the power of our new datasets for benchmarking state-of-the-art multi-source domain adaptation methods, as well as the advantage of our proposed model. Data and code are available at https://github.com/VisionLearningGroup/Domain2Vec.
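As a rough illustration of the idea described above (a Gram-matrix-based embedding used to compare domains), the sketch below shows one way such an embedding and a similarity score could be computed. This is not the authors' implementation; the feature extractor, function names, and toy data here are all illustrative assumptions.

    # Minimal sketch (assumed, not the released Domain2Vec code): embed each
    # domain via the Gram matrix of its image features, then compare domains
    # with cosine similarity between the flattened embeddings.
    import numpy as np

    def domain_embedding(features: np.ndarray) -> np.ndarray:
        """Embed a domain from its feature matrix of shape (num_images, feat_dim).

        The Gram matrix over feature dimensions captures second-order
        statistics of the domain; its upper triangle is flattened into a
        single embedding vector.
        """
        gram = features.T @ features / features.shape[0]   # (feat_dim, feat_dim)
        iu = np.triu_indices(gram.shape[0])                # keep unique entries only
        return gram[iu]

    def domain_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
        """Cosine similarity between two domain embeddings (higher = more similar)."""
        denom = np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-12
        return float(emb_a @ emb_b / denom)

    # Toy usage with two synthetic "domains" drawn from different distributions;
    # in practice the features would come from a learned, disentangled encoder.
    rng = np.random.default_rng(0)
    feats_a = rng.normal(0.0, 1.0, size=(500, 64))
    feats_b = rng.normal(0.5, 1.2, size=(500, 64))
    sim = domain_similarity(domain_embedding(feats_a), domain_embedding(feats_b))
    print(f"domain similarity: {sim:.3f}")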

This paper has been published at ECCV 2020.

Please cite our work using the BibTeX below.

@inproceedings{peng2020domain2vec,
      title={Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation},
      author={Xingchao Peng and Yichen Li and Kate Saenko},
      booktitle={European Conference on Computer Vision (ECCV)},
      year={2020}
}