
Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing

NeurIPS

Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das

Unsupervised domain adaptation, which aims to adapt models trained on a labeled source domain to a completely unlabeled target domain, has attracted much attention in recent years. While many domain adaptation techniques have been proposed for images, the problem of unsupervised domain adaptation in videos remains largely underexplored. In this paper, we introduce Contrast and Mix (CoMix), a new contrastive learning framework that aims to learn discriminative invariant feature representations for unsupervised video domain adaptation. First, unlike existing methods that rely on adversarial learning for feature alignment, we utilize temporal contrastive learning to bridge the domain gap by maximizing the similarity between encoded representations of the same unlabeled video played at two different speeds, while minimizing the similarity between different videos played at different speeds. Second, we propose a novel extension to the temporal contrastive loss that uses background mixing to provide additional positives per anchor, thus adapting contrastive learning to leverage action semantics shared across both domains. Moreover, we integrate a supervised contrastive learning objective using target pseudo-labels to enhance the discriminability of the latent space for video domain adaptation. Extensive experiments on several benchmark datasets demonstrate the superiority of our proposed approach over state-of-the-art methods.
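To illustrate the temporal contrastive objective described above, the sketch below (in PyTorch) treats embeddings of the same video sampled at two playback speeds as a positive pair and all other clips in the batch as negatives. This is a minimal illustration, not the authors' released implementation; the function name, the temperature value, and the symmetric InfoNCE-style formulation are assumptions made for clarity.

import torch
import torch.nn.functional as F

def temporal_contrastive_loss(z_fast: torch.Tensor,
                              z_slow: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    # z_fast, z_slow: (B, D) embeddings of the same B videos, encoded
    # at a fast and a slow playback speed respectively.
    z_fast = F.normalize(z_fast, dim=1)
    z_slow = F.normalize(z_slow, dim=1)
    # (B, B) cosine-similarity logits between every fast/slow clip pair.
    logits = z_fast @ z_slow.t() / temperature
    # The diagonal holds the matched same-video pairs (the positives);
    # every off-diagonal pair acts as a negative.
    labels = torch.arange(z_fast.size(0), device=z_fast.device)
    # Symmetric cross-entropy over both pairing directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

A hypothetical call would be loss = temporal_contrastive_loss(encoder(clips_fast), encoder(clips_slow)), where encoder is any video backbone producing fixed-length clip embeddings. The paper's background mixing and pseudo-label objectives extend this loss with additional positives and are not shown here.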

This paper was published at NeurIPS 2021.

Please cite our work using the BibTeX below.

@inproceedings{sahoo2021contrast,
  title     = {Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing},
  author    = {Aadarsh Sahoo and Rutav Shah and Rameswar Panda and Kate Saenko and Abir Das},
  booktitle = {Thirty-Fifth Conference on Neural Information Processing Systems},
  year      = {2021},
  url       = {https://openreview.net/forum?id=a1wQOh27zcy}
}