Learning Cross-Modal Contrastive Features for Video Domain Adaptation

ICCV 2021

Authors

Donghyun Kim, Yi-Hsuan Tsai, Bingbing Zhuang, Xiang Yu, Stan Sclaroff, Kate Saenko, Manmohan Chandraker

Published on

10/17/2021

Learning transferable and domain-adaptive feature representations from videos is important for video-related tasks such as action recognition. Existing video domain adaptation methods mainly rely on adversarial feature alignment, which was originally derived for the RGB image space. However, video data is usually associated with multi-modal information, e.g., RGB and optical flow, so it remains a challenge to design a method that better exploits cross-modal inputs under the cross-domain adaptation setting. To this end, we propose a unified framework for video domain adaptation that simultaneously regularizes cross-modal and cross-domain feature representations. Specifically, we treat each modality in a domain as a view and leverage contrastive learning with properly designed sampling strategies. As a result, our objectives regularize feature spaces that originally lack connections across modalities or have weaker alignment across domains. We conduct experiments on domain adaptive action recognition benchmarks, i.e., UCF, HMDB, and EPIC-Kitchens, and demonstrate the effectiveness of our individual components against state-of-the-art algorithms.
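
To make the idea concrete, below is a minimal sketch of a cross-modal contrastive (InfoNCE-style) objective that treats the RGB and optical-flow features of the same clip as two views forming a positive pair, with the other clips in the batch serving as negatives. This is an illustrative PyTorch approximation of the high-level idea described above, not the authors' implementation; the function name, tensor shapes, and temperature value are assumptions.

# Illustrative sketch of a cross-modal InfoNCE loss (not the paper's code).
# RGB and optical-flow features of the same clip form a positive pair;
# all other clips in the batch act as negatives.
import torch
import torch.nn.functional as F

def cross_modal_info_nce(rgb_feats: torch.Tensor,
                         flow_feats: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """rgb_feats, flow_feats: (batch, dim) features from the two modalities."""
    rgb = F.normalize(rgb_feats, dim=1)
    flow = F.normalize(flow_feats, dim=1)
    # (batch, batch) cosine similarities between every RGB/flow pair.
    logits = rgb @ flow.t() / temperature
    # Matching clips lie on the diagonal.
    targets = torch.arange(rgb.size(0), device=rgb.device)
    # Symmetric loss: RGB-to-flow and flow-to-RGB retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    # Example usage with random features for a batch of 8 clips.
    rgb = torch.randn(8, 128)
    flow = torch.randn(8, 128)
    print(cross_modal_info_nce(rgb, flow).item())

In the same spirit, the cross-domain part of the framework can be read as applying an analogous contrastive objective between source and target features, which is where the sampling strategies mentioned in the abstract come in.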

Please cite our work using the BibTeX below.

@InProceedings{Kim_2021_ICCV,
    author    = {Kim, Donghyun and Tsai, Yi-Hsuan and Zhuang, Bingbing and Yu, Xiang and Sclaroff, Stan and Saenko, Kate and Chandraker, Manmohan},
    title     = {Learning Cross-Modal Contrastive Features for Video Domain Adaptation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13618-13627}
}