Contrastive Audio-Visual Masked Autoencoder
Authors
- Yuan Gong
- Andrew Rouditchenko
- Alexander Liu
- David Harwath
- Leonid Karlinsky
- Hilde Kuehne
- James Glass
Published on
05/05/2023
Abstract
In this paper, we first extend the recent Masked Auto-Encoder (MAE) model from a single modality to audio-visual multi-modal inputs. Subsequently, we propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining contrastive learning and masked data modeling, two major self-supervised learning frameworks, to learn a joint and coordinated audio-visual representation. Our experiments show that the contrastive audio-visual correspondence learning objective not only enables the model to perform audio-visual retrieval tasks, but also helps the model learn a better joint representation. As a result, our fully self-supervised pretrained CAV-MAE achieves a new state-of-the-art (SOTA) accuracy of 65.9% on VGGSound, and is comparable to the previous best supervised pretrained model on AudioSet on the audio-visual event classification task. Code and pretrained models are available at https://github.com/yuangongnd/cav-mae.
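To make the combined objective concrete, below is a minimal PyTorch sketch of a training loss that pairs a contrastive audio-visual correspondence term with an MAE-style masked reconstruction term, as the abstract describes. All function and variable names here are illustrative assumptions, not the authors' actual implementation; see the linked repository for the real code.

```python
# Hypothetical sketch: contrastive + masked-reconstruction loss (not the official CAV-MAE code).
import torch
import torch.nn.functional as F

def cav_mae_loss(audio_emb, video_emb, pred_patches, target_patches, mask,
                 temperature=0.05, lambda_c=0.01):
    """audio_emb, video_emb: (B, D) pooled per-clip modality embeddings.
    pred_patches, target_patches: (B, N, P) decoder outputs and ground-truth patches.
    mask: (B, N) binary tensor, 1 where a patch was masked.
    temperature and lambda_c are assumed hyperparameters."""
    # Contrastive audio-visual correspondence: audio and video from the same
    # clip (the diagonal of the similarity matrix) should score higher than
    # mismatched pairs from the rest of the batch.
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    loss_c = 0.5 * (F.cross_entropy(logits, labels) +
                    F.cross_entropy(logits.t(), labels))
    # Masked data modeling: mean squared error computed on masked patches
    # only, as in the original MAE.
    per_patch = ((pred_patches - target_patches) ** 2).mean(dim=-1)  # (B, N)
    loss_r = (per_patch * mask).sum() / mask.sum().clamp(min=1)
    # Weighted sum of the two self-supervised objectives.
    return loss_r + lambda_c * loss_c
```

In this sketch the two losses are simply weighted and summed; the relative weight (here `lambda_c`) and the temperature are placeholders, and the paper itself should be consulted for the values and the exact joint/coordinated encoder design.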
Please cite our work using the BibTeX below.
@inproceedings{gong2023contrastive,
  title={Contrastive Audio-Visual Masked Autoencoder},
  author={Yuan Gong and Andrew Rouditchenko and Alexander H. Liu and David Harwath and Leonid Karlinsky and Hilde Kuehne and James R. Glass},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=QPtMRyk5rb}
}