Separating Skills and Concepts for Novel Visual Question Answering

CVPR

Authors

Spencer Whitehead, Hui Wu, Heng Ji, Rogerio Feris, Kate Saenko

Published on

06/25/2021

Categories

CVPR

Generalization to out-of-distribution data has been a problem for Visual Question Answering (VQA) models. To measure generalization to novel questions, we propose separating them into “skills” and “concepts”. “Skills” are visual tasks, such as counting or attribute recognition, which are applied to “concepts” mentioned in the question, such as objects and people. VQA methods should be able to compose skills and concepts in novel ways, regardless of whether the specific composition has been seen in training, yet we demonstrate that existing models have substantial room for improvement in handling new compositions. We present a novel method for learning to compose skills and concepts that separates these two factors implicitly within a model, by learning grounded concept representations and by disentangling the encoding of skills from that of concepts. We enforce these properties with a novel contrastive learning procedure that does not rely on external annotations and can be learned from unlabeled image-question pairs. Experiments demonstrate the effectiveness of our approach for improving both compositional and grounding performance.
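
The contrastive procedure itself is not spelled out on this page, so the following is only a rough sketch of the general idea: grounding each question's concept representation against features from its paired image using in-batch negatives, with no external annotations. It is written in PyTorch; the function name, the symmetric InfoNCE form, and the temperature value are illustrative assumptions, not the paper's actual objective.

import torch
import torch.nn.functional as F

def concept_grounding_loss(concept_embs: torch.Tensor,
                           image_embs: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss over a batch of matched (concept, image) pairs.

    concept_embs: (B, D) embeddings of concept words from the questions.
    image_embs:   (B, D) pooled visual features of the paired images.
    """
    c = F.normalize(concept_embs, dim=-1)
    v = F.normalize(image_embs, dim=-1)

    # Similarity of every concept to every image in the batch; the diagonal
    # holds the matched (positive) pairs, off-diagonals serve as negatives.
    logits = c @ v.t() / temperature
    targets = torch.arange(c.size(0), device=logits.device)

    # Symmetric loss: match concepts to images and images to concepts.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for real encoder outputs.
concepts = torch.randn(32, 256, requires_grad=True)
images = torch.randn(32, 256, requires_grad=True)
loss = concept_grounding_loss(concepts, images)
loss.backward()  # in a real loop this would update the two encoders

In the full method, a grounding term of this kind would sit alongside the skill-concept disentanglement described above; see the paper for the precise losses.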

This paper has been published at CVPR 2021.

Please cite our work using the BibTeX below.

@InProceedings{Whitehead_2021_CVPR,
    author    = {Whitehead, Spencer and Wu, Hui and Ji, Heng and Feris, Rogerio and Saenko, Kate},
    title     = {Separating Skills and Concepts for Novel Visual Question Answering},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5632--5641}
}