SimVQA: Exploring Simulated Environments for Visual Question Answering

CVPR

Authors

Paola Cascante-Bonilla, Hui Wu, Letao Wang, Rogerio S. Feris, Vicente Ordonez

Published on

06/24/2022

Existing work on VQA explores data augmentation to achieve better generalization by perturbing images in the dataset or modifying existing questions and answers. While these methods exhibit good performance, the diversity of the questions and answers is constrained by the available images. In this work, we explore using synthetic computer-generated data to fully control the visual and language space, allowing us to provide more diverse scenarios. We quantify the effectiveness of leveraging synthetic data for real-world VQA. By exploiting 3D and physics simulation platforms, we provide a pipeline to generate synthetic data to expand and replace type-specific questions and answers without risking exposure of sensitive or personal data that might be present in real images. We offer a comprehensive analysis while expanding existing hyper-realistic datasets to be used for VQA. We also propose Feature Swapping (F-SWAP), where we randomly switch object-level features during training to make a VQA model more domain invariant. We show that F-SWAP is effective for improving VQA models on real images without compromising their accuracy in answering existing questions in the dataset.
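To make the idea behind F-SWAP concrete, here is a minimal PyTorch sketch of randomly exchanging object-level region features between a real and a synthetic batch during training. The function name, tensor shapes, and swap probability are illustrative assumptions, not the paper's actual implementation; among other simplifications, this sketch swaps features at random object slots rather than reproducing whatever matching scheme the full method uses.

import torch

def feature_swap(real_feats, syn_feats, p=0.5):
    # Illustrative sketch of object-level feature swapping (hypothetical
    # helper, not the authors' code).
    # real_feats, syn_feats: (batch, num_objects, feat_dim) region features
    # from a real and a synthetic image batch of matching shape.
    # With probability p, each object slot exchanges its feature vector
    # across the two domains, encouraging domain-invariant representations.
    assert real_feats.shape == syn_feats.shape
    b, n, _ = real_feats.shape
    mask = (torch.rand(b, n, device=real_feats.device) < p).unsqueeze(-1)
    swapped_real = torch.where(mask, syn_feats, real_feats)
    swapped_syn = torch.where(mask, real_feats, syn_feats)
    return swapped_real, swapped_syn

In a training loop, the swapped features would replace the originals before being fed to the VQA model's downstream layers; no swapping would be applied at test time.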

Please cite our work using the BibTeX below.

@InProceedings{Cascante-Bonilla_2022_CVPR,
    author    = {Cascante-Bonilla, Paola and Wu, Hui and Wang, Letao and Feris, Rogerio S. and Ordonez, Vicente},
    title     = {SimVQA: Exploring Simulated Environments for Visual Question Answering},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {5056-5066}
}