
Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering

Computer Vision

Authors

  • Xiangpeng Li
  • Jingkuan Song
  • Lianli Gao
  • Xianglong Liu
  • Wenbing Huang
  • Xiangnan He
  • Chuang Gan

Published on

07/17/2019

Most of the recent progress on visual question answering is based on recurrent neural networks (RNNs) with attention. Despite this success, these models are often time-consuming and have difficulty modeling long-range dependencies due to the sequential nature of RNNs. We propose a new architecture, Positional Self-Attention with Co-attention (PSAC), which does not require RNNs for video question answering. Specifically, inspired by the success of self-attention in the machine translation task, we propose a Positional Self-Attention that calculates the response at each position by attending to all positions within the same sequence, and then adds representations of absolute positions. PSAC can therefore exploit the global dependencies of the question and the temporal information in the video, and encode the question and the video in parallel. Furthermore, in addition to attending to the video features relevant to the given question (i.e., video attention), we utilize a co-attention mechanism that simultaneously models "what words to listen to" (question attention). To the best of our knowledge, this is the first work to replace RNNs with self-attention for the task of visual question answering. Experimental results on four tasks from the benchmark dataset show that our model significantly outperforms the state-of-the-art on three tasks and attains comparable results on the Count task. Our model requires less computation time and achieves better performance than RNN-based methods. An additional ablation study demonstrates the effect of each component of our proposed model.
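The two core ideas, self-attention augmented with absolute-position embeddings and a symmetric co-attention between video and question, lend themselves to a compact sketch. Below is a minimal, illustrative PyTorch version; the module names, feature dimensions, head counts, and sequence lengths are hypothetical assumptions chosen for clarity, not the authors' released implementation.

# Minimal sketch of positional self-attention and co-attention (PyTorch).
# All names, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class PositionalSelfAttention(nn.Module):
    """Self-attention over one sequence plus learned absolute-position embeddings."""
    def __init__(self, dim=256, num_heads=8, max_len=512):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, dim)  # representations of absolute positions
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, dim). Each position attends to all positions,
        # so long-range dependencies are captured without recurrence.
        pos = torch.arange(x.size(1), device=x.device)
        x = x + self.pos_emb(pos)        # add absolute-position information
        out, _ = self.attn(x, x, x)      # full-sequence self-attention
        return out

class CoAttention(nn.Module):
    """Question-guided attention over frames, and video-guided attention over words."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.video_att = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.question_att = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, video, question):
        # Video attention: attend to the video features relevant to the question.
        v, _ = self.video_att(question, video, video)
        # Question attention: "what words to listen to", guided by the video.
        q, _ = self.question_att(video, question, question)
        return v, q

# Example usage with hypothetical shapes: 30 frames, 12 words, 256-d features.
video = torch.randn(2, 30, 256)
question = torch.randn(2, 12, 256)
v_enc = PositionalSelfAttention()(video)     # video and question encoding can
q_enc = PositionalSelfAttention()(question)  # run in parallel (no recurrence)
v_att, q_att = CoAttention()(v_enc, q_enc)

Because neither encoder carries a hidden state across time steps, the video and question branches have no sequential dependency, which is the source of the speed-up over RNN-based pipelines claimed in the abstract.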

Please cite our work using the BibTeX below.

@article{li2019beyond,
  author  = {Li, Xiangpeng and Song, Jingkuan and Gao, Lianli and Liu, Xianglong and Huang, Wenbing and He, Xiangnan and Gan, Chuang},
  title   = {Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering},
  journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
  volume  = {33},
  pages   = {8658--8665},
  year    = {2019},
  month   = {7},
  doi     = {10.1609/aaai.v33i01.33018658}
}