Location-aware Graph Convolutional Networks for Video Question Answering

AAAI

Authors

  • Deng Huang
  • Peihao Chen
  • Runhao Zeng
  • Qing Du
  • Mingkui Tan
  • Chuang Gan

Published on

04/03/2020

We address the challenging task of video question answering, which requires machines to answer questions about videos in natural language. Previous state-of-the-art methods attempt to apply spatio-temporal attention mechanisms to video frame features without explicitly modeling the locations of, and relations among, the object interactions occurring in videos. However, the relations between object interactions and their location information are critical for both action recognition and question reasoning. In this work, we propose to represent the contents of a video as a location-aware graph by incorporating the location information of each object into the graph construction. Here, each node is associated with an object represented by its appearance and location features. Based on the constructed graph, we propose to use graph convolution to infer both the category and the temporal location of an action. Because the graph is built on objects, our method is able to focus on the foreground action content for better video question answering. Finally, we leverage an attention mechanism to combine the output of the graph convolution with the encoded question features for final answer reasoning. Extensive experiments demonstrate the effectiveness of the proposed method. In particular, our method significantly outperforms state-of-the-art methods on the TGIF-QA, Youtube2Text-QA, and MSVD-QA datasets.
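To make the pipeline concrete, below is a minimal PyTorch-style sketch of the core idea: objects become graph nodes by fusing appearance and location features, a similarity-based adjacency drives one round of graph convolution, and question-conditioned attention pools the node outputs. All module names, dimensions, and the exact graph construction here are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationAwareGCN(nn.Module):
    # Sketch: each node is an object; its feature fuses appearance
    # (e.g., RoI-pooled CNN features) with location (normalized box
    # coordinates plus frame index). Edges come from node similarity.
    def __init__(self, app_dim, loc_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Linear(app_dim + loc_dim, hidden_dim)
        self.transform = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, app_feat, loc_feat):
        # app_feat: (N, app_dim), loc_feat: (N, loc_dim) for N objects
        nodes = self.embed(torch.cat([app_feat, loc_feat], dim=-1))
        # Row-normalized adjacency from pairwise dot-product similarity
        adj = F.softmax(nodes @ nodes.t(), dim=-1)
        # One round of message passing: aggregate neighbors, then transform
        return F.relu(self.transform(adj @ nodes))

def answer_pooling(node_feat, question_vec):
    # Question-conditioned attention over graph nodes: weight each
    # object node by its relevance to the encoded question, then pool
    # into a single video representation for answer reasoning.
    scores = F.softmax(node_feat @ question_vec, dim=0)      # (N,)
    return (scores.unsqueeze(-1) * node_feat).sum(dim=0)     # (hidden_dim,)

In practice, the pooled representation would be fed, together with the question encoding, to an answer decoder; the dot-product adjacency above is one common choice of graph construction and stands in for whatever scheme the paper specifies.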

Please cite our work using the BibTeX below.

@article{Huang_Chen_Zeng_Du_Tan_Gan_2020, 
title={Location-Aware Graph Convolutional Networks for Video Question Answering}, 
volume={34}, 
url={https://ojs.aaai.org/index.php/AAAI/article/view/6737}, 
DOI={10.1609/aaai.v34i07.6737}, 
number={07}, 
journal={Proceedings of the AAAI Conference on Artificial Intelligence}, 
author={Huang, Deng and Chen, Peihao and Zeng, Runhao and Du, Qing and Tan, Mingkui and Gan, Chuang}, 
year={2020}, 
month={Apr.}, 
pages={11021-11028} 
}