Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification



  • Chuang Gan
  • Xiang Long
  • Gerard de Melo
  • Jiajun Wu
  • Xiao Liu
  • Shilei Wen

Published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018


Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of purely attention-based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets, achieving competitive results across all of them. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single-model accuracy of 79.4% top-1 and 94.0% top-5 on the validation set.
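To make the idea concrete, below is a minimal NumPy sketch of one attention cluster: several independent attention units pool the same set of local features, each unit's output passes through a learnable scale-and-shift (a stand-in for the shifting operation) followed by L2 normalization, and the unit outputs are concatenated. All names (`attention_cluster`, `W`, `alpha`, `beta`) and the exact parameterization are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def softmax(z, axis):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_cluster(x, W, alpha, beta):
    """Sketch of an attention cluster over local features.

    x:     (T, D) local features (e.g. frame-level descriptors)
    W:     (D, N) one attention weight vector per unit (N units in the cluster)
    alpha: (N, 1) learnable per-unit scale   } stand-in for the
    beta:  (N, 1) learnable per-unit shift   } shifting operation
    """
    a = softmax(x @ W, axis=0)        # (T, N): attention over time, per unit
    v = a.T @ x                       # (N, D): each unit's weighted sum of features
    v = alpha * v + beta              # scale-and-shift, diversifying the units
    v = v / np.linalg.norm(v, axis=1, keepdims=True)  # L2-normalize each unit
    return v.reshape(-1)              # concatenate units: (N * D,)

# Toy usage with random features and weights.
rng = np.random.default_rng(0)
T, D, N = 8, 16, 4
out = attention_cluster(rng.standard_normal((T, D)),
                        rng.standard_normal((D, N)),
                        np.ones((N, 1)), np.zeros((N, 1)))
print(out.shape)  # (64,)
```

The per-unit normalization keeps every unit's contribution at the same magnitude, so the shift term changes the direction of the pooled vector rather than just its scale, which is one plausible reading of why it encourages the units to attend to different signals.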

Please cite our work using the BibTeX below.

@InProceedings{long2018attention,
    author    = {Long, Xiang and Gan, Chuang and de Melo, Gerard and Wu, Jiajun and Liu, Xiao and Wen, Shilei},
    title     = {Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification},
    booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2018}
}