Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition

Authors: Chun-Fu Chen, Rameswar Panda, Kandan Ramakrishnan, Rogerio Feris, John Cohn, Aude Oliva, Quanfu Fan

Published on: 10/22/2020

Categories: Computer Vision, CVPR

In recent years, a number of approaches based on 2D or 3D convolutional neural networks (CNNs) have emerged for video action recognition, achieving state-of-the-art results on several large-scale benchmark datasets. In this paper, we carry out an in-depth comparative analysis to better understand the differences between these approaches and the progress they have made. To this end, we develop a unified framework for both 2D-CNN and 3D-CNN action models, which enables us to remove bells and whistles and provides a common ground for fair comparison. We then conduct a large-scale analysis involving over 300 action recognition models. Our comprehensive analysis reveals that a) a significant leap has been made in efficiency for action recognition, but not in accuracy; and b) 2D-CNN and 3D-CNN models behave similarly in terms of spatio-temporal representation ability and transferability.
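To make the 2D-CNN vs. 3D-CNN distinction concrete: the two families differ mainly in how they aggregate information over time. The paper's actual unified framework is not reproduced here; the following is a minimal PyTorch sketch with toy backbones (the class names, channel sizes, and `num_classes` value are illustrative assumptions, not the authors' configuration). The 2D model applies the same 2D backbone to each frame and averages features over time (TSN-style), while the 3D model convolves jointly over time and space.

```python
import torch
import torch.nn as nn

class Simple2DActionModel(nn.Module):
    """2D-CNN baseline: per-frame features + temporal average pooling."""
    def __init__(self, num_classes=174):  # 174 is illustrative (e.g. Something-Something)
        super().__init__()
        self.backbone = nn.Sequential(    # toy stand-in for a real 2D backbone (e.g. ResNet)
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):                 # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))        # fold time into batch: (B*T, 16)
        return self.fc(feats.view(b, t, -1).mean(1))  # average features over time

class Simple3DActionModel(nn.Module):
    """3D-CNN baseline: spatio-temporal convolution over the whole clip."""
    def __init__(self, num_classes=174):
        super().__init__()
        self.backbone = nn.Sequential(    # toy stand-in for a real 3D backbone (e.g. I3D)
            nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):                 # x: (B, T, C, H, W)
        # Conv3d expects (B, C, T, H, W), so swap the time and channel axes
        return self.fc(self.backbone(x.transpose(1, 2)))

clip = torch.randn(2, 8, 3, 32, 32)       # batch of 2 clips, 8 RGB frames of 32x32
print(Simple2DActionModel()(clip).shape)  # both models map a clip to class logits
print(Simple3DActionModel()(clip).shape)
```

Both models consume the same clip tensor and emit logits of the same shape, which is the kind of common interface that makes an apples-to-apples comparison of the two families possible.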

This paper has been published at CVPR 2021.

Please cite our work using the BibTeX below.

@misc{chen2021deep,
      title={Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition}, 
      author={Chun-Fu Chen and Rameswar Panda and Kandan Ramakrishnan and Rogerio Feris and John Cohn and Aude Oliva and Quanfu Fan},
      year={2021},
      eprint={2010.11757},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}