
TSM: Temporal Shift Module for Efficient Video Understanding

Efficient AI

Authors

Ji Lin, Chuang Gan, Song Han

Published on

11/20/2018

The explosive growth in video streaming gives rise to challenges in efficiently extracting spatial-temporal information for video understanding at low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D-CNN-based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNNs while maintaining a 2D CNN's complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero extra computation and zero extra parameters. We also extend TSM to the online setting, which enables real-time, low-latency online video recognition. On the Something-Something-V1 dataset, which focuses on temporal modeling, we achieve better results than the I3D family and the ECO family while using 6X and 2.7X fewer FLOPs, respectively. Measured on a P100 GPU, our single model achieves 1.8% higher accuracy at 9.5X lower latency and 12.7X higher throughput compared to I3D.
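To make the shift operation concrete, below is a minimal PyTorch sketch of the bidirectional (offline) shift described above, together with its uni-directional online counterpart. This is an illustration under assumptions, not the official implementation; the names temporal_shift, online_temporal_shift, n_segment, and shift_div are illustrative.

import torch

def temporal_shift(x, n_segment, shift_div=8):
    """Offline (bidirectional) shift: x has shape (N*T, C, H, W), with T = n_segment."""
    nt, c, h, w = x.size()
    n = nt // n_segment
    x = x.view(n, n_segment, c, h, w)
    fold = c // shift_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # pull channels from the next frame
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # pull channels from the previous frame
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # leave the remaining channels in place
    return out.view(nt, c, h, w)

def online_temporal_shift(x, cache, shift_div=8):
    """Online (uni-directional) shift for one frame x of shape (1, C, H, W).
    `cache` holds the first C // shift_div channels saved from the previous frame."""
    fold = x.size(1) // shift_div
    out = torch.cat([cache, x[:, fold:]], dim=1)  # cached past channels replace the current ones
    return out, x[:, :fold].clone()               # new cache for the next time step

Because the shift is pure memory movement, it adds no multiply-accumulate operations and no learnable parameters; the online variant only needs to cache a small slice of channels per layer between frames, which is what keeps latency low.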

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1811-08383,
  author    = {Ji Lin and
               Chuang Gan and
               Song Han},
  title     = {Temporal Shift Module for Efficient Video Understanding},
  journal   = {CoRR},
  volume    = {abs/1811.08383},
  year      = {2018},
  url       = {http://arxiv.org/abs/1811.08383},
  archivePrefix = {arXiv},
  eprint    = {1811.08383},
  timestamp = {Mon, 26 Nov 2018 12:52:45 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1811-08383.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
