RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning.

AAAI

Authors

  • Peihao Chen
  • Deng Huang
  • Dongliang He
  • Xiang Long
  • Runhao Zeng
  • Shilei Wen
  • Mingkui Tan
  • Chuang Gan

Published on

October 27, 2020

Categories

  • AAAI
  • Computer Vision

We study unsupervised video representation learning, which seeks to learn both motion and appearance features from unlabeled video only; these features can then be reused for downstream tasks such as action recognition. This task is extremely challenging due to 1) the highly complex spatial-temporal information in videos and 2) the lack of labeled data for training. Unlike representation learning for static images, it is difficult to construct a suitable self-supervised task that models both motion and appearance features well. Recently, several attempts have been made to learn video representations through video playback speed prediction. However, it is non-trivial to obtain precise speed labels for videos. More critically, the learned models may tend to focus on motion patterns and therefore fail to learn appearance features well. In this paper, we observe that the relative playback speed is more consistent with the motion pattern and thus provides more effective and stable supervision for representation learning. We therefore propose a new way to perceive playback speed, exploiting the relative speed between two video clips as the label. In this way, we are able to perceive speed well and learn better motion features. Moreover, to ensure that appearance features are learned, we further propose an appearance-focused task in which we enforce the model to perceive the appearance difference between two video clips. We show that jointly optimizing the two tasks consistently improves performance on two downstream tasks, namely action recognition and video retrieval. Remarkably, for action recognition on the UCF101 dataset, we achieve 93.7% accuracy without using any labeled data for pre-training, outperforming the ImageNet supervised pre-trained model.
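The page gives no implementation details beyond the abstract, but the relative-speed labeling idea can be illustrated with a short sketch. Everything below is hypothetical: the function names, the candidate speed set, and the three-way (slower/same/faster) label are assumptions for illustration, not the authors' code, and the actual RSPNet training objective (including its contrastive formulation) is not reproduced here.

```python
# Hypothetical sketch of relative-speed label construction (not the authors' code).
# Assumes a video is stored as a frame tensor of shape [T, C, H, W].
import torch


def sample_clip(video: torch.Tensor, speed: int, clip_len: int = 16) -> torch.Tensor:
    """Sample `clip_len` frames at a given playback speed (frame stride).

    A larger stride skips more frames, which plays the clip back faster.
    """
    max_start = video.shape[0] - clip_len * speed
    start = torch.randint(0, max(max_start, 1), (1,)).item()
    return video[start : start + clip_len * speed : speed]


def relative_speed_pair(video: torch.Tensor, speeds=(1, 2, 4)):
    """Draw two clips from the same video and return a relative-speed label.

    Label encoding (an assumption for this sketch):
      0 = clip_a is slower than clip_b, 1 = same speed, 2 = clip_a is faster.
    The label depends only on the *relative* speed, so no absolute speed
    annotation is needed.
    """
    i, j = torch.randint(0, len(speeds), (2,)).tolist()
    clip_a = sample_clip(video, speeds[i])
    clip_b = sample_clip(video, speeds[j])
    label = (i > j) - (i < j) + 1  # maps {-1, 0, 1} to {0, 1, 2}
    return clip_a, clip_b, label


# Usage: a dummy 128-frame video of 3x112x112 frames.
video = torch.randn(128, 3, 112, 112)
clip_a, clip_b, label = relative_speed_pair(video)
```

The appearance-focused task described in the abstract could be sketched analogously: clips drawn from the same video (regardless of playback speed) act as appearance positives, while clips from different videos act as negatives.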

This paper was published at AAAI 2021.

Please cite our work using the BibTeX below.

@misc{chen2021rspnet,
      title={RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning}, 
      author={Peihao Chen and Deng Huang and Dongliang He and Xiang Long and Runhao Zeng and Shilei Wen and Mingkui Tan and Chuang Gan},
      year={2021},
      eprint={2011.07949},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}