How Transferable are Video Representations Based on Synthetic Data?

NeurIPS

Authors

Yo-whan Kim, Samarth Mishra, SouYoung Jin, Rameswar Panda, Hilde Kuehne, Leonid Karlinsky, Venkatesh Saligrama, Kate Saenko, Aude Oliva, Rogerio Feris

Published on

12/09/2022

Categories

NeurIPS

Action recognition has improved dramatically with massive-scale video datasets. Yet, these datasets are accompanied by issues related to curation cost, privacy, ethics, bias, and copyright. In comparison, only minor efforts have been devoted to exploring the potential of synthetic video data. In this work, as a stepping stone towards addressing these shortcomings, we study the transferability of video representations learned solely from synthetically generated video clips, instead of real data. We propose SynAPT, a novel benchmark for action recognition based on a combination of existing synthetic datasets, in which a model is pre-trained on synthetic videos rendered by various graphics simulators, and then transferred to a set of downstream action recognition datasets containing different categories than the synthetic data. We provide an extensive baseline analysis on SynAPT revealing that the simulation-to-real gap is minor for datasets with low object and scene bias, where models pre-trained with synthetic data even outperform their real data counterparts. We posit that the gap between real and synthetic action representations can be attributed to contextual bias and static objects related to the action, rather than the temporal dynamics of the action itself. The SynAPT benchmark is available at https://github.com/mintjohnkim/SynAPT.
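To make the transfer protocol concrete, below is a minimal sketch of the "pre-train on synthetic clips, then fine-tune on a real downstream dataset" pipeline the abstract describes. It assumes a PyTorch-style setup; the module, loader, and class-count names are illustrative placeholders and are not taken from the SynAPT repository.

```python
# Hypothetical sketch of the transfer protocol: pre-train a video backbone on
# synthetic clips, then fine-tune it on a downstream action recognition dataset
# with different categories. Names here are placeholders, not the SynAPT code.
import torch
import torch.nn as nn


class VideoBackbone(nn.Module):
    """Toy stand-in for a spatiotemporal encoder (e.g. a 3D CNN or video transformer)."""

    def __init__(self, feat_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, clips):  # clips: (B, 3, T, H, W)
        return self.encoder(clips)


def train_classifier(backbone, loader, num_classes, epochs, lr, freeze_backbone=False):
    """Attach a linear head and train; reused for both pre-training and transfer."""
    head = nn.Linear(512, num_classes)
    params = (
        list(head.parameters())
        if freeze_backbone
        else list(backbone.parameters()) + list(head.parameters())
    )
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for clips, labels in loader:
            logits = head(backbone(clips))
            loss = loss_fn(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return backbone


# 1) Pre-train on synthetic clips rendered by graphics simulators.
# 2) Transfer: fine-tune (or linear-probe) on a real downstream dataset whose
#    action categories differ from the synthetic ones.
# backbone = VideoBackbone()
# backbone = train_classifier(backbone, synthetic_loader, num_classes=150, epochs=50, lr=0.1)
# train_classifier(backbone, downstream_loader, num_classes=101, epochs=20, lr=0.01)
```

The same helper is used for both stages; only the data loader, class count, and (optionally) whether the backbone is frozen change between synthetic pre-training and downstream transfer.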

Please cite our work using the BibTeX below.

@inproceedings{kim2022how,
  title={How Transferable are Video Representations Based on Synthetic Data?},
  author={Yo-whan Kim and Samarth Mishra and SouYoung Jin and Rameswar Panda and Hilde Kuehne and Leonid Karlinsky and Venkatesh Saligrama and Kate Saenko and Aude Oliva and Rogerio Feris},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=lRUCfzs5Hzg}
}