Moments in Time Dataset: one million videos for event understanding
Authors
- Mathew Monfort
- Alex Andonian
- Bolei Zhou
- Kandan Ramakrishnan
- Sarah Adel Bargal
- Tom Yan
- Lisa Brown
- Quanfu Fan
- Dan Gutfreund
- Carl Vondrick
- Aude Oliva
Published on
02/24/2019
We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even for actions occurring in 3-second videos poses many challenges: meaningful events include not only people, but also objects, animals, and natural phenomena; visual and auditory events can be symmetrical in time or not (opening played in reverse appears as closing), and transient or sustained. We describe the annotation process of our dataset (each video is tagged with one action or activity label among 339 different classes), analyze its scale and diversity in comparison to other large-scale video datasets for action recognition, and report results of several baseline models addressing three modalities, separately and jointly: spatial, temporal, and auditory. The Moments in Time dataset, designed for broad coverage and diversity of events in both visual and auditory modalities, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.
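As a rough illustration of the clip-classification task the dataset defines (one label among 339 classes per 3-second video), below is a minimal PyTorch sketch of a spatial baseline that averages per-frame CNN predictions over time. The file layout (an `annotations.csv` of `path,label` rows), the uniform frame-sampling scheme, and the ResNet-50 backbone are assumptions for illustration only, not the authors' released baseline code.

```python
# Minimal sketch of a clip-level classifier over the 339 Moments in Time
# classes. File layout, CSV format, and frame-sampling choices are
# assumptions for illustration, not the dataset's official loader.
import csv

import torch
import torch.nn as nn
from torchvision.io import read_video
from torchvision.models import resnet50

NUM_CLASSES = 339  # one action/activity label per 3-second video


class FrameAveragingClassifier(nn.Module):
    """Scores a clip by averaging per-frame logits (a simple spatial baseline)."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.backbone = resnet50(weights=None)  # untrained backbone for the sketch
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, C, H, W); treat the time axis as a batch of frames
        logits = self.backbone(frames)  # (T, num_classes)
        return logits.mean(dim=0)       # average over time -> (num_classes,)


def load_clip(path: str, num_frames: int = 8) -> torch.Tensor:
    """Read a video file and uniformly subsample `num_frames` RGB frames."""
    video, _audio, _info = read_video(path, pts_unit="sec")  # (T, H, W, C) uint8
    idx = torch.linspace(0, video.shape[0] - 1, num_frames).long()
    frames = video[idx].permute(0, 3, 1, 2).float() / 255.0  # (T, C, H, W) in [0, 1]
    return frames


if __name__ == "__main__":
    model = FrameAveragingClassifier().eval()
    # "annotations.csv" with rows "path,label_index" is a hypothetical layout.
    with open("annotations.csv") as f:
        for path, label in csv.reader(f):
            with torch.no_grad():
                scores = model(load_clip(path))
            print(path, "predicted:", scores.argmax().item(), "true:", label)
```

A temporal or auditory baseline in the spirit of the paper would replace the frame-averaging backbone with a model that consumes the frame sequence (or the audio track returned by `read_video`) directly; the clip-loading and evaluation loop would stay the same.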
This work was published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) in 2020.
Please cite our work using the BibTeX below.
@article{DBLP:journals/corr/abs-1801-03150,
  author        = {Mathew Monfort and
                   Bolei Zhou and
                   Sarah Adel Bargal and
                   Alex Andonian and
                   Tom Yan and
                   Kandan Ramakrishnan and
                   Lisa M. Brown and
                   Quanfu Fan and
                   Dan Gutfreund and
                   Carl Vondrick and
                   Aude Oliva},
  title         = {Moments in Time Dataset: one million videos for event understanding},
  journal       = {CoRR},
  volume        = {abs/1801.03150},
  year          = {2018},
  url           = {http://arxiv.org/abs/1801.03150},
  archivePrefix = {arXiv},
  eprint        = {1801.03150},
  timestamp     = {Mon, 13 Aug 2018 16:48:27 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1801-03150.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}