Weakly Supervised Dense Event Captioning in Videos
Authors
- Xuguang Duan
- Wenbing Huang
- Chuang Gan
- Jingdong Wang
- Wenwu Zhu
- Junzhou Huang
Published on
12/10/2018
Abstract
Dense event captioning aims to detect and describe all events of interest contained in a video. Despite significant progress in this area, existing methods tackle the task by relying on dense temporal annotations, which are extremely resource-consuming to collect. This paper formulates a new problem: weakly supervised dense event captioning, which does not require temporal segment annotations for model training. Our solution is based on the one-to-one correspondence assumption, namely that each caption describes one temporal segment and each temporal segment has one caption, which holds in current benchmark datasets and most real-world cases. We decompose the problem into a pair of dual problems, event captioning and sentence localization, and present a cycle system to train our model. Extensive experimental results demonstrate the ability of our model on both dense event captioning and sentence localization in videos.
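To illustrate how the two dual tasks can form a training cycle without segment labels, here is a minimal PyTorch sketch: a sentence localizer proposes a temporal segment for a ground-truth caption, a differentiable soft mask "crops" the video features to that segment, and an event captioner reconstructs the caption from the cropped features, so the reconstruction loss supervises both modules. The module names, feature sizes, one-step decoder, and soft-mask segment representation are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical cycle between the dual tasks: sentence localization -> soft
# segment crop -> event captioning -> caption reconstruction loss.
import torch
import torch.nn as nn

FEAT_DIM, HID_DIM, VOCAB = 512, 256, 1000


class SentenceLocalizer(nn.Module):
    """Predicts a (center, width) pair in [0, 1] for a caption given video features."""
    def __init__(self):
        super().__init__()
        self.caption_enc = nn.GRU(HID_DIM, HID_DIM, batch_first=True)
        self.video_enc = nn.GRU(FEAT_DIM, HID_DIM, batch_first=True)
        self.head = nn.Linear(2 * HID_DIM, 2)

    def forward(self, video_feats, caption_emb):
        _, v = self.video_enc(video_feats)     # final hidden state, shape (1, B, H)
        _, c = self.caption_enc(caption_emb)   # final hidden state, shape (1, B, H)
        return torch.sigmoid(self.head(torch.cat([v[0], c[0]], dim=-1)))  # (B, 2)


class EventCaptioner(nn.Module):
    """Generates caption logits from (soft-masked) segment features."""
    def __init__(self):
        super().__init__()
        self.video_enc = nn.GRU(FEAT_DIM, HID_DIM, batch_first=True)
        self.decoder = nn.Linear(HID_DIM, VOCAB)

    def forward(self, segment_feats):
        _, h = self.video_enc(segment_feats)
        return self.decoder(h[0])              # (B, VOCAB): toy one-step decoder


def soft_mask(video_feats, center, width, sharpness=10.0):
    """Differentiable segment selection: up-weight frames inside the predicted segment."""
    T = video_feats.size(1)
    t = torch.linspace(0, 1, T, device=video_feats.device).view(1, T, 1)
    left = (center - width / 2).view(-1, 1, 1)
    right = (center + width / 2).view(-1, 1, 1)
    w = torch.sigmoid(sharpness * (t - left)) * torch.sigmoid(sharpness * (right - t))
    return video_feats * w


# One cycle step: caption -> localized segment -> reconstructed caption.
localizer, captioner = SentenceLocalizer(), EventCaptioner()
video = torch.randn(4, 30, FEAT_DIM)          # B=4 videos, 30 frames of features
caption_emb = torch.randn(4, 12, HID_DIM)     # embedded ground-truth captions
target_words = torch.randint(0, VOCAB, (4,))  # toy one-word caption targets

seg = localizer(video, caption_emb)           # predicted (center, width) per caption
masked = soft_mask(video, seg[:, 0], seg[:, 1])
logits = captioner(masked)
loss = nn.functional.cross_entropy(logits, target_words)  # caption reconstruction loss
loss.backward()  # gradients reach both modules with no temporal segment labels
```

The key design point the sketch tries to capture is that the segment "crop" must stay differentiable (here via a soft mask), so the caption reconstruction loss alone can drive the localizer even though no ground-truth segments are available.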
Please cite our work using the BibTeX below.
@article{DBLP:journals/corr/abs-1812-03849,
author = {Xuguang Duan and
Wen{-}bing Huang and
Chuang Gan and
Jingdong Wang and
Wenwu Zhu and
Junzhou Huang},
title = {Weakly Supervised Dense Event Captioning in Videos},
journal = {CoRR},
volume = {abs/1812.03849},
year = {2018},
url = {http://arxiv.org/abs/1812.03849},
archivePrefix = {arXiv},
eprint = {1812.03849},
timestamp = {Tue, 01 Jan 2019 15:01:25 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1812-03849.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}