Look at What I’m Doing: Self-Supervised Spatial Grounding of Narrations in Instructional Videos

NeurIPS

Authors

  • Reuben Tan
  • Bryan Plummer
  • Kate Saenko
  • Hailin Jin
  • Bryan Russell

Published on

10/20/2021

We introduce the task of spatially localizing narrated interactions in videos. Key to our approach is the ability to learn to spatially localize interactions with self-supervision on a large corpus of videos with accompanying transcribed narrations. To achieve this goal, we propose a multilayer cross-modal attention network that enables effective optimization of a contrastive loss during training. We introduce a divided strategy that alternates between computing inter- and intra-modal attention across the visual and natural language modalities, which allows effective training via directly contrasting the two modalities’ representations. We demonstrate the effectiveness of our approach by self-training on the HowTo100M instructional video dataset and evaluating on a newly collected dataset of localized described interactions in the YouCook2 dataset. We show that our approach outperforms alternative baselines, including shallow co-attention and full cross-modal attention. We also apply our approach to grounding phrases in images with weak supervision on Flickr30K and show that stacking multiple attention layers is effective and, when combined with a word-to-region loss, achieves state of the art on recall-at-one and pointing hand accuracies.
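To make the divided attention strategy described above more concrete, here is a minimal PyTorch sketch of one possible reading of it: each layer first applies intra-modal self-attention within the region and word streams, then inter-modal cross-attention between them, and pooled clip and narration embeddings are trained with an in-batch InfoNCE-style contrastive loss. This is an illustrative sketch only, not the paper's implementation; the module names, feature dimensions, layer count, and the exact contrastive objective are all assumptions.

# Minimal sketch (assumptions, not the authors' code) of a divided
# inter-/intra-modal attention stack trained with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DividedAttentionBlock(nn.Module):
    """One layer: intra-modal self-attention, then inter-modal cross-attention."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_txt = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, txt):
        # Intra-modal step: each modality attends over itself.
        vis = vis + self.self_vis(vis, vis, vis)[0]
        txt = txt + self.self_txt(txt, txt, txt)[0]
        # Inter-modal step: regions attend to words and words to regions.
        vis = vis + self.cross_vis(vis, txt, txt)[0]
        txt = txt + self.cross_txt(txt, vis, vis)[0]
        return vis, txt

class GroundingModel(nn.Module):
    def __init__(self, dim=512, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(DividedAttentionBlock(dim) for _ in range(num_layers))

    def forward(self, vis, txt):
        # vis: (B, R, D) region features; txt: (B, W, D) word features.
        for layer in self.layers:
            vis, txt = layer(vis, txt)
        # Pool each modality to a single embedding for contrastive training.
        v = F.normalize(vis.mean(dim=1), dim=-1)
        t = F.normalize(txt.mean(dim=1), dim=-1)
        return v, t

def contrastive_loss(v, t, temperature=0.07):
    # Symmetric InfoNCE over in-batch negatives; a common stand-in for the
    # paper's contrastive objective, not necessarily its exact form.
    logits = v @ t.t() / temperature
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Example forward pass on random features.
vis = torch.randn(4, 16, 512)   # 4 clips, 16 regions each
txt = torch.randn(4, 12, 512)   # 4 narrations, 12 tokens each
model = GroundingModel()
loss = contrastive_loss(*model(vis, txt))

Attention weights from the final cross-attention layer (word queries over region keys) are what such a model would use at inference time to spatially localize a narrated interaction.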

This paper was published at NeurIPS 2021.

Please cite our work using the BibTeX below.

@misc{tan2021look,
      title={Look at What I'm Doing: Self-Supervised Spatial Grounding of Narrations in Instructional Videos}, 
      author={Reuben Tan and Bryan A. Plummer and Kate Saenko and Hailin Jin and Bryan Russell},
      year={2021},
      eprint={2110.10596},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}