Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens
Authors
- Leonid Karlinsky
- Elad Ben-Avraham
- Roei Herzig
- Karttikeya Mangalam
- Amir Bar
- Anna Rohrbach
- Trevor Darrell
- Amir Globerson
Published on
12/04/2022
Recent action recognition models have achieved impressive results by integrating objects, their locations, and their interactions. However, obtaining dense structured annotations for each frame is tedious and time-consuming, making these methods expensive to train and less scalable. On the other hand, one often has access to a small set of annotated images, either within or outside the domain of interest. Here we ask how such images can be leveraged for downstream video understanding tasks. We propose a learning framework, StructureViT (SViT for short), which demonstrates how utilizing the structure of a small number of images, available only during training, can improve a video model. SViT relies on two key insights. First, since both images and videos contain structured information, we enrich a transformer model with a set of object tokens that can be used across images and videos. Second, the scene representations of individual frames in a video should "align" with those of still images. This is achieved via a Frame-Clip Consistency loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a Hand-Object Graph, consisting of hands and objects with their locations as nodes, and physical relations of contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets, including first place in the Ego4D CVPR'22 Point of No Return Temporal Localization Challenge. For code and pretrained models, visit the project page at https://eladb3.github.io/SViT/.
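The Frame-Clip Consistency loss described above aligns per-frame object-token representations with those produced from the full video clip. As a rough illustrative sketch only (the function name, tensor shapes, and the use of a mean-squared-error penalty are assumptions for clarity; the paper's exact formulation may differ):

```python
import numpy as np

def frame_clip_consistency_loss(frame_tokens, clip_tokens):
    """Hypothetical sketch of a frame-clip consistency penalty.

    frame_tokens: object tokens computed from frames processed individually,
                  shape (batch, time, num_objects, dim)
    clip_tokens:  object tokens for the same frames computed from the full clip,
                  same shape
    Returns a scalar: mean squared distance between corresponding tokens,
    which is zero when the two representations already agree.
    """
    return float(np.mean((frame_tokens - clip_tokens) ** 2))

# Toy usage: identical representations incur no penalty.
tokens = np.random.default_rng(0).normal(size=(2, 4, 3, 8))
print(frame_clip_consistency_loss(tokens, tokens))  # → 0.0
```

In training, such a term would be added to the task loss so that structured supervision available only on still images can shape the video model's per-frame scene representations.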
Please cite our work using the BibTeX below.
@inproceedings{avraham2022bringing,
title={Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens},
author={Elad Ben Avraham and Roei Herzig and Karttikeya Mangalam and Amir Bar and Anna Rohrbach and Leonid Karlinsky and Trevor Darrell and Amir Globerson},
booktitle={Advances in Neural Information Processing Systems},
editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
year={2022},
url={https://openreview.net/forum?id=0JV4VVBsK6a}
}