Research

RegionViT: Regional-to-Local Attention for Vision Transformers

ICLR

Published on

04/29/2022

Categories

ICLR

Vision transformers (ViTs) have recently shown a strong capability to achieve results comparable to convolutional neural networks (CNNs) on image classification. However, the vanilla ViT simply inherits its architecture directly from natural language processing, which is often not optimized for vision applications. Motivated by this, in this paper we propose a new architecture that adopts a pyramid structure and employs a novel regional-to-local attention rather than global self-attention in vision transformers. More specifically, our model first generates regional tokens and local tokens from an image with different patch sizes, where each regional token is associated with a set of local tokens based on spatial location. The regional-to-local attention includes two steps: first, regional self-attention extracts global information among all regional tokens, and then local self-attention exchanges information between one regional token and its associated local tokens. Therefore, even though local self-attention confines its scope to a local region, it can still receive global information. Extensive experiments on four vision tasks, including image classification, object and keypoint detection, semantic segmentation, and action recognition, show that our approach outperforms or is on par with state-of-the-art ViT variants, including many concurrent works. Our source code and models are available at https://github.com/IBM/RegionViT.
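
To make the two-step mechanism concrete, below is a minimal PyTorch sketch of one regional-to-local attention block. It is not the authors' released implementation (see the GitHub repository above for that); the module name `RegionalToLocalAttention`, the tensor layout, and the use of `nn.MultiheadAttention` are illustrative assumptions, and residual connections, layer norms, and MLPs are omitted for brevity.

```python
# Hypothetical sketch of regional-to-local attention; assumes the image has
# already been tokenized into R regions, each with one regional token and
# K local tokens of dimension `dim`. Not the official RegionViT code.
import torch
import torch.nn as nn


class RegionalToLocalAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.regional_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, regional: torch.Tensor, local: torch.Tensor):
        # regional: (B, R, dim)    -- one token per region
        # local:    (B, R, K, dim) -- K local tokens per region
        B, R, K, D = local.shape

        # Step 1: regional self-attention exchanges global information
        # among all regional tokens.
        regional, _ = self.regional_attn(regional, regional, regional)

        # Step 2: local self-attention within each region, over the regional
        # token plus its K associated local tokens, so local tokens receive
        # global context through the regional token.
        tokens = torch.cat(
            [regional.reshape(B * R, 1, D), local.reshape(B * R, K, D)], dim=1
        )
        tokens, _ = self.local_attn(tokens, tokens, tokens)

        regional = tokens[:, :1].reshape(B, R, D)
        local = tokens[:, 1:].reshape(B, R, K, D)
        return regional, local


# Usage with hypothetical sizes: 4 images, 49 regions (7x7), 49 local
# tokens per region, embedding dimension 96.
if __name__ == "__main__":
    block = RegionalToLocalAttention(dim=96, num_heads=4)
    regional = torch.randn(4, 49, 96)
    local = torch.randn(4, 49, 49, 96)
    regional, local = block(regional, local)
    print(regional.shape, local.shape)
```

Because the second step batches attention over regions, local attention stays confined to each region while the regional tokens, already mixed globally in step one, carry image-level context into every local window.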

Please cite our work using the BibTeX below.

@inproceedings{chen2022regionvit,
  title={RegionViT: Regional-to-Local Attention for Vision Transformers},
  author={Chun-Fu Chen and Rameswar Panda and Quanfu Fan},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=T__V3uLix7V}
}