Research

Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations

ICLR

Authors

Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljacic

Published on

04/29/2022

Categories

ICLR

In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge. In fact, invariance is a trivial instance of a broader class of properties called equivariance, which can be intuitively understood as the property that representations transform according to the way the inputs transform. Here, we show that rather than using only invariance, pre-training that encourages non-trivial equivariance to some transformations, while maintaining invariance to other transformations, can be used to improve the semantic quality of representations. Specifically, we extend popular SSL methods to a more general framework which we name Equivariant Self-Supervised Learning (E-SSL). In E-SSL, a simple additional pre-training objective encourages equivariance by predicting the transformations applied to the input. We demonstrate E-SSL's effectiveness empirically on several popular computer vision benchmarks, e.g. improving SimCLR to 72.5% linear probe accuracy on ImageNet. Furthermore, we demonstrate the usefulness of E-SSL for applications beyond computer vision; in particular, we show its utility on regression problems in photonics science. Our code, datasets and pre-trained models are available at https://github.com/rdangovs/essl to aid further research in E-SSL.
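To make the idea concrete, the sketch below shows one way an equivariance-encouraging term could be added on top of a standard invariance-based objective: the network is asked to predict which of the four 90-degree rotations was applied to each image. This is a minimal illustration assuming a PyTorch-style setup; the names four_fold_rotations, ESSLRotationHead, essl_loss and the weighting parameter lam are hypothetical and are not taken from the released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def four_fold_rotations(x):
    # Stack the four 90-degree rotations of an NCHW image batch and
    # return them together with the index (0-3) of the rotation applied.
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return rotated, labels

class ESSLRotationHead(nn.Module):
    # Hypothetical linear predictor that classifies which rotation was applied.
    def __init__(self, feature_dim, num_transforms=4):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_transforms)

    def forward(self, features):
        return self.fc(features)

def essl_loss(backbone, rotation_head, images, invariance_loss, lam=0.4):
    # Combine the usual invariance objective (e.g. a SimCLR contrastive loss,
    # computed elsewhere and passed in) with a rotation-prediction term.
    rotated, labels = four_fold_rotations(images)
    logits = rotation_head(backbone(rotated))
    equivariance_loss = F.cross_entropy(logits, labels)
    return invariance_loss + lam * equivariance_loss

In this sketch the rotation-prediction term pushes the representation to vary systematically with rotation, while the invariance term keeps it stable under the usual augmentations such as cropping and color jitter; the relative weight lam would need to be tuned per method and dataset.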

Please cite our work using the BibTeX below.

@inproceedings{
dangovski2022equivariant,
title={Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations},
author={Rumen Dangovski and Li Jing and Charlotte Loh and Seungwook Han and Akash Srivastava and Brian Cheung and Pulkit Agrawal and Marin Soljacic},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=gKLAAfiytI}
}