Learning to Grow Pretrained Models for Efficient Transformer Training
Authors
- Peihao Wang
- Rameswar Panda
- Lucas Torroba Hennigen
- Philip Greengard
- Leonid Karlinsky
- Rogerio Feris
- David Cox
- Zhangyang Wang
- Yoon Kim
Published on
05/05/2023
Abstract
Scaling transformers has led to significant breakthroughs in many domains, leading to a paradigm in which larger versions of existing models are trained and released on a periodic basis. New instances of such models are typically trained completely from scratch, despite the fact that they are often just scaled-up versions of their smaller counterparts. How can we use the implicit knowledge in the parameters of smaller, extant models to enable faster training of newer, larger models? This paper describes an approach for accelerating transformer training by learning to grow pretrained transformers, where we learn to linearly map the parameters of the smaller model to initialize the larger model. For tractable learning, we factorize the linear transformation as a composition of (linear) width- and depth-growth operators, and further employ a Kronecker factorization of these growth operators to encode architectural knowledge. Extensive experiments across both language and vision transformers demonstrate that our learned Linear Growth Operator (LiGO) can save up to 50% of the computational cost of training from scratch, while also consistently outperforming strong baselines that also reuse smaller pretrained models to initialize larger models.
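To make the width-growth idea concrete, below is a minimal sketch (not the authors' released implementation) of a Kronecker-factorized growth operator acting on a single pretrained weight matrix. Because (B ⊗ A) vec(W_small) = vec(A W_small Bᵀ), the operator can be applied without ever materializing the full Kronecker product. The dimensions, class name, and identity-based initialization below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a learned, Kronecker-factorized width-growth operator.
# Assumed setup: grow one pretrained (d_out x d_in) matrix to (D_out x D_in).
import torch
import torch.nn as nn

class WidthGrowth(nn.Module):
    """Maps a small weight matrix to a larger one via W_large = A @ W_small @ B.T."""
    def __init__(self, d_out, d_in, D_out, D_in):
        super().__init__()
        # Learnable Kronecker factors of the growth operator (B ⊗ A).
        self.A = nn.Parameter(torch.zeros(D_out, d_out))
        self.B = nn.Parameter(torch.zeros(D_in, d_in))
        # Illustrative initialization: copy the small weights into the top-left block.
        with torch.no_grad():
            self.A[:d_out, :d_out] = torch.eye(d_out)
            self.B[:d_in, :d_in] = torch.eye(d_in)

    def forward(self, W_small):
        # Equivalent to applying (B ⊗ A) to vec(W_small), without forming it explicitly.
        return self.A @ W_small @ self.B.T

# Usage sketch: grow a 512x512 pretrained projection to 1024x1024; in practice the
# growth factors would be trained briefly before the result is used as initialization.
grow = WidthGrowth(512, 512, 1024, 1024)
W_small = torch.randn(512, 512)   # stand-in for a pretrained weight matrix
W_large = grow(W_small)           # initialization for the larger model
print(W_large.shape)              # torch.Size([1024, 1024])
```

In the full method, analogous depth-growth factors form new layers from combinations of existing layers' parameters, and the composed operator is learned jointly; the snippet above only illustrates the width dimension of that factorization.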
Please cite our work using the BibTeX below.
@inproceedings{
wang2023learning,
title={Learning to Grow Pretrained Models for Efficient Transformer Training},
author={Peihao Wang and Rameswar Panda and Lucas Torroba Hennigen and Philip Greengard and Leonid Karlinsky and Rogerio Feris and David Daniel Cox and Zhangyang Wang and Yoon Kim},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=cDYRS5iZ16f}
}