Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks

NeurIPS

Authors

Joshua Lee, Prasanna Sattigeri, and Gregory Wornell

Published on

12/14/2019

Categories

NeurIPS

The advent of deep learning algorithms for mobile devices and sensors has led to a dramatic expansion in the availability and number of systems trained on a wide range of machine learning tasks, creating a host of opportunities and challenges in the realm of transfer learning. Currently, most transfer learning methods require some kind of control over the systems learned, either by enforcing constraints during the source training or through the use of a joint optimization objective between tasks that requires all data be co-located for training. However, for practical, privacy, or other reasons, in a variety of applications we may have no control over the individual source task training, nor access to source training samples. Instead, we only have access to features pre-trained on such data as the output of “black boxes.” For such scenarios, we consider the multi-source learning problem of training a classifier using an ensemble of pre-trained neural networks for a set of classes that have not been observed by any of the source networks, and for which we have very few training samples. We show that by using these distributed networks as feature extractors, we can train an effective classifier in a computationally efficient manner using tools from (nonlinear) maximal correlation analysis. In particular, we develop a method we refer to as maximal correlation weighting (MCW) to build the required target classifier from an appropriate weighting of the feature functions from the source networks. We illustrate the effectiveness of the resulting classifier on datasets derived from the CIFAR-100, Stanford Dogs, and Tiny ImageNet datasets, and, in addition, use the methodology to characterize the relative value of different source tasks in learning a target task.
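
To make the idea concrete, here is a minimal NumPy sketch of the maximal correlation weighting scheme described above. It is not the authors' reference implementation: the function names mcw_fit and mcw_predict, the feature standardization step, and the synthetic example are illustrative assumptions. It shows each source network treated as a fixed feature extractor, with label functions estimated as class-conditional feature means on the few-shot training set and classification performed by summing correlation-weighted scores across all sources.

import numpy as np

def mcw_fit(features_per_source, labels, n_classes):
    """Fit maximal-correlation weights for one few-shot target task.

    features_per_source: list of (n_samples, d_k) arrays, one per
        pre-trained source network (e.g., penultimate-layer outputs).
    labels: (n_samples,) integer labels for the few-shot training set.
    Assumes at least one training example per class.
    """
    params = []
    for F in features_per_source:
        # Standardize the feature functions f_i(x) on the target data.
        mu, sd = F.mean(0), F.std(0) + 1e-8
        Fz = (F - mu) / sd
        # g_i(y): class-conditional mean of each feature, serving as the
        # empirical maximal-correlation label functions.
        g = np.stack([Fz[labels == y].mean(0) for y in range(n_classes)])
        # sigma_i: empirical correlation E[f_i(X) g_i(Y)], used as the
        # weight on feature i from this source network.
        sigma = (Fz * g[labels]).mean(0)
        params.append((mu, sd, g, sigma))
    return params

def mcw_predict(features_per_source, params, n_classes):
    """Score classes by summing sigma_i * f_i(x) * g_i(y) over all sources."""
    n = features_per_source[0].shape[0]
    scores = np.zeros((n, n_classes))
    for F, (mu, sd, g, sigma) in zip(features_per_source, params):
        Fz = (F - mu) / sd
        scores += Fz @ (sigma[:, None] * g.T)  # (n, n_classes) per source
    return scores.argmax(1)

# Example (synthetic): two source extractors, 5-way 5-shot target task.
rng = np.random.default_rng(0)
y = np.repeat(np.arange(5), 5)
feats = [rng.normal(size=(25, 64)), rng.normal(size=(25, 128))]
preds = mcw_predict(feats, mcw_fit(feats, y, 5), 5)

In this sketch, each source network contributes a correlation-weighted vote for every class, and the per-source weights themselves indicate how much each source task contributes, in the spirit of the paper's use of MCW to characterize the relative value of different source tasks.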

This work was published at NeurIPS 2019.

Please cite our work using the BibTeX below.

@inproceedings{NEURIPS2019_6048ff4e,
 author = {Lee, Joshua and Sattigeri, Prasanna and Wornell, Gregory},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
 pages = {},
 publisher = {Curran Associates, Inc.},
 title = {Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks},
 url = {https://proceedings.neurips.cc/paper/2019/file/6048ff4e8cb07aa60b6777b6f7384d52-Paper.pdf},
 volume = {32},
 year = {2019}
}