
On sensitivity of meta-learning to support data

NeurIPS

Authors

Mayank Agarwal, Mikhail Yurochkin, Yuekai Sun

Meta-learning algorithms are widely used for few-shot learning, for example in image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e. the support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on standard few-shot image classification benchmarks. We explain our empirical findings in terms of class margins, which in turn suggests that robust and safe meta-learning requires larger margins than supervised learning.
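To illustrate the kind of sensitivity studied here, the following minimal sketch (not the paper's code) evaluates a prototypical-network-style nearest-centroid classifier on a synthetic episode while redrawing the support set many times; the spread between the worst and best support draws mirrors the accuracy gap described above. The embedding, synthetic data, and all function names are illustrative assumptions.

```python
# Minimal sketch: how few-shot accuracy varies with the support set.
# Synthetic clusters stand in for embedded images; this is NOT the
# authors' experimental setup, only an illustration of the phenomenon.
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, n_query, dim = 5, 1, 15, 64

# Fixed class means play the role of a frozen feature extractor's class structure.
class_means = rng.normal(size=(n_way, dim))

def sample_examples(n_per_class):
    # Draw noisy examples around each class mean.
    X = np.stack([m + 0.8 * rng.normal(size=(n_per_class, dim)) for m in class_means])
    y = np.repeat(np.arange(n_way), n_per_class)
    return X.reshape(-1, dim), y

query_x, query_y = sample_examples(n_query)

def episode_accuracy():
    # Draw a fresh support set and classify queries by nearest class centroid.
    support_x, support_y = sample_examples(k_shot)
    prototypes = np.stack([support_x[support_y == c].mean(0) for c in range(n_way)])
    dists = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return (dists.argmin(1) == query_y).mean()

accs = np.array([episode_accuracy() for _ in range(200)])
print(f"accuracy over 200 support draws: min={accs.min():.2f}, "
      f"mean={accs.mean():.2f}, max={accs.max():.2f}")
```

Even in this toy setting, the same query set can be classified well or poorly depending solely on which support examples are drawn, which is the effect the paper quantifies on real benchmarks.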

Please cite our work using the BibTeX below.

@misc{https://doi.org/10.48550/arxiv.2110.13953,
  doi = {10.48550/ARXIV.2110.13953},
  url = {https://arxiv.org/abs/2110.13953},
  author = {Agarwal, Mayank and Yurochkin, Mikhail and Sun, Yuekai},
  keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences},
  title = {On sensitivity of meta-learning to support data},
  publisher = {arXiv},
  year = {2021}
}