PromptBoosting: Black-Box Text Classification with Ten Forward Passes
Authors
- Bairu Hou
- Joe O'Connor
- Jacob Andreas
- Shiyu Chang
- Yang Zhang
Published on
07/29/2023
Abstract
We describe PROMPTBOOSTING, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM’s parameters, gradients, or hidden representations. This form of “black-box” classifier training has become increasingly important as the cost of training and inference in large-scale LMs has grown. But existing black-box LM classifier learning approaches are themselves computationally inefficient, typically specializing LMs to the target task by searching in a large space of (discrete or continuous) prompts using zeroth-order optimization methods. Instead of directly optimizing in prompt space, PROMPTBOOSTING obtains a small pool of prompts via a gradient-free approach, and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM’s output distribution. These weak learners are then ensembled using the ADABOOST algorithm. The entire learning process requires only a small number of forward passes per batch and no backward pass. Experiments show that PROMPTBOOSTING achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods.
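To make the procedure concrete, below is a minimal sketch of how a pool of prompt-and-verbalizer weak learners could be ensembled with ADABOOST using only forward passes, as the abstract describes. This is not the authors' released implementation: `score_fn`, the `(prompt, verbalizer)` pool, and all names are hypothetical placeholders, and the reweighting shown is the standard binary AdaBoost update for simplicity.

```python
import numpy as np

def adaboost_prompt_ensemble(weak_learners, score_fn, texts, labels, num_rounds=10):
    """Sketch: AdaBoost over prompt/verbalizer weak learners.

    weak_learners: list of (prompt, verbalizer) pairs (the pre-built pool).
    score_fn: hypothetical black-box callable (prompt, verbalizer, texts) ->
              (n_examples, n_classes) array of class scores, computed from the
              LM's output distribution with forward passes only.
    """
    labels = np.asarray(labels)
    n = len(texts)
    weights = np.full(n, 1.0 / n)      # per-example AdaBoost weights
    ensemble = []                      # list of (alpha, (prompt, verbalizer))

    for _ in range(num_rounds):
        # Select the weak learner with the lowest weighted training error.
        best = None
        for learner in weak_learners:
            preds = score_fn(*learner, texts).argmax(axis=1)
            err = weights[preds != labels].sum()
            if best is None or err < best[0]:
                best = (err, learner, preds)
        err, learner, preds = best

        # Standard binary AdaBoost learner weight and example reweighting.
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        ensemble.append((alpha, learner))
        agreement = np.where(preds == labels, 1.0, -1.0)
        weights *= np.exp(-alpha * agreement)
        weights /= weights.sum()

    return ensemble
```

Because each weak learner only reads class scores off the LM's output distribution, the loop above needs no gradients or hidden states, only a handful of forward passes per boosting round.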
This work was presented at ICML 2023.
Please cite our work using the BibTeX below.
@misc{hou2022promptboosting,
title={PromptBoosting: Black-Box Text Classification with Ten Forward Passes},
author={Bairu Hou and Joe O'Connor and Jacob Andreas and Shiyu Chang and Yang Zhang},
year={2022},
eprint={2212.09257},
archivePrefix={arXiv},
primaryClass={cs.CL}
}