
Hyper-Decision Transformer for Efficient Online Policy Adaptation

ICLR

Authors

Mengdi Xu, Yuchen Lu, Yikang Shen, Shun Zhang, Ding Zhao, Chuang Gan

Published on

05/05/2023


Decision Transformers (DT) have demonstrated strong performance in offline reinforcement learning settings, but quickly adapting to unseen novel tasks remains challenging. To address this challenge, we propose a new framework, called Hyper-Decision Transformer (HDT), that can generalize to novel tasks from a handful of demonstrations in a data- and parameter-efficient manner. To achieve such a goal, we propose to augment the base DT with an adaptation module, whose parameters are initialized by a hyper-network. When encountering unseen tasks, the hyper-network takes a handful of demonstrations as inputs and initializes the adaptation module accordingly. This initialization enables HDT to efficiently adapt to novel tasks by only fine-tuning the adaptation module. We validate HDT’s generalization capability on object manipulation tasks. We find that with a single expert demonstration and fine-tuning only 0.5% of DT parameters, HDT adapts faster to unseen tasks than fine-tuning the whole DT model. Finally, we explore a more challenging setting where expert actions are not available, and we show that HDT outperforms state-of-the-art baselines in terms of task success rates by a large margin. Demos are available on our project page.
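
To make the adaptation mechanism concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: a hyper-network maps an encoded demonstration to the initial weights of a small adapter module inside the frozen Decision Transformer, and only the adapter is fine-tuned on the novel task. The module names (HyperNetwork, Adapter), the bottleneck adapter design, the dimensions, and the demonstration encoder are all assumptions for illustration.

# Minimal sketch of the HDT idea (not the authors' code): a hyper-network maps a
# demonstration encoding to the initial weights of a small adapter, and only the
# adapter is fine-tuned on the novel task. Names and dimensions are assumed.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted into a (frozen) Decision Transformer block."""
    def __init__(self, d_model: int, d_bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen DT features intact.
        return h + self.up(torch.relu(self.down(h)))

class HyperNetwork(nn.Module):
    """Maps a demonstration embedding to initial adapter parameters."""
    def __init__(self, d_demo: int, d_model: int, d_bottleneck: int):
        super().__init__()
        self.d_model, self.d_bottleneck = d_model, d_bottleneck
        n_params = 2 * d_model * d_bottleneck + d_bottleneck + d_model
        self.net = nn.Sequential(nn.Linear(d_demo, 256), nn.ReLU(), nn.Linear(256, n_params))

    def init_adapter(self, demo_embedding: torch.Tensor, adapter: Adapter) -> None:
        # Slice the flat output vector into the adapter's weight and bias tensors.
        flat = self.net(demo_embedding)
        dm, db = self.d_model, self.d_bottleneck
        with torch.no_grad():
            i = 0
            adapter.down.weight.copy_(flat[i:i + db * dm].view(db, dm)); i += db * dm
            adapter.down.bias.copy_(flat[i:i + db]); i += db
            adapter.up.weight.copy_(flat[i:i + dm * db].view(dm, db)); i += dm * db
            adapter.up.bias.copy_(flat[i:i + dm])

# Usage sketch: encode one expert demonstration, initialize the adapter, then
# fine-tune only the adapter while the base DT stays frozen.
d_demo, d_model, d_bottleneck = 128, 256, 16
demo_embedding = torch.randn(d_demo)  # stands in for an encoded demonstration
adapter = Adapter(d_model, d_bottleneck)
hyper = HyperNetwork(d_demo, d_model, d_bottleneck)
hyper.init_adapter(demo_embedding, adapter)
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-4)  # adapter-only fine-tuning

Freezing the base DT and training only the hyper-network-initialized adapter is what keeps the fine-tuned fraction of parameters small (around 0.5% of the DT in the paper); the sketch mirrors that by giving the optimizer only the adapter's parameters.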

Please cite our work using the BibTeX below.

@inproceedings{xu2023hyperdecision,
  title={Hyper-Decision Transformer for Efficient Online Policy Adaptation},
  author={Mengdi Xu and Yuchen Lu and Yikang Shen and Shun Zhang and Ding Zhao and Chuang Gan},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=AatUEvC-Wjv}
}