Research

Redeeming Intrinsic Rewards via Constrained Optimization

NeurIPS

Authors

Eric R Chen, Zhang-Wei Hong, Joni Pajarinen, Pulkit Agrawal

Published on

12/04/2022

Categories

NeurIPS, Optimization

We alleviate the performance drop caused by the bias of intrinsic rewards while preserving their benefits for exploration.
Abstract: State-of-the-art reinforcement learning (RL) algorithms typically use random sampling (e.g., ε-greedy) for exploration, but this method fails on hard exploration tasks like Montezuma’s Revenge. To address the challenge of exploration, prior works incentivize exploration by rewarding the agent when it visits novel states. Such intrinsic rewards (also called exploration bonus or curiosity) often lead to excellent performance on hard exploration tasks. However, on easy exploration tasks, the agent gets distracted by intrinsic rewards and performs unnecessary exploration even when sufficient task (also called extrinsic) reward is available. Consequently, such an overly curious agent performs worse than an agent trained with only task reward. Such inconsistency in performance across tasks prevents the widespread use of intrinsic rewards with RL algorithms. We propose a principled constrained optimization procedure called Extrinsic-Intrinsic Policy Optimization (EIPO) that automatically tunes the importance of the intrinsic reward: it suppresses the intrinsic reward when exploration is unnecessary and increases it when exploration is required. The result is superior exploration that does not require manual tuning to balance the intrinsic reward against the task reward. Consistent performance gains across sixty-one Atari games validate our claim. The code is available at https://github.com/Improbable-AI/eipo.
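To make the idea concrete, below is a minimal, illustrative sketch of how a constrained formulation can translate into an automatically tuned intrinsic-reward coefficient: the coefficient grows when the mixed (task + bonus) policy's extrinsic return matches or exceeds an extrinsic-only baseline, and shrinks when it falls behind. This is not the paper's implementation (see the linked repository for the actual EIPO code); the function names, the dual-style update rule, the learning rate, and the clipping range are assumptions made for illustration.

```python
import numpy as np

def combined_reward(r_ext, r_int, alpha):
    """Reward the agent actually optimizes: task reward plus a scaled exploration bonus."""
    return r_ext + alpha * r_int

def update_alpha(alpha, j_mixed_ext, j_ext_only, lr=0.05):
    """Illustrative dual-style update on the intrinsic-reward coefficient.

    If the mixed policy's extrinsic return drops below the extrinsic-only
    baseline, exploration is hurting, so alpha shrinks; if the constraint
    holds with slack, exploration is "free", so alpha grows.
    (Hypothetical update rule and bounds, not taken from the paper.)
    """
    constraint_gap = j_mixed_ext - j_ext_only
    return float(np.clip(alpha + lr * constraint_gap, 0.0, 1.0))

# Toy numbers standing in for return estimates computed from rollouts.
rng = np.random.default_rng(0)
alpha = 0.5
for step in range(5):
    j_mixed_ext = rng.uniform(0.8, 1.2)  # extrinsic return of the mixed policy
    j_ext_only = 1.0                     # extrinsic-only baseline return
    alpha = update_alpha(alpha, j_mixed_ext, j_ext_only)
    r = combined_reward(r_ext=1.0, r_int=0.2, alpha=alpha)
    print(f"step {step}: alpha={alpha:.3f}, sample combined reward={r:.3f}")
```

The behavior sketched here mirrors what the abstract describes: the exploration bonus is suppressed when it costs task performance and amplified when it comes at no cost, without any manual tuning of the trade-off.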

Please cite our work using the BibTeX below.

@inproceedings{chen2022redeeming,
  title={Redeeming intrinsic rewards via constrained policy optimization},
  author={Eric R Chen and Zhang-Wei Hong and Joni Pajarinen and Pulkit Agrawal},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=36Yz37cEN_Q}
}