PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Personalized Simulators
Authors
- Anish Agarwal
- Abdullah Omar Alomar
- Varkey Alumootil
- Devavrat Shah
- Dennis Shen
- Zhi Xu
- Cindy Yang
Published on
12/14/2021
We consider offline reinforcement learning (RL) with heterogeneous agents under severe data scarcity, i.e., we only observe a single historical trajectory for every agent under an unknown, potentially sub-optimal policy. We find that the performance of state-of-the-art offline and model-based RL methods degrades significantly given such limited data availability, even for commonly perceived “solved” benchmark settings such as “MountainCar” and “CartPole”. To address this challenge, we propose PerSim, a model-based offline RL approach which first learns a personalized simulator for each agent by collectively using the historical trajectories across all agents, prior to learning a policy. We do so by positing that the transition dynamics across agents can be represented as a latent function of latent factors associated with agents, states, and actions; subsequently, we theoretically establish that this function is well-approximated by a “low-rank” decomposition of separable agent, state, and action latent functions. This representation suggests a simple, regularized neural network architecture to effectively learn the transition dynamics per agent, even with scarce, offline data. We perform extensive experiments across several benchmark environments and RL methods. The consistent improvement of our approach, measured in terms of both state dynamics prediction and eventual reward, confirms the efficacy of our framework in leveraging limited historical data to simultaneously learn personalized policies across agents.
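For intuition, the “low-rank” decomposition described above can be sketched as a small neural model in which the predicted next state is a rank-r combination of separable agent, state, and action latent functions. The sketch below is only an illustrative reading of the abstract, not the authors' released implementation; all class names, dimensions, and hyperparameters (e.g., rank=16) are hypothetical.

import torch
import torch.nn as nn

class FactoredDynamics(nn.Module):
    # Illustrative low-rank factored dynamics model: a learned latent factor
    # per agent, combined with state and action latent functions.
    def __init__(self, n_agents, state_dim, action_dim, rank=16, hidden=64):
        super().__init__()
        # One learned latent factor of size `rank` per agent.
        self.agent_factor = nn.Embedding(n_agents, rank)
        # State and action latent functions: each outputs a rank-sized factor
        # for every state coordinate being predicted.
        self.state_factor = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, rank * state_dim),
        )
        self.action_factor = nn.Sequential(
            nn.Linear(action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, rank * state_dim),
        )
        self.rank, self.state_dim = rank, state_dim

    def forward(self, agent_id, state, action):
        # agent_id: (B,) long tensor; state: (B, state_dim); action: (B, action_dim)
        u = self.agent_factor(agent_id)                                    # (B, r)
        v = self.state_factor(state).view(-1, self.state_dim, self.rank)   # (B, d, r)
        w = self.action_factor(action).view(-1, self.state_dim, self.rank) # (B, d, r)
        # Rank-r combination of separable agent, state, and action factors,
        # summed over the latent dimension r, per predicted state coordinate.
        return torch.einsum("br,bdr,bdr->bd", u, v, w)                     # (B, d)

Under this reading, one model would be fit on the pooled single trajectories of all agents (e.g., with a mean-squared error on observed transitions), and fixing agent_id would then yield a personalized simulator for that agent.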
Please cite our work using the BibTeX below.
@inproceedings{agarwal2021persim,
  title={PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Personalized Simulators},
  author={Anish Agarwal and Abdullah Omar Alomar and Varkey Alumootil and Devavrat Shah and Dennis Shen and Zhi Xu and Cindy Yang},
  booktitle={Advances in Neural Information Processing Systems},
  editor={A. Beygelzimer and Y. Dauphin and P. Liang and J. Wortman Vaughan},
  year={2021},
  url={https://openreview.net/forum?id=v2w7CVZGHeA}
}