Fast Convergence for Unstable Reinforcement Learning Problems by Logarithmic Mapping

ICML

Published on 07/23/2022

In many reinforcement learning applications, the system is assumed to be inherently stable, with bounded reward, state, and action spaces. These assumptions are key requirements for the convergence of classical discounted reinforcement learning objectives. Unfortunately, they do not hold for many real-world problems, such as an unstable linear–quadratic regulator (LQR). In this work, we propose new methods to stabilize and speed up the convergence of policy gradient methods on unstable reinforcement learning problems. We provide theoretical insights into the efficiency of our methods. In practice, we achieve good experimental results on multiple examples where the vanilla methods mostly fail to converge due to system instability.
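To make the instability concrete, the sketch below rolls out a scalar LQR with an unstable open loop and compares the raw discounted cost with a logarithmically mapped cost. The dynamics, the mapping log(1 + cost), the linear policy, and the parameter values are illustrative assumptions made for this summary, not the paper's exact formulation.

import numpy as np

A, B = 1.2, 1.0      # scalar dynamics x' = A x + B u; |A| > 1 makes the open loop unstable
Q, R = 1.0, 0.1      # quadratic state and control cost weights
gamma = 0.99         # discount factor

def rollout_cost(k, horizon=50, x0=1.0, log_map=False):
    """Discounted cost of the linear policy u = -k * x over a finite horizon."""
    x, total = x0, 0.0
    for t in range(horizon):
        u = -k * x
        cost = Q * x**2 + R * u**2
        if log_map:
            cost = np.log1p(cost)      # logarithmic mapping keeps the per-step cost from exploding
        total += (gamma ** t) * cost
        x = A * x + B * u              # state diverges unless the gain k is stabilizing
    return total

# A non-stabilizing gain makes the raw cost explode while the mapped cost stays moderate;
# a stabilizing gain keeps both small.
print(rollout_cost(0.0), rollout_cost(0.0, log_map=True))
print(rollout_cost(0.5), rollout_cost(0.5, log_map=True))

The comparison only illustrates that an unbounded quadratic cost makes discounted returns, and hence policy gradient estimates, blow up on unstable trajectories, whereas a compressive mapping keeps them finite enough to optimize.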

Please cite our work using the BibTeX below.

@inproceedings{zhang2022,
  title={Fast Convergence for Unstable Reinforcement Learning Problems by Logarithmic Mapping},
  author={Wang Zhang and Lam M. Nguyen and Subhro Das and Alexandre Megretski and Luca Daniel and Tsui-Wei Weng},
  booktitle={Decision Awareness in Reinforcement Learning Workshop at ICML 2022},
  year={2022},
  url={https://openreview.net/forum?id=EVqdPBvYrvg}
}