A Closer Look at Deep Policy Gradients
Authors
- Andrew Ilyas
- Logan Engstrom
- Shibani Santurkar
- Dimitris Tsipras
- Firdaus Janoos
- Larry Rudolph
- Aleksander Madry
Published on
09/25/2019
We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. To this end, we propose a fine-grained analysis of state-of-the-art methods based on key elements of this framework: gradient estimation, value prediction, and optimization landscapes. Our results show that the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict: surrogate rewards do not match the true reward landscape, learned value estimators fail to fit the true value function, and gradient estimates poorly correlate with the “true” gradient. The mismatch between predicted and empirical behavior we uncover highlights our poor understanding of current methods, and indicates the need to move beyond current benchmark-centric evaluation methods.
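The gradient-quality finding above can be made concrete with a toy experiment. The sketch below is not the authors' code; the Gaussian-bandit setup and names such as `reinforce_grad` are illustrative assumptions. It measures the cosine similarity between small-batch REINFORCE gradient estimates and an analytically known true gradient, a simplified analogue of the gradient-concentration analysis the paper runs on deep RL agents.

```python
# Illustrative sketch (assumed setup, not the paper's code): how well do
# REINFORCE gradient estimates align with the true gradient as batch size grows?
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 100, 1.0          # action dimension and policy std (toy choices)
theta = np.zeros(d)          # mean of the Gaussian policy pi(a) = N(theta, sigma^2 I)
a_star = np.ones(d)          # reward peak: r(a) = -||a - a_star||^2

def reinforce_grad(batch_size):
    """One REINFORCE estimate of grad_theta E[r] from `batch_size` samples."""
    actions = theta + sigma * rng.standard_normal((batch_size, d))
    rewards = -np.sum((actions - a_star) ** 2, axis=1)   # shape (B,)
    score = (actions - theta) / sigma**2                 # grad_theta log pi(a)
    return (score * rewards[:, None]).mean(axis=0)       # unbiased estimator

# For this toy, E[r] = -||theta - a_star||^2 - d * sigma^2, so the true
# gradient is available in closed form:
true_grad = -2.0 * (theta - a_star)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for batch in (10, 100, 1000, 10000):
    sims = [cosine(reinforce_grad(batch), true_grad) for _ in range(20)]
    print(f"batch={batch:6d}  mean cosine to true grad: {np.mean(sims):.3f}")
```

In this toy setting, the similarity grows with batch size, and with small batches the estimate can be nearly orthogonal to the true gradient, echoing the paper's observation that at practical sample sizes deep policy gradient estimates correlate poorly with the "true" gradient.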
Please cite our work using the BibTeX below.
@inproceedings{Ilyas2020A,
  title={A Closer Look at Deep Policy Gradients},
  author={Andrew Ilyas and Logan Engstrom and Shibani Santurkar and Dimitris Tsipras and Firdaus Janoos and Larry Rudolph and Aleksander Madry},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=ryxdEkHtPS}
}