Research

Topological Experience Replay

ICLR

Authors

Zhang-Wei Hong, Tao Chen, Yen-Chen Lin, Joni Pajarinen, Pulkit Agrawal

Published on

04/29/2022

Categories

ICLR

State-of-the-art deep Q-learning methods update Q-values using state transition tuples sampled from an experience replay buffer. Typical sampling strategies either draw tuples uniformly at random or prioritize them using measures such as the temporal difference (TD) error. Such strategies can be inefficient at learning the Q-function because a state's Q-value depends on the Q-values of its successor states. If the sampling strategy ignores the precision of the Q-value estimate of the next state, it can lead to useless and often incorrect updates. To mitigate this issue, we organize the agent's experience into a graph that explicitly tracks the dependency between the Q-values of states. Each edge in the graph represents a transition between two states by executing a single action. We perform value backups via a breadth-first search starting from the set of terminal states and successively moving backwards. We empirically show that our method is substantially more data-efficient than several baselines on a diverse range of goal-reaching tasks. Notably, the proposed method also outperforms baselines that consume more batches of training experience, and it operates on high-dimensional observations such as images.
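
To make the backup ordering concrete, below is a minimal tabular sketch of reverse breadth-first value backups over a transition graph. It assumes hashable states and a deterministic reward per stored edge; names such as TransitionGraph and backup_from_terminals are illustrative and are not the paper's implementation, which applies this ordering to deep Q-learning updates over sampled batches.

# Minimal sketch: reverse-BFS Bellman backups over a graph of observed
# transitions (tabular assumption; names are illustrative, not the paper's code).
from collections import defaultdict, deque

class TransitionGraph:
    def __init__(self, gamma=0.99):
        self.gamma = gamma
        self.predecessors = defaultdict(list)  # s_next -> list of (s, a, r)
        self.terminals = set()                 # states observed as terminal
        self.q = defaultdict(dict)             # s -> {a: Q(s, a)}

    def add_transition(self, s, a, r, s_next, done):
        """Record one experienced transition as an edge in the graph."""
        self.predecessors[s_next].append((s, a, r))
        if done:
            self.terminals.add(s_next)

    def state_value(self, s):
        """V(s) = max_a Q(s, a); terminal or unseen states contribute 0."""
        if s in self.terminals or not self.q[s]:
            return 0.0
        return max(self.q[s].values())

    def backup_from_terminals(self):
        """Sweep backwards from terminal states in breadth-first order, so each
        Q(s, a) is updated only after its successor state's value is available."""
        queue = deque(self.terminals)
        visited = set(self.terminals)
        while queue:
            s_next = queue.popleft()
            target_v = self.state_value(s_next)
            for s, a, r in self.predecessors[s_next]:
                # One-step Bellman backup toward the successor's current value.
                self.q[s][a] = r + self.gamma * target_v
                if s not in visited:
                    visited.add(s)
                    queue.append(s)

if __name__ == "__main__":
    g = TransitionGraph(gamma=0.9)
    # Tiny chain: s0 -a-> s1 -a-> goal (reward 1 on the final step).
    g.add_transition("s0", "a", 0.0, "s1", done=False)
    g.add_transition("s1", "a", 1.0, "goal", done=True)
    g.backup_from_terminals()
    print(g.q["s1"]["a"], g.q["s0"]["a"])  # 1.0 0.9

On the toy chain this prints 1.0 and 0.9: the Q-value of the goal-adjacent state is fixed first, and only then is the earlier state updated against it, which is the dependency ordering the graph-based backup is meant to enforce.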

Please cite our work using the BibTeX below.

@inproceedings{hong2022topological,
  title={Topological Experience Replay},
  author={Zhang-Wei Hong and Tao Chen and Yen-Chen Lin and Joni Pajarinen and Pulkit Agrawal},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=OXRZeMmOI7a}
}