Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement
Authors
- Chao Yang
- Xiaojian Ma
- Wenbing Huang
- Fuchun Sun
- Huaping Liu
- Junzhou Huang
- Chuang Gan
Published on
12/14/2019
This paper studies Learning from Observations (LfO) for imitation learning with access to state-only demonstrations. In contrast to Learning from Demonstration (LfD), which involves both action and state supervision, LfO is more practical in leveraging previously inapplicable resources (e.g., videos), yet more challenging due to the incomplete expert guidance. In this paper, we investigate LfO and its differences from LfD in both theoretical and practical perspectives. We first prove that the gap between LfD and LfO actually lies in the disagreement of inverse dynamics models between the imitator and the expert, if following the modeling approach of GAIL [15]. More importantly, the upper bound of this gap is revealed by a negative causal entropy which can be minimized in a model-free way. We term our method Inverse Dynamics Disagreement Minimization (IDDM), which enhances the conventional LfO method by further bridging the gap to LfD. Extensive empirical results on challenging benchmarks indicate that our method attains consistent improvements over other LfO counterparts.
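As a rough sketch (not the authors' code), the model-free idea described above can be illustrated by augmenting a GAIfO-style discriminator reward on state transitions with a causal-entropy bonus for the policy. The discriminator logit, the diagonal-Gaussian policy parameterization, and the weight `beta` below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaifo_reward(d_logit):
    # GAIfO-style imitation reward from a discriminator logit on a
    # state-transition pair (s, s'): r = -log(1 - D(s, s')),
    # where D = sigmoid(logit).
    d = 1.0 / (1.0 + np.exp(-d_logit))
    return -np.log(1.0 - d + 1e-8)

def gaussian_entropy(log_std):
    # Differential entropy of a diagonal Gaussian policy with the
    # given per-dimension log standard deviations.
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e) + 2.0 * log_std)

def iddm_style_reward(d_logit, log_std, beta=0.1):
    # Minimizing the negative causal entropy amounts to adding an
    # entropy bonus (weighted by an assumed coefficient beta) to the
    # conventional LfO reward.
    return gaifo_reward(d_logit) + beta * gaussian_entropy(log_std)
```

The entropy bonus is what distinguishes this sketch from plain GAIfO: it tightens the gap to LfD without ever requiring expert actions, since it depends only on the imitator's own policy.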
This work was published in NeurIPS 2019.
Please cite our work using the BibTeX below.
@inproceedings{DBLP:conf/nips/YangMHS0HG19,
  author    = {Chao Yang and Xiaojian Ma and Wenbing Huang and Fuchun Sun and Huaping Liu and Junzhou Huang and Chuang Gan},
  title     = {Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement},
  booktitle = {NeurIPS},
  year      = {2019},
  pages     = {239-249},
  url       = {http://papers.nips.cc/paper/8317-imitation-learning-from-observations-by-minimizing-inverse-dynamics-disagreement}
}