A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving
Authors
- Maxwell Crouse
- Ibrahim Abdelaziz
- Bassem Makni
- Spencer Whitehead
- Cristina Cornelio
- Pavan Kapanipathi
- Kavitha Srinivas
- Veronika Thost
- Michael Witbrock
- Achille Fokoue
Published on
November 5, 2019
Abstract
Automated theorem provers have traditionally relied on manually tuned heuristics to guide how they perform proof search. Deep reinforcement learning has been proposed as a way to obviate the need for such heuristics; however, its deployment in automated theorem proving remains a challenge. In this paper, we introduce TRAIL, a system that applies deep reinforcement learning to saturation-based theorem proving. TRAIL leverages (a) a novel neural representation of the state of a theorem prover and (b) a novel characterization of the inference selection process in terms of an attention-based action policy. We show through systematic analysis that these mechanisms allow TRAIL to significantly outperform previous reinforcement-learning-based theorem provers on two benchmark datasets for first-order logic automated theorem proving, proving around 15% more theorems.
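To make the attention-based action policy in (b) concrete, below is a minimal sketch in PyTorch: candidate inference actions are scored by attending over embeddings of the processed clauses in the prover state, and the scores are normalized into a distribution over actions. The class name, dimensions, projection layers, and max-pooling choice are illustrative assumptions, not the exact TRAIL architecture.

```python
import torch
import torch.nn as nn

class AttentionActionPolicy(nn.Module):
    """Sketch of an attention-based action policy: candidate
    inference actions are scored against the prover state.
    All dimensions and layer choices here are hypothetical."""

    def __init__(self, clause_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.state_proj = nn.Linear(clause_dim, hidden_dim)   # embed processed clauses
        self.action_proj = nn.Linear(action_dim, hidden_dim)  # embed candidate actions

    def forward(self, state_clauses: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # state_clauses: (num_clauses, clause_dim) -- vectorized prover state
        # actions: (num_actions, action_dim) -- candidate (clause, rule) pairs
        s = self.state_proj(state_clauses)      # (num_clauses, hidden)
        a = self.action_proj(actions)           # (num_actions, hidden)
        attn = a @ s.t()                        # (num_actions, num_clauses) similarities
        scores = attn.max(dim=1).values         # pool each action's attention over the state
        return torch.softmax(scores, dim=0)     # probability of selecting each action

# Usage: sample the next inference for the prover to execute.
policy = AttentionActionPolicy(clause_dim=64, action_dim=64)
state = torch.randn(10, 64)   # 10 processed clauses (placeholder encodings)
cands = torch.randn(25, 64)   # 25 candidate inferences
probs = policy(state, cands)
action = torch.multinomial(probs, 1).item()
```

Because the policy scores whatever set of candidate actions it is given, it naturally handles the variable-sized action spaces that arise during saturation-based proof search.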
Please cite our work using the BibTeX below.
@misc{crouse2020deep,
  title={A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving},
  author={Maxwell Crouse and Ibrahim Abdelaziz and Bassem Makni and Spencer Whitehead and Cristina Cornelio and Pavan Kapanipathi and Kavitha Srinivas and Veronika Thost and Michael Witbrock and Achille Fokoue},
  year={2020},
  eprint={1911.02065},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}