Sat 13 Jan 2018 15:00 - 15:30 at Crocker - Session 2 Chair(s): William E. Byrd

Inspired by recent advances in deep neural networks (DNNs) in machine learning, we explore their application to problems that often arise in programming-languages research (PL problems), such as program synthesis and automated theorem proving. As a first step towards this goal, we report preliminary experimental results on applying DNNs to proof synthesis for intuitionistic propositional logic. The idea is to view proof synthesis as translation from a proposition to its proof. We train a seq2seq model, a popular model in neural machine translation, to generate a proof term from a given proposition. We empirically observe that, although the generated proof is correct for only approximately half of the propositions in the test data, the generated proof terms are close to correct ones in terms of the tree edit distance between their ASTs. Based on this observation, we implement a proof-search procedure that uses the generated proof term as a hint; it outperforms a brute-force search. We discuss future research directions and the gaps to be filled between DNNs and PL problems.
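To make the "proposition to proof term" view concrete, the sketch below (not the authors' code; all names are illustrative assumptions) uses the Curry-Howard reading of implicational intuitionistic propositional logic: a proposition is a type and a proof term is a lambda term of that type. Deciding whether a seq2seq-generated term actually proves a given proposition then reduces to type checking, which is how one would measure the fraction of correct generated proofs mentioned in the abstract.

```python
# A minimal, self-contained sketch, assuming Python 3.10+ for the `X | Y` type syntax.
from dataclasses import dataclass

# --- Propositions (types): atoms and implications ---------------------------
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Imp:
    left: "Prop"
    right: "Prop"

Prop = Atom | Imp

# --- Proof terms: variables, lambda abstractions, applications --------------
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: "Term"

@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

Term = Var | Lam | App

# --- Checking a candidate proof term against a proposition ------------------
def check(term: Term, prop: Prop, ctx: dict[str, Prop]) -> bool:
    """Return True iff `term` proves `prop` under the hypotheses in `ctx`."""
    if isinstance(term, Lam):
        # A lambda proves an implication by assuming the antecedent.
        return isinstance(prop, Imp) and check(
            term.body, prop.right, {**ctx, term.var: prop.left}
        )
    if isinstance(term, Var):
        return ctx.get(term.name) == prop
    if isinstance(term, App):
        # Modus ponens: infer the type of the function part (variables and
        # applications only in this toy sketch) and check the argument.
        fun_ty = infer(term.fun, ctx)
        return (
            isinstance(fun_ty, Imp)
            and check(term.arg, fun_ty.left, ctx)
            and fun_ty.right == prop
        )
    return False

def infer(term: Term, ctx: dict[str, Prop]) -> Prop | None:
    """Infer a type for variables and applications (no lambdas), if possible."""
    if isinstance(term, Var):
        return ctx.get(term.name)
    if isinstance(term, App):
        fun_ty = infer(term.fun, ctx)
        if isinstance(fun_ty, Imp) and check(term.arg, fun_ty.left, ctx):
            return fun_ty.right
    return None

# Example: the K combinator  \a. \b. a  proves  A -> (B -> A).
A, B = Atom("A"), Atom("B")
k_prop = Imp(A, Imp(B, A))
print(check(Lam("a", Lam("b", Var("a"))), k_prop, {}))  # True
# A wrong candidate, \a. \b. b, is rejected.
print(check(Lam("a", Lam("b", Var("b"))), k_prop, {}))  # False
```

A near-miss term that fails this check can still be structurally close to a correct proof (small tree edit distance between ASTs), which is what motivates using the generated term as a hint in the proof-search procedure described above.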

Abstract (obt18-paper8.pdf, 546 KiB)