Michael Hersche, Mustafa Zeqiri, et al.
Nature Machine Intelligence
Automated Theorem Provers (ATPs) are widely used to verify logical statements. Explainability is one of their key advantages: an ATP provides an expert-readable proof path showing the inference steps taken to conclude correctness. Neuro-Symbolic Networks (NSNs) that perform theorem proving, by contrast, lack this capability. We propose a proof-tracing and filtering algorithm that provides explainable reasoning for Logical Neural Networks (LNNs), a special type of Neural Theorem Prover (NTP).
Owen Cornec, Rahul Nair, et al.
NeurIPS 2021
Gabriele Picco, Lam Thanh Hoang, et al.
EMNLP 2021
Akifumi Wachi, Yunyue Wei, et al.
NeurIPS 2021