George Kour, Samuel Ackerman, et al.
EMNLP 2022
Most existing approaches to Knowledge Base Question Answering are tied to a specific underlying knowledge base, either because of inherent assumptions in the approach or because evaluating it on a different knowledge base requires non-trivial changes. However, many popular knowledge bases share similarities in their underlying schemas that can be leveraged to facilitate generalization across knowledge bases. To achieve this generalization, we introduce a Knowledge Base Question Answering framework built on a two-stage architecture that explicitly separates semantic parsing from knowledge base interaction, facilitating transfer learning across datasets and knowledge graphs. We show that pretraining on datasets with a different underlying knowledge base can nevertheless provide significant performance gains and reduce sample complexity. Our approach, applicable in both weakly- and strongly-supervised settings, achieves comparable or state-of-the-art performance on LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata), and MetaQA (Wikimovies-KG).
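The two-stage separation described in the abstract can be illustrated with a minimal sketch: stage 1 maps a question to a KB-agnostic logical form, and stage 2 grounds that form against a specific knowledge base's schema. All names here (`LogicalForm`, `parse_to_logical_form`, `ground_on_kb`, the toy schema table) are hypothetical illustrations, not the paper's actual API; a real system would use a learned semantic parser in stage 1.

```python
from dataclasses import dataclass

@dataclass
class LogicalForm:
    """KB-agnostic intermediate representation of a question."""
    relation: str
    entity: str

# Hypothetical per-KB vocabularies: the same abstract relation maps to
# different predicates in each knowledge graph (e.g. DBpedia vs. Wikidata).
KB_SCHEMAS = {
    "dbpedia": {"director": "dbo:director"},
    "wikidata": {"director": "wdt:P57"},
}

def parse_to_logical_form(question: str) -> LogicalForm:
    """Stage 1: semantic parsing to a KB-independent logical form.
    A toy rule stands in for a trained parser."""
    entity = question.rstrip("?").split()[-1]
    return LogicalForm(relation="director", entity=entity)

def ground_on_kb(lf: LogicalForm, kb: str) -> str:
    """Stage 2: KB interaction — translate the logical form into a
    query over one concrete knowledge base's schema."""
    predicate = KB_SCHEMAS[kb][lf.relation]
    return f"SELECT ?x WHERE {{ :{lf.entity} {predicate} ?x }}"

lf = parse_to_logical_form("Who directed Inception?")
print(ground_on_kb(lf, "dbpedia"))
print(ground_on_kb(lf, "wikidata"))
```

Because only stage 2 touches KB-specific vocabulary, the stage-1 parser can be pretrained on datasets over one knowledge base and reused on another, which is the transfer-learning effect the abstract reports.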
Maeda Hanafi, Yannis Katsis, et al.
EMNLP 2022
Nandana Mihindukulasooriya, Sarthak Dash, et al.
ISWC 2023
Arafat Sultan, Avi Sil, et al.
EMNLP 2022