Structured Context And High-coverage Grammar For Conversational Question Answering Over Knowledge Graphs

Pierre Marion, Paweł Krzysztof Nowak, Francesco Piccinno · Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing · 2021

We tackle the problem of weakly-supervised conversational Question Answering over large Knowledge Graphs using a neural semantic parsing approach. We introduce a new Logical Form (LF) grammar that can model a wide range of queries on the graph while remaining sufficiently simple to generate supervision data efficiently. Our Transformer-based model takes a JSON-like structure as input, allowing us to easily incorporate both Knowledge Graph and conversational contexts. This structured input is transformed to lists of embeddings and then fed to standard attention layers. We validate our approach, both in terms of grammar coverage and LF execution accuracy, on two publicly available datasets, CSQA and ConvQuestions, both grounded in Wikidata. On CSQA, our approach increases the coverage from 80% to 96.2%, and the LF execution accuracy from 70.6% to 75.6%, with respect to previous state-of-the-art results. On ConvQuestions, we achieve competitive results with respect to the state-of-the-art.
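The abstract's "JSON-like structure transformed to lists of embeddings" step can be pictured as flattening the nested context into a linear sequence of path-value pairs, each of which would then be embedded. The sketch below is purely illustrative and is not the authors' code; the field names and example values are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): flatten a JSON-like
# Knowledge Graph / conversation context into (path, value) pairs. In the
# paper's setting, each pair would be mapped to an embedding and the
# resulting list fed to standard Transformer attention layers.

def flatten(obj, prefix=()):
    """Recursively flatten a JSON-like structure into (path, value) pairs."""
    if isinstance(obj, dict):
        pairs = []
        for key, val in obj.items():
            pairs.extend(flatten(val, prefix + (key,)))
        return pairs
    if isinstance(obj, list):
        pairs = []
        for i, val in enumerate(obj):
            pairs.extend(flatten(val, prefix + (str(i),)))
        return pairs
    return [(".".join(prefix), obj)]

# Hypothetical conversational context grounded in Wikidata entities.
context = {
    "question": "Who directed Inception?",
    "entities": [{"id": "Q25188", "label": "Inception"}],
    "history": ["What is Inception?"],
}

tokens = flatten(context)
# tokens is now a flat list such as [("question", ...), ("entities.0.id", "Q25188"), ...]
```

The flat list preserves the structural origin of every value through its path, so a standard attention stack can attend over structured context without any tree-specific architecture.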
