INTRODUCTION

Semantic parsing is a broad field encompassing many approaches, each aiming to provide different insights into the meanings of sentences and to represent those meanings in various ways. Our research focuses on two main approaches to semantic graph parsing: sequence-to-sequence-based and graph-based. Recent implementations of these approaches make use of neural networks such as Long Short-Term Memory (LSTM) networks or more general Recurrent Neural Networks (RNNs), each of which has its own advantages and disadvantages.

Transformers are a newer class of neural network that address the problems faced by RNNs (vanishing gradients and limited context) while also offering better performance: they are easily parallelizable and therefore require significantly less time to train. Additionally, pre-training transformers has shown benefits across many NLP tasks, but it has yet to be fully explored for semantic graph parsing. Our research aims to address this gap in the literature.
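As a brief illustration of how a pre-trained transformer can be reused, the following is a minimal sketch using the Hugging Face transformers library; the library and the bert-base-uncased checkpoint are illustrative assumptions, not the specific setup of this project. It loads a pre-trained encoder and produces contextual token representations that a downstream graph parser could consume:

```python
# Minimal sketch of using a pre-trained transformer encoder. The Hugging Face
# `transformers` library and the `bert-base-uncased` checkpoint are
# illustrative choices, not the specific setup used in this project.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Encode a sentence once; every token representation is computed in parallel,
# unlike an RNN, which must process tokens one at a time.
inputs = tokenizer("The cat ate the raw fish.", return_tensors="pt")
outputs = encoder(**inputs)

# One contextual vector per sub-word token, e.g. shape (1, num_tokens, 768);
# a downstream graph parser would predict nodes and edges from these vectors.
print(outputs.last_hidden_state.shape)
```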

PROJECT OBJECTIVES

In addition to the specific research questions posed in our individual sections, we aim to answer the following overarching research question:

  • How does the F1 score of pre-trained transformers compare to that of RNN-based approaches to semantic graph parsing? (How F1 is computed over predicted graphs is sketched below.)
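
To make the evaluation criterion concrete, the sketch below computes labelled edge-level F1 between a predicted and a gold semantic graph. It assumes graphs are represented as sets of (source, label, target) triples with node identifiers already aligned; established metrics such as SMATCH additionally search over node alignments, so this is a simplified illustration rather than the exact metric used:

```python
from typing import Set, Tuple

Edge = Tuple[str, str, str]  # (source node, edge label, target node)

def edge_f1(predicted: Set[Edge], gold: Set[Edge]) -> float:
    """Labelled edge-level F1 between a predicted and a gold semantic graph.

    Assumes node identifiers are aligned between the two graphs; full
    metrics such as SMATCH additionally search for the best alignment.
    """
    if not predicted or not gold:
        return 0.0
    matched = len(predicted & gold)        # edges present in both graphs
    precision = matched / len(predicted)   # fraction of predictions that are correct
    recall = matched / len(gold)           # fraction of gold edges recovered
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: the parser recovers two of three gold edges and adds one spurious edge,
# so precision = recall = 2/3 and F1 ≈ 0.667.
gold = {("eat", "ARG0", "cat"), ("eat", "ARG1", "fish"), ("fish", "mod", "raw")}
pred = {("eat", "ARG0", "cat"), ("eat", "ARG1", "fish"), ("cat", "mod", "raw")}
print(f"F1 = {edge_f1(pred, gold):.3f}")
```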