Institute for Web Science and Technologies · Universität Koblenz

The translation of natural language into first-order logic formulas using pre-trained language models

Jona Löffler

Fine-tuning pre-trained language models on specific tasks has been shown to be more effective than training a model on a given task from scratch. In this thesis, we fine-tune language models on the task of generating first-order logic formulas from input sentences in natural language. To achieve this, we create a training corpus of generated sentence-formula pairs, drawing on lexical databases and knowledge graphs. This approach may also prove useful for other NLP tasks, such as language understanding, automated reasoning, or question answering.
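To illustrate the kind of training data involved, the following is a minimal sketch of template-based generation of sentence-formula pairs. It is not the thesis code: the tiny word lists, the single "Every X is Y" template, and the function name are illustrative placeholders, and in the actual corpus the vocabulary and relations would come from lexical databases and knowledge graphs.

```python
import random

# Illustrative toy vocabulary; the thesis draws these from lexical
# databases and knowledge graphs rather than hard-coded lists.
NOUNS = ["dog", "cat", "bird"]
PROPERTIES = ["small", "loud", "fast"]

def make_pair():
    """Generate one (sentence, formula) training example from a
    universally quantified template."""
    noun = random.choice(NOUNS)
    prop = random.choice(PROPERTIES)
    sentence = f"Every {noun} is {prop}."
    # "Every dog is small." maps to the formula ∀x (Dog(x) → Small(x))
    formula = f"∀x ({noun.capitalize()}(x) → {prop.capitalize()}(x))"
    return sentence, formula

if __name__ == "__main__":
    for _ in range(3):
        sentence, formula = make_pair()
        print(f"{sentence}\t{formula}")
```

Pairs of this form can then serve as input-output examples for fine-tuning a sequence-to-sequence language model on the sentence-to-formula translation task.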


18.02.21 - 10:15
via BigBlueButton