Open debates comprise so many arguments that sound decision making exceeds the cognitive capabilities of both the interested public and responsible experts. New arguments are continuously contributed (challenge C1), are often incomplete (C2), and require knowledge about common facts or previous arguments to be understood (C3). This project aims to investigate computational methods that i) continuously improve their capability to recognize arguments in ongoing debates, ii) align incomplete arguments with previous arguments and enrich them with automatically acquired background knowledge, and iii) constantly extend semantic knowledge bases with the information required to understand arguments.
We achieve this by combining and advancing state-of-the-art algorithms from two research fields: argument mining and knowledge graph construction. To deal with concept drift in ongoing debates, we aim to advance argument mining methods with a knowledge-aware lifelong learning approach. We will investigate novel neural architectures for learning topic-invariant argument features and the relation between arguments and debate topics, inject semantic knowledge into the neural network using knowledge graph embeddings, and leverage self-training to continuously extend the training data. To cope with incomplete arguments, the retrieved arguments will be aligned with known arguments and enriched with background knowledge. We will link the entities of arguments to background knowledge by combining link discovery and keyword search. This linked background knowledge will be incorporated into incremental clustering methods that group similar arguments into argument clusters. Argumentative support and attack relations between these argument clusters will be determined using supervised learning. We aim to acquire the required background knowledge automatically by combining contemporary semantic knowledge bases containing encyclopedic and commonsense knowledge (BabelNet and ConceptNet) with focused knowledge extraction from unstructured Web corpora (Common Crawl). To integrate this background knowledge into machine learning models, we will adapt existing knowledge embedding techniques to support incremental training. Furthermore, this project focuses on developing novel annotation schemes and new benchmark corpora that allow us to evaluate our mining and alignment methods across topics, text types, and varying timestamps.
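The incremental grouping step described above can be pictured as an online clustering loop over argument embeddings. The following is an illustrative simplification, not the project's actual method: the function, the cosine-similarity criterion, and the threshold are all our own assumptions, and the embeddings are assumed to be pre-computed.

```python
import numpy as np

def incremental_cluster(embeddings, threshold=0.8):
    """Greedy online clustering: assign each argument embedding to the
    most similar existing centroid (cosine similarity), or open a new
    cluster if no centroid exceeds the threshold."""
    centroids, counts, assignments = [], [], []
    for v in embeddings:
        v = v / np.linalg.norm(v)          # unit-normalize so dot product = cosine
        best, best_sim = -1, -1.0
        for i, c in enumerate(centroids):
            sim = float(v @ (c / np.linalg.norm(c)))
            if sim > best_sim:
                best, best_sim = i, sim
        if best_sim >= threshold:
            counts[best] += 1
            centroids[best] += (v - centroids[best]) / counts[best]  # running mean
            assignments.append(best)
        else:                              # novel argument: open a new cluster
            centroids.append(v.copy())
            counts.append(1)
            assignments.append(len(centroids) - 1)
    return assignments, centroids

# toy example: two roughly orthogonal "topics"
embeddings = [np.array([1.0, 0.0]), np.array([0.99, 0.1]),
              np.array([0.0, 1.0]), np.array([0.05, 1.0])]
labels, cents = incremental_cluster(embeddings)
```

Because each new argument is compared only against cluster centroids, the loop can process a continuously growing debate stream without re-clustering from scratch.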
The outcome will be novel methods for obtaining an Open Argumentation Graph: semantically enriched groups of similar arguments from multiple textual sources, linked by support and attack relations. To ensure wide coverage of argumentation styles, we will apply our methods to different topics frequently discussed in online news and Twitter messages and conduct both component evaluations using annotated gold data and crowd-based post-hoc evaluations.
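Such a graph can be represented as argument clusters connected by typed edges. The following minimal sketch is only one possible representation; all class and field names are hypothetical and do not reflect the project's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentCluster:
    arguments: list                             # similar arguments from multiple sources
    entities: set = field(default_factory=set)  # links into background knowledge

@dataclass
class OpenArgumentationGraph:
    clusters: dict = field(default_factory=dict)   # cluster id -> ArgumentCluster
    relations: list = field(default_factory=list)  # (source id, target id, type)

    def add_relation(self, src, tgt, kind):
        assert kind in ("support", "attack")
        self.relations.append((src, tgt, kind))

    def attackers_of(self, cid):
        # clusters whose arguments attack the given cluster
        return [s for s, t, k in self.relations if t == cid and k == "attack"]

# toy debate: c1 supports c0, c2 attacks c0
g = OpenArgumentationGraph()
g.clusters["c0"] = ArgumentCluster(["We should adopt X."])
g.clusters["c1"] = ArgumentCluster(["X reduces costs."])
g.clusters["c2"] = ArgumentCluster(["X is risky."])
g.add_relation("c1", "c0", "support")
g.add_relation("c2", "c0", "attack")
```

Keeping relations at the cluster level (rather than between individual sentences) is what allows arguments from heterogeneous sources to be aligned before support and attack are determined.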
Source of funding
- DFG (Deutsche Forschungsgemeinschaft)
- Iryna Gurevych, TU Darmstadt
- Christian Stab, TU Darmstadt
I studied computer science and computational linguistics at Universität Erlangen-Nürnberg and at the University of Pennsylvania. I then worked in the computational linguistics research group at Universität Freiburg and received my Ph.D. in computer science from the Faculty of Technology in 1998. Afterwards I joined Universität Stuttgart, Institute IAT & Fraunhofer IAO, before moving on to Universität Karlsruhe (now KIT), where I progressed from project lead via lecturer to senior lecturer and completed my habilitation in 2002. In 2004 I became professor for databases and information systems at Universität Koblenz-Landau, where I founded the Institute for Web Science and Technologies (WeST) in 2009. In parallel, I have held a Chair for Web and Computer Science at the University of Southampton since March 2015.
Data represent the world on our computers. While the world is very intriguing, data can be quite boring if one does not know what they mean. I am interested in making data more meaningful in order to find interesting insights into the world outside.
How does meaning arise?
- One can model data and information. Conceptual models and ontologies are the foundations for knowledge networks that enable the computer to treat data in a meaningful way.
- Text and data mining as well as information extraction find meaningful patterns in data (e.g. using ontology learning or text clustering) as well as connections between data and their use in context (e.g. using smartphones). Hence, knowledge networks may be found in data.
- Humans communicate information. In order to understand what data and information mean, one has to understand social interactions. In the context of social networks, knowledge networks become meaningful for human consumption.
- Ultimately, meaning does not exist in a void. Data and information must be communicated to people who can make use of the insights they offer. Interaction between humans and computers must happen in a way that matches the meaning of data and information.
The World Wide Web is the largest information construct made by mankind to convey meaningful data. Web Science is the discipline that studies how networks of people and knowledge arise in the Web, how humans deal with them, and what consequences this has for all of us. The Web is a meaning machine that I want to understand through my research.
Where else might you find me?
I have been a research associate at WeST since October 2016. I am currently working in the DFG-funded research project CoRg, which aims at the construction of a cognitive computing system by modeling aspects of human reasoning such as emotions and human interactions. I am also involved in the DFG-funded research project EVOWIPE, where we develop methods to intentionally forget parts of an ontology.
My research interests include artificial intelligence, in particular commonsense reasoning, the Semantic Web, and logic (especially description logics).
Before joining the Institute for Web Science and Technologies, I was a member of the Artificial Intelligence working group at the University of Koblenz-Landau. In my dissertation, I developed methods for modifying the instance level of description logic knowledge bases and investigated precompilation techniques for description logic knowledge bases.
You can find more information on my homepage.
A list of my publications at WeST can be found below. For further publications, see e.g. http://dblp.uni-trier.de/pers/hd/s/Schon:Claudia.