Institute for Web Science and Technologies · Universität Koblenz - Landau

WeST presents four papers at the European Conference on Artificial Intelligence


At this year's European Conference on Artificial Intelligence (ECAI), four papers by WeST members have been accepted for publication. Organised by the European Coordinating Committee for Artificial Intelligence (ECCAI), ECAI is Europe's premier conference for scientists to present and discuss research in contemporary AI.

In their paper "Normalized Relevance Distance – A Stable Metric for Computing Semantic Relatedness over Reference Corpora", Christoph Schaefer, Daniel Hienert and Thomas Gottron propose a new method for measuring word relatedness. The method extends the Normalized Google Distance measure by replacing the hit counts of a term with its tf-idf values summed over all documents that contain the term.
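To illustrate the idea, here is a minimal Python sketch: the shape of the Normalized Google Distance formula is kept, but raw hit counts are replaced by tf-idf weights summed over the documents containing each term. The joint weight for a term pair (taking, per co-occurrence document, the smaller of the two tf-idf values) is an assumption made for this sketch, not necessarily the paper's exact definition.

```python
import math

def idf(term, docs):
    """Inverse document frequency of a term over a list of tokenized documents."""
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df) if df else 0.0

def weight(term, docs):
    """tf-idf of the term summed over all documents that contain it."""
    w = idf(term, docs)
    return sum(d.count(term) * w for d in docs if term in d)

def nrd(x, y, docs):
    """NGD-style distance with hit counts replaced by summed tf-idf weights.

    Joint weight: in each document containing both terms, take the smaller
    of the two tf-idf values (an illustrative assumption for this sketch).
    """
    wx, wy = weight(x, docs), weight(y, docs)
    wxy = sum(min(d.count(x) * idf(x, docs), d.count(y) * idf(y, docs))
              for d in docs if x in d and y in d)
    # normalise by the total weight mass of the corpus vocabulary
    total = sum(weight(t, docs) for t in {t for d in docs for t in d})
    lx, ly, lxy = math.log(wx), math.log(wy), math.log(wxy)
    return (max(lx, ly) - lxy) / (math.log(total) - min(lx, ly))
```

Because the joint weight never exceeds either single-term weight, the distance stays non-negative and symmetric, mirroring the behaviour of the original NGD formula.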

The paper "Coherence and Compatibility of Markov Logic Networks" by Matthias Thimm deals with assessing the modelling quality of Markov Logic Networks (MLNs). An MLN is a probabilistic knowledge representation approach that is able to aggregate even inconsistent information. However, it may also exhibit unintuitive inferences. This paper proposes an approach to measure the coherence of an MLN in order to assess whether unintuitive inferences may occur.

The paper "Consolidation of Probabilistic Knowledge Bases by Inconsistency Minimization" by Nico Potyka and Matthias Thimm is about repairing inconsistencies in probabilistic knowledge bases. Inconsistencies appear frequently in knowledge representation approaches and, in particular, in probabilistic approaches. Here, the detection of inconsistencies by hand is difficult. This paper presents an approach for altering a probabilistic knowledge base, represented using probabilistic conditional logic, to allow for the application of model-based reasoning techniques.

In their paper "Probabilistic Argumentation with Incomplete Information", Anthony Hunter and Matthias Thimm present an approach for completing incomplete information in probabilistic argumentation frameworks. A computational model of argumentation is a knowledge representation approach that focuses on the roles of arguments (justifications of some statement) and attacks between arguments (inconsistencies between statements). This paper extends previous work on a probabilistic extension of these models and discusses the problem of dealing with insufficiently specified information.
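As background on the framework side (not the paper's completion method itself), a brute-force Python sketch: arguments are assumed to be present independently with given probabilities, and in each induced subframework a target argument is checked against the grounded extension. The independence assumption and all names are illustrative.

```python
from itertools import combinations

def grounded(args, attacks):
    """Grounded extension: least fixpoint of the characteristic function."""
    ext = set()
    while True:
        # an argument is acceptable if each of its attackers is itself
        # attacked by some member of the current extension
        nxt = {a for a in args
               if all(any((d, b) in attacks for d in ext)
                      for b in args if (b, a) in attacks)}
        if nxt == ext:
            return ext
        ext = nxt

def acceptance_probability(target, args, attacks, prob):
    """P(target lies in the grounded extension), arguments drawn independently."""
    total = 0.0
    for r in range(len(args) + 1):
        for chosen in combinations(args, r):
            sub = set(chosen)
            p = 1.0
            for a in args:
                p *= prob[a] if a in sub else 1.0 - prob[a]
            sub_attacks = {(x, y) for (x, y) in attacks
                           if x in sub and y in sub}
            if target in sub and target in grounded(sub, sub_attacks):
                total += p
    return total
```

For example, if argument `a` is always present but its attacker `b` appears only half the time, `a` is accepted exactly in the subframeworks where `b` is absent. Enumerating all 2^n subframeworks is exponential, which is why such models are usually treated analytically rather than by enumeration.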


Contact for inquiries: