Holger Heinz

Ontologies are not static and can change over time. New knowledge is added and existing knowledge is removed. At a later time it can happen that this knowledge must be added back into the ontology, which can turn out to be expensive if no record of it has been kept. It is therefore necessary to have tools at hand that support this process. Since Description Logics provide the formal ground for reasoning in ontologies, there is a need for well-defined formal methods that can be used to express these changes. In this talk I give an overview of the current state of my master's thesis, in which I am concerned with the definition of an operator for the retraction and recovery of information in the context of OWL 2 EL ontologies.

23.11.2017 - 10:15 h
Min Ke

Conventional browsers with mouse and keyboard input mechanisms are not adequate to provide means of interaction to motor-disabled people, thus limiting them from accessing information on the Internet. There is a need for a web browser that they can interact with through assistive technologies. The evolution of eye-tracking systems and voice-command systems is playing an important role in bridging this gap and helping people with disabilities access content on systems equipped with such hardware. The GazeTheWeb (GTW) browser is a hands-free browser fully controlled by an eye tracker. However, GTW faces two problems: the anatomy of the eye yields a positional tolerance, and eye movements are not always consciously controlled.

23.11.2017 - 10:15 h
Tjitze Rienstra

In recent years, probabilistic programming languages (PPLs) have become a popular tool in the field of Bayesian Machine Learning. Roughly speaking, PPLs allow one to specify and reason about probabilistic models, by writing programs that include probabilistic random choice constructs and observation statements.
In this talk I introduce RankPL, a qualitative variant of a PPL. RankPL can be used to reason about uncertainty that is expressible by distinguishing normal from exceptional events. This kind of uncertainty often appears in commonsense reasoning problems, where precise probabilities are unknown. Semantically, RankPL is based on a qualitative abstraction of probability theory called ranking theory.
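The core idea of ranking theory can be illustrated with a few lines of Python. Ranks are non-negative degrees of surprise (0 = normal, higher = more exceptional) that play the role probabilities play in a PPL; conditioning shifts ranks so the most normal world consistent with an observation gets rank 0. The scenario and names below are illustrative only and do not use RankPL syntax.

```python
# Toy illustration of ranking theory, the semantic basis of RankPL.
# A ranking function assigns each world a degree of surprise.

def rank(worlds, prop):
    """Rank of a proposition = minimum rank of the worlds satisfying it."""
    ranks = [r for w, r in worlds.items() if prop(w)]
    return min(ranks) if ranks else float("inf")

def condition(worlds, prop):
    """Conditioning shifts ranks so the most normal world satisfying
    the observation gets rank 0 (the analogue of Bayesian conditioning)."""
    base = rank(worlds, prop)
    return {w: r - base for w, r in worlds.items() if prop(w)}

# Worlds are (weather, grass) pairs; rain is exceptional (rank 1),
# and wet grass without rain (e.g. a sprinkler) is more exceptional.
worlds = {
    ("no_rain", "dry"): 0,
    ("rain", "wet"): 1,
    ("no_rain", "wet"): 2,
    ("rain", "dry"): 3,
}

# A priori, rain is surprising to degree 1.
print(rank(worlds, lambda w: w[0] == "rain"))       # 1
# After observing wet grass, rain becomes the normal explanation.
posterior = condition(worlds, lambda w: w[1] == "wet")
print(rank(posterior, lambda w: w[0] == "rain"))    # 0
```

This mirrors how a qualitative PPL draws plausible conclusions without precise probabilities: only the relative order of surprise between worlds matters.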

16.11.2017 - 10:15 h

This thesis aims at exploring the potential of heuristic search algorithms for Abstract Argumentation. To this end, specific backtracking search algorithms that support the use of heuristics are presented. Several heuristics, implemented as part of this thesis, are then defined. These are experimentally compared with each other and with other approaches to Abstract Argumentation. For different problems in Abstract Argumentation a suitable heuristic is suggested; for example, heuristics that analyse paths inside the graph structure of an abstract argumentation framework have proven useful.
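To illustrate the kind of algorithm involved, here is a minimal Python sketch of backtracking search for stable extensions (conflict-free sets that attack every outside argument). The branching order is guided by a simple heuristic; descending out-degree is used here as a stand-in for the path-based heuristics studied in the thesis, not as the thesis's actual method.

```python
def stable_extensions(args, attacks):
    """Backtracking enumeration of stable extensions of an abstract
    argumentation framework (args, attacks). Arguments are processed
    in a heuristic order (here: descending number of outgoing attacks)."""
    atk = set(attacks)
    attackers = {a: {x for (x, y) in atk if y == a} for a in args}
    targets = {a: {y for (x, y) in atk if x == a} for a in args}
    order = sorted(args, key=lambda a: -len(targets[a]))  # heuristic ordering
    results = []

    def backtrack(i, chosen):
        if i == len(order):
            # Stable: every argument outside the set is attacked by it.
            if all(a in chosen or attackers[a] & chosen for a in args):
                results.append(frozenset(chosen))
            return
        a = order[i]
        # Branch 1: include a if the set stays conflict-free.
        if not (attackers[a] & chosen) and not (targets[a] & chosen):
            backtrack(i + 1, chosen | {a})
        # Branch 2: exclude a.
        backtrack(i + 1, chosen)

    backtrack(0, set())
    return results

# a and b attack each other; b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(stable_extensions(args, attacks))  # {b} and {a, c}
```

A good ordering heuristic prunes large parts of the search tree early, which is exactly what the experimental comparison in the thesis measures.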

09.11.2017 - 10:15 h

The task of updating ontologies has gained increasing interest, which, among other things, led to the introduction of SPARQL Update. OntoClean is a methodology for analyzing ontologies; it can be used to justify certain modeling decisions and to identify and explain common modeling mistakes. We introduce first ideas on how the notions behind OntoClean can be used to decide how to implement an update. We provide so-called OntoClean-guided semantics for SPARQL Update and argue that they often lead to the result intended by the user.

26.10.2017 - 10:15 h
Norbert Härig

Fake news is false and sometimes sensationalist information presented as fact, and it often spreads very fast on the internet via social networks like Facebook or Twitter. The ability to identify such fake news may diminish the impact it can have. For this purpose fake news detection can be used: the term describes the process of returning a label denoting whether a given input consists of fake news or authentic news. In this work we propose two main contributions. The first is a labeled dataset of tweets containing fake news and authentic news. The second is a web tool that can be used to identify fake news and verify authentic tweets based on machine learning algorithms and Twitter metadata.

19.10.2017 - 10:15 h
Leo Schäfer, Philipp Seifer

The purpose of this project was to develop a method to detect fake news on Twitter. To this end, a dataset had to be collected and labelled that could be used to train a machine-learning algorithm with a set of features. The resulting classifier was used to detect fake tweets on Twitter via a browser plugin.

19.10.2017 - 10:15 h

In recent years, scalable RDF stores in the cloud have been developed. Compared to RDF stores running on a single computer, distribution increases complexity. In order to gain a deeper understanding of how, e.g., data placement or distributed query execution strategies affect performance, we have developed the modular glass-box profiling system Koral. With its help, it is possible to test the behaviour of existing or newly created strategies tackling the challenges caused by distribution in a realistic distributed RDF store. The design goal of Koral is that only the evaluated component needs to be exchanged, while adaptations to other components are kept minimal.

12.10.2017 - 10:15 h

With the release of SPARQL 1.1 in 2013, property paths were introduced, which make it possible to write queries that do not explicitly fix the length of the path traversed within an RDF graph. Existing RDF stores were adapted to support property paths. In order to give insight into how well current implementations of property paths in RDF stores work, we introduce a benchmark for evaluating property path support. To support both realistic RDF graphs and arbitrarily scalable synthetic RDF graphs as benchmark datasets, a query generator was developed that creates queries from query templates. Furthermore, we present the results of our benchmark for four RDF stores frequently used in academia and industry.
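What a property path computes can be sketched in pure Python. A one-or-more path such as `{ :alice :knows+ ?x }` asks for all nodes reachable from `:alice` via one or more `:knows` edges, which is a transitive closure the store must evaluate without a fixed path length. The tiny graph and predicate names below are invented for illustration.

```python
from collections import deque

def path_plus(edges, start):
    """Evaluate the SPARQL '+' (one-or-more) property path operator:
    all nodes reachable from `start` via one or more edges of a single
    predicate, computed as a BFS over an in-memory edge set."""
    adj = {}
    for s, o in edges:
        adj.setdefault(s, set()).add(o)
    seen, queue = set(), deque(adj.get(start, ()))
    while queue:
        n = queue.popleft()
        if n not in seen:        # guards against cycles in the graph
            seen.add(n)
            queue.extend(adj.get(n, ()))
    return seen

# :alice :knows :bob .  :bob :knows :carol .
edges = [(":alice", ":bob"), (":bob", ":carol")]
# SELECT ?x WHERE { :alice :knows+ ?x }
print(sorted(path_plus(edges, ":alice")))  # [':bob', ':carol']
```

Benchmarking property paths is hard precisely because this closure can touch an unbounded portion of the graph, which is why the query generator scales both the graphs and the path queries.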

05.10.2017 - 10:15 h
Alex Baier

The taxonomy is a fundamental component of an ontology. In a taxonomy, classes are arranged hierarchically, linked by a subclass-of relation. A complete taxonomy has exactly one most general class, called the root class. In Wikidata, the root class is the class "entity". The root class is unique in that it is the only class with no superclasses in the taxonomy. However, Wikidata's taxonomy is incomplete in this regard: orphan classes are classes that are not the root class but still have no superclasses. Orphan classes thus violate the uniqueness of the root class.
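The definition of an orphan class translates directly into code. Given a set of classes and subclass-of pairs, an orphan is any non-root class that appears as a subclass in no pair. The example classes below are made up; in Wikidata the relation would be P279 (subclass of).

```python
def orphan_classes(classes, subclass_of, root):
    """Return the classes that violate root uniqueness: every class
    that is not the designated root yet has no superclass.
    `subclass_of` is a set of (subclass, superclass) pairs."""
    has_super = {sub for sub, sup in subclass_of}
    return {c for c in classes if c != root and c not in has_super}

classes = {"entity", "animal", "dog", "widget"}
subclass_of = {("animal", "entity"), ("dog", "animal")}
print(orphan_classes(classes, subclass_of, root="entity"))  # {'widget'}
```

Repairing the taxonomy then amounts to finding a suitable superclass for each detected orphan.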

28.09.2017 - 10:15 h