The aim of the MAMEM project is to include motor-impaired people in the digital world. Both eye-tracking devices and EEG recorders are utilized to establish a non-intrusive communication channel between human and computer. Our first clinical trials this year demonstrated the feasibility of the developed system. In the second trial phase, scheduled for next spring, thirty participants will have the system installed in their homes for one month. We will measure how much they use the system and how their social activity is affected. This talk gives an overview of the system, explains the engineering challenges we face, and presents our research interests.
Like other systems for automatic reasoning, argumentation approaches can suffer from “opacity.” We explore one of the few mixed approaches that explain, in natural language, the structure of arguments in order to convey an understanding of their acceptability status. In particular, we summarise the results described in , in which we assessed, by means of an experiment, the claim that computational models of argumentation support complex decision-making activities in part due to the close alignment between their semantics and human intuition. The results show a correspondence, in the majority of cases, between the acceptability of arguments by human subjects and the justification status prescribed by the formal theory.
Ontologies are not static and change over time: new knowledge is added and existing knowledge is removed. Later, it may happen that removed knowledge must be added back into the ontology, which can turn out to be expensive if no record of it has been kept. It is therefore necessary to have tools at hand that support this process. Since Description Logics provide the formal ground for reasoning in ontologies, there is a need for well-defined formal methods to express these changes. In this talk I give an overview of the current state of my master's thesis, in which I am concerned with the definition of an operator for the retraction and recovery of information in the context of OWL 2 EL ontologies.
Conventional browsers with mouse and keyboard input are not adequate for motor-disabled people, preventing them from accessing information on the Internet. There is a need for a web browser that they can interact with through assistive technologies. The evolution of eye-tracking and voice-command systems is playing an important role in bridging this gap and helping people with disabilities access content on systems equipped with such hardware. The GazeTheWeb (GTW) browser is a hands-free browser for disabled people, controlled entirely by an eye tracker. However, GTW faces two problems: the anatomy of the eye yields only a limited positional accuracy, and gaze is not always consciously controlled.
In recent years, probabilistic programming languages (PPLs) have become a popular tool in the field of Bayesian machine learning. Roughly speaking, PPLs allow one to specify and reason about probabilistic models by writing programs that include random choice constructs and observation statements.
In this talk I introduce RankPL, a qualitative variant of a PPL. RankPL can be used to reason about uncertainty expressed by distinguishing normal from exceptional events. This kind of uncertainty often appears in commonsense reasoning problems, where precise probabilities are unknown. Semantically, RankPL is based on ranking theory, a qualitative abstraction of probability theory.
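RankPL's own syntax is not reproduced here, but the ranking-theoretic semantics it rests on can be illustrated with a minimal Python sketch (a hypothetical two-component diagnosis example, not taken from the talk): worlds carry non-negative integer "surprise" ranks instead of probabilities, and conditioning subtracts ranks where probability theory would divide.

```python
from itertools import product

# Ranking theory: each world gets a non-negative integer rank
# (0 = normal, higher = more surprising), a qualitative analogue of -log P.

def rank(event, kappa):
    """Rank of an event = minimum rank over the worlds satisfying it."""
    ranks = [k for w, k in kappa.items() if event(w)]
    return min(ranks) if ranks else float("inf")

def conditional_rank(event, given, kappa):
    """kappa(B | A) = kappa(A and B) - kappa(A): conditioning by subtraction."""
    return rank(lambda w: event(w) and given(w), kappa) - rank(given, kappa)

# Hypothetical model: two components; each normally works (rank 0) and
# exceptionally fails (rank 1); ranks of independent exceptions add up.
kappa = {(f1, f2): f1 + f2 for f1, f2 in product((0, 1), repeat=2)}
wrong_output = lambda w: w[0] or w[1]  # output is wrong iff some component fails

print(rank(wrong_output, kappa))                                       # 1
print(conditional_rank(lambda w: w[0], wrong_output, kappa))           # 0
print(conditional_rank(lambda w: w[0] and w[1], wrong_output, kappa))  # 1
```

Observing a wrong output thus makes each single failure unsurprising (conditional rank 0), while a double failure remains exceptional (rank 1), mirroring diagnostic commonsense reasoning without any numeric probabilities.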
This thesis aims at exploring the potential of heuristic search algorithms for Abstract Argumentation. To this end, backtracking search algorithms that support the use of heuristics are presented. Several heuristics, implemented as part of this thesis, are then defined. These are experimentally compared with each other and with other approaches to Abstract Argumentation, and a suitable heuristic is suggested for each of several problems in Abstract Argumentation. For example, heuristics that analyse paths in the graph structure of an abstract argumentation framework have proven useful.
The task of updating ontologies has gained increasing interest, which, among other things, led to the introduction of SPARQL Update. OntoClean is a methodology for analyzing ontologies that can be used to justify certain modeling decisions and to identify and explain common modeling mistakes. We introduce first ideas on how the notions behind OntoClean can be used to decide how to implement an update. We provide so-called OntoClean-guided semantics for SPARQL Update and argue that they often lead to the result intended by the user.
Fake news is false and sometimes sensationalist information presented as fact; it often spreads quickly on the internet via social networks such as Facebook and Twitter. The ability to identify such fake news can diminish its impact. This is the goal of fake news detection: the process of returning a label denoting whether a given input consists of fake news or authentic news. In this work we make two main contributions. The first is a labeled dataset of tweets containing fake news and authentic news. The second is a web tool that can be used to identify fake news and verify authentic tweets based on machine-learning algorithms and Twitter metadata.
The purpose of this project was to develop a method to detect fake news on Twitter. To this end, a dataset had to be collected and labelled that could be used to train a machine-learning algorithm with a set of features. The resulting classifier was used to detect fake tweets on Twitter via a browser plugin.
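The core of such a pipeline can be as simple as a bag-of-words naive Bayes classifier; the following sketch (with made-up toy data, not the project's actual dataset, features, or model) shows the idea in pure Python:

```python
import math
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts
    and per-label document counts (for the class priors)."""
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in samples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Multinomial naive Bayes with Laplace smoothing, in log space."""
    vocab = set(counts["fake"]) | set(counts["real"])
    best, best_score = None, -math.inf
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        n = sum(counts[label].values())
        for w in text.lower().split():
            # +1 Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Toy training data (hypothetical, for illustration only)
data = [("shocking secret cure doctors hate", "fake"),
        ("miracle cure shocking truth", "fake"),
        ("senate passes budget bill", "real"),
        ("study published in journal", "real")]
counts, totals = train(data)
print(classify("shocking miracle cure", counts, totals))  # fake
```

A real system, as described above, would also fold in Twitter metadata (account age, follower counts, retweet patterns) as features alongside the text.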
In recent years, scalable RDF stores in the cloud have been developed. Distribution increases the complexity of an RDF store compared to one running on a single computer. In order to gain a deeper understanding of how, e.g., the data placement or the distributed query execution strategy affects performance, we have developed Koral, a modular glass-box profiling system. With its help, it is possible to test, in a realistic distributed RDF store, the behaviour of existing or newly created strategies tackling the challenges caused by distribution. The design goal of Koral is that only the evaluated component needs to be exchanged, while adaptations to other components remain minimal.