GazeTheWeb has been a productive outcome of the MAMEM project; so far it has received great visibility in the research and technical communities. The challenge now, however, is to transfer the technology to the end users who would benefit from such novel applications. The MAMEM exploitation plan is also centered on this goal, as GTW has been identified as the main exploitable asset of the project. The objective of this talk is to present the commercial use cases of GTW, an initial analysis of the accessible-technology market, potential customers, and some proposals for a business strategy. I look forward to a relevant discussion with WeST colleagues about more innovative use cases, and to hearing their ideas and experience with effective business and marketing plans.

14.12.2017 - 10:15 h
Hanadi Tamimi

This talk presents an approach to enhance the screenshot visualizations used in eye-tracking Web usability studies by linking gaze data to the fixed elements it targets on scrollable Web content. The enhancements even appeared to outperform video visualizations in terms of time consumption and analysis satisfaction.
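The abstract does not describe the mapping itself, but the core idea of anchoring gaze data to scrollable content can be sketched briefly. A minimal illustration, assuming gaze samples arrive in viewport coordinates together with a scroll offset recorded at the same timestamp (all function and element names are invented for illustration):

```python
def gaze_to_document(gaze_x, gaze_y, scroll_x, scroll_y):
    """Map a gaze sample from viewport coordinates to document coordinates
    by adding the page scroll offset recorded at the same timestamp."""
    return gaze_x + scroll_x, gaze_y + scroll_y

def assign_to_element(point, elements):
    """Return the first element whose document-space bounding box contains
    the point, or None. `elements` is a list of (name, x, y, width, height)."""
    px, py = point
    for name, x, y, w, h in elements:
        if x <= px < x + w and y <= py < y + h:
            return name
    return None
```

With this mapping, a fixation at viewport position (100, 50) on a page scrolled down 600 pixels lands on the element occupying document coordinates around y = 650, regardless of how far the user has scrolled.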

11.12.2017 - 16:30 h

The aim of the MAMEM project is to include motor-impaired people in the digital world. Both eye-tracking devices and EEG recorders are utilized to establish a non-intrusive communication channel between human and computer. Our first clinical trials this year showed the feasibility of the developed system. In the second trial phase, scheduled for spring next year, thirty participants will have the system installed at their homes for one month. We will measure their usage of the system and how their social activity is affected. This talk will give you an overview of the system, explain the engineering challenges we face, and present our research interests.

07.12.2017 - 10:15 h
Dr. Federico Cerutti

Like other systems for automatic reasoning, argumentation approaches can suffer from “opacity.” We explore one of the few mixed approaches that explain, in natural language, the structure of arguments to ensure an understanding of their acceptability status. In particular, we will summarise the results described in [1], in which we assessed, by means of an experiment, the claim that computational models of argumentation provide support for complex decision-making activities in part due to the close alignment between their semantics and human intuition. Results show a correspondence between the acceptability of arguments by human subjects and the justification status prescribed by the formal theory in the majority of cases.

30.11.2017 - 10:15 h
Min Ke

Conventional browsers with mouse and keyboard input are not adequate for motor-disabled people and thus limit their access to information on the Internet. There is a need for a web browser they can interact with through assistive technologies. The evolution of eye-tracking and voice-command systems plays an important role in bridging this gap, helping people with disabilities access content on systems equipped with such hardware. The GazeTheWeb (GTW) browser is a hands-free browser for disabled people, controlled entirely by an eye tracker. However, GTW faces two problems: the structure of the eye yields a positional tolerance, and gaze input must be consciously controlled.

23.11.2017 - 10:15 h
Holger Heinz

Ontologies are not static and can change over time: new knowledge is added and existing knowledge is removed. Later, it can happen that removed knowledge must be added back into the ontology, which can be expensive if no record of it has been kept. It is therefore necessary to have tools at hand that support this process. Since Description Logics provide the formal ground for reasoning in ontologies, there is a need for well-defined formal methods to express these changes. In this talk I give an overview of the current state of my master's thesis, in which I am concerned with the definition of an operator for the retraction and recovery of information in the context of OWL 2 EL ontologies.

23.11.2017 - 10:15 h
Tjitze Rienstra

In recent years, probabilistic programming languages (PPLs) have become a popular tool in the field of Bayesian machine learning. Roughly speaking, PPLs allow one to specify and reason about probabilistic models by writing programs that include random choice constructs and observation statements.
In this talk I introduce a qualitative variant of a PPL called RankPL. RankPL can be used to reason about uncertainty expressible by distinguishing normal from exceptional events. This kind of uncertainty often appears in commonsense reasoning problems, where precise probabilities are unknown. Semantically, RankPL is based on a qualitative abstraction of probability theory called ranking theory.
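RankPL's actual syntax and semantics are richer than this, but the ranking-theoretic core can be illustrated with a small sketch. Ranks here play a role analogous to (negative log-) probabilities: 0 means "normal," higher means "more surprising," ranks add across independent choices, and observation discards violating worlds and shifts ranks so the least surprising survivor has rank 0. All names below are invented for illustration:

```python
from itertools import product

def ranked_choice(normal, exceptional, penalty=1):
    """A two-way ranked choice: the normal value has rank 0, the
    exceptional value has the given surprise rank (penalty)."""
    return [(normal, 0), (exceptional, penalty)]

def joint(*choices):
    """Combine independent ranked choices; ranks add up
    (analogous to multiplying probabilities)."""
    worlds = []
    for combo in product(*choices):
        values = tuple(v for v, _ in combo)
        rank = sum(r for _, r in combo)
        worlds.append((values, rank))
    return worlds

def observe(worlds, condition):
    """Conditioning in ranking theory: discard worlds violating the
    condition and shift ranks so the best survivor has rank 0."""
    kept = [(w, r) for w, r in worlds if condition(w)]
    if not kept:
        raise ValueError("observation is impossible (infinite rank)")
    m = min(r for _, r in kept)
    return [(w, r - m) for w, r in kept]
```

For example, conditioning two normally-false flags on "at least one is true" leaves both single-exception worlds at rank 0 and the double-exception world at rank 1, capturing the commonsense reading that exactly one exceptional event is the least surprising explanation.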

16.11.2017 - 10:15 h

This thesis aims at exploring the potential of heuristic search algorithms for Abstract Argumentation. To this end, specific backtracking search algorithms that support the use of heuristics are presented. Thereafter, several heuristics that have been implemented as part of this thesis are defined. These are then compared experimentally with each other and with other approaches to Abstract Argumentation, and for different problems in Abstract Argumentation a suitable heuristic is suggested. For example, heuristics that analyse paths in the graph structure of an abstract argumentation framework have proven useful.
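To illustrate the general idea (this is not the thesis implementation), a backtracking search for stable extensions with a simple degree-based branching heuristic might look as follows; the framework encoding and the particular heuristic are assumptions made for the sketch:

```python
def stable_extensions(args, attacks):
    """Enumerate stable extensions of an abstract argumentation framework
    (args, attacks) via backtracking. A stable extension is a conflict-free
    set that attacks every argument outside it."""
    attackers = {a: set() for a in args}
    for x, y in attacks:
        attackers[y].add(x)
    # Heuristic: branch on high-degree arguments first (one plausible
    # graph-based ordering; other heuristics can be plugged in here).
    degree = {a: len(attackers[a]) + sum(a in attackers[b] for b in args)
              for a in args}
    order = sorted(args, key=lambda a: -degree[a])
    results = []

    def backtrack(i, inset):
        if i == len(order):
            # Stability check: every excluded argument is attacked by inset.
            if all(a in inset or attackers[a] & inset for a in args):
                results.append(frozenset(inset))
            return
        a = order[i]
        # Branch 1: include a, pruning branches that break conflict-freeness.
        if not attackers[a] & inset and not any(a in attackers[b] for b in inset):
            backtrack(i + 1, inset | {a})
        # Branch 2: exclude a.
        backtrack(i + 1, inset)

    backtrack(0, set())
    return results
```

On the chain a → b → c this yields the single stable extension {a, c}; on a mutual attack between a and b it yields {a} and {b}. The heuristic only affects the order in which branches are explored (and thus how quickly a first extension is found), not the set of solutions.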

09.11.2017 - 10:15 h

The task of updating ontologies has gained increasing interest, which, among other things, has led to the introduction of SPARQL Update. OntoClean is a methodology for analyzing ontologies that can be used to justify certain modeling decisions and to identify and explain common modeling mistakes. We introduce first ideas on how the notions behind OntoClean can be used to decide how an update should be implemented. We provide so-called OntoClean-guided semantics for SPARQL Update and argue that these often lead to the result intended by the user.
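SPARQL Update itself only specifies which triples to delete or insert; OntoClean-style meta-properties (such as rigidity) can help disambiguate what the user actually intends. A hypothetical update, with namespaces and data invented purely for illustration:

```sparql
PREFIX ex: <http://example.org/>
# Move ex:alice from ex:Student to ex:Employee.
# Under a naive semantics only these two triples change; an
# OntoClean-guided semantics could additionally take into account
# that ex:Student is a non-rigid (role-like) class, so the change
# reflects a role transition rather than a change of identity.
DELETE { ex:alice a ex:Student . }
INSERT { ex:alice a ex:Employee . }
WHERE  { ex:alice a ex:Student . }
```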

26.10.2017 - 10:15 h
Leo Schäfer, Philipp Seifer

The purpose of this project was to develop a method to detect fake news on Twitter. To this end, a dataset had to be collected and labelled so that it could be used to train a machine-learning algorithm on a set of features. The resulting classifier was used to detect fake tweets on Twitter via a browser plugin.
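The abstract does not name the algorithm or the features used; as a toy stand-in, a bag-of-words naive Bayes classifier over tweet text illustrates the train-then-classify pipeline such a plugin would rely on (data, labels, and function names are invented for illustration):

```python
from collections import Counter
import math

def train_nb(samples):
    """Train a bag-of-words naive Bayes classifier.
    `samples` is a list of (text, label) pairs, e.g. labels 'fake'/'real'."""
    word_counts = {}          # label -> Counter of word frequencies
    label_counts = Counter()  # label -> number of training samples
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.lower().split():
            wc[w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return the label with the highest posterior log-score."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero out the score.
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best
```

A real system would use a richer feature set (user metadata, propagation patterns, link domains) and a properly labelled corpus, but the overall train/classify structure stays the same.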

19.10.2017 - 10:15 h