The problem of finding commonalities in data occurs in many areas. The formal notion that precisely characterizes such commonalities is the least general generalization (LGG) of descriptions. This presentation revisits least general generalizations in RDF graphs and in the conjunctive fragment of SPARQL with respect to RDFS background knowledge.
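The core idea behind an LGG can be illustrated by anti-unification: wherever two descriptions disagree, the generalization introduces a shared variable, with the same pair of terms always mapped to the same variable. Below is a minimal sketch for single RDF triples only, ignoring RDFS background knowledge and full graph/query structure; the tuple representation and `?v`-style variable names are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: least general generalization (anti-unification) of
# two RDF triples. Equal terms are kept; differing term pairs are
# replaced by a shared fresh variable (same pair -> same variable).

def lgg_term(a, b, fresh):
    """Generalize two RDF terms: keep them if equal, else a shared variable."""
    if a == b:
        return a
    if (a, b) not in fresh:
        fresh[(a, b)] = f"?v{len(fresh)}"
    return fresh[(a, b)]

def lgg_triple(t1, t2):
    fresh = {}  # maps (term1, term2) pairs to variable names
    return tuple(lgg_term(a, b, fresh) for a, b in zip(t1, t2))

t1 = (":alice", ":worksAt", ":uni_koblenz")
t2 = (":bob",   ":worksAt", ":uni_koblenz")
print(lgg_triple(t1, t2))  # ('?v0', ':worksAt', ':uni_koblenz')
```

The resulting pattern is the most specific triple pattern that both inputs instantiate; extending this to whole graphs and to conjunctive SPARQL queries under RDFS entailment is exactly what the presented work addresses.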
The talk is based on the ISWC 2017 paper "Learning Commonalities in SPARQL" as well as the technical report "Learning Commonalities in RDF and SPARQL", by S. El Hassad, François Goasdoué and Hélène Jaudoin.
The use of argumentation techniques makes it possible to obtain classifiers that are, by design, able to explain their decisions, and therefore addresses the recent need for Explainable AI: classifications are accompanied by a dialectical analysis showing why arguments for the conclusion are preferred to counterarguments. Argumentation techniques in machine learning also allow the easy integration of additional expert knowledge in the form of arguments. In this talk, I give a brief overview of ongoing research in applying formal argumentation techniques to classification problems.
On the one hand, demographic change and the shortage of medical staff (especially in rural areas) pose critical challenges to healthcare systems in industrialised countries. On the other hand, the digitalisation of our society is progressing at tremendous speed, so that more and more health-related data are available in digital form. For instance, people wear intelligent glasses and/or smart watches, provide digital data with standardised medical devices (e.g., blood pressure and blood sugar meters following the ISO/IEEE 11073 standard), and/or deliver personal behavioural data via their smartphones.
Voice user interfaces (VUIs) have evolved from simple interactive voice response systems into a new entity in our homes and everyday life. They have crossed over from assistive technologies for the visually impaired to a mainstream technology used in homes and on our mobile devices. VUIs no longer merely accept commands but are designed to respond empathetically to our queries and engage in conversation with us. This presentation looks into the development, challenges, concerns and design of voice user interfaces, and how they can be used in unison with other input/output mechanisms to become multimodal in nature.
Bitcoin and blockchain have become well known in recent months through their presence in the media. In this talk, we will look at the technical background of blockchain technology, using Bitcoin as an example, and take a brief look at the currently developing alternative approaches.
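The central technical idea the talk builds on can be sketched in a few lines: each block stores the cryptographic hash of its predecessor, so tampering with any block invalidates every later link in the chain. The sketch below is a deliberately minimal illustration of that chaining idea (no proof-of-work, transactions, or networking); the field names are assumptions for the example.

```python
# Minimal hash-chain sketch of the blockchain idea: each block records
# the SHA-256 hash of the previous block, so modifying an earlier block
# breaks validation of all subsequent links.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

def is_valid(chain):
    """Check that every block references the actual hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("alice->bob: 1 BTC", block_hash(genesis))]
print(is_valid(chain))         # True
chain[0]["data"] = "tampered"  # rewrite history...
print(is_valid(chain))         # False: the chain detects the change
```

In Bitcoin, the analogous link is the previous-block hash in each block header, combined with proof-of-work to make rewriting history computationally expensive.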
GazeTheWeb has been a productive outcome of the MAMEM project; so far it has received great visibility in the research and technical communities. However, the challenge is to transfer the technology to the end users who would benefit from such novel applications. The MAMEM exploitation plan is also centred on this goal, as GTW has been identified as the main exploitable asset of the project. The objective of this talk is to present the commercial use cases of GTW, an initial analysis of the accessible-technology market, potential customers, and some proposals for a business strategy. I look forward to a relevant discussion with WeST colleagues about further innovative use cases, and to hearing their ideas and experience on effective business and marketing plans.
This talk presents an approach to enhancing the screenshot visualizations of eye-tracking Web usability studies by linking gaze data to the intended fixed elements on scrollable Web content. The enhancements even appeared to outperform video visualizations in terms of time consumption and analysis satisfaction.
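The basic mapping involved can be illustrated with a minimal sketch: viewport-relative gaze coordinates are translated into document-relative coordinates using the scroll offset recorded at fixation time, so fixations on the same element aggregate at one position on a full-page screenshot. The function name and the simple coordinate model (pure scroll translation, no dynamic or fixed-position elements) are assumptions for illustration, not the study's actual implementation.

```python
# Minimal sketch: map a gaze sample from viewport coordinates to
# document (full-page screenshot) coordinates via the scroll offset
# that was active when the sample was recorded.

def to_document_coords(gaze_x, gaze_y, scroll_x, scroll_y):
    """Translate a viewport-relative gaze point into document coordinates."""
    return gaze_x + scroll_x, gaze_y + scroll_y

# Two fixations on the same element, viewed at different scroll
# positions, land on the same point of the full-page screenshot:
print(to_document_coords(100, 50, 0, 400))   # (100, 450)
print(to_document_coords(100, 250, 0, 200))  # (100, 450)
```

Without this correction, the same element would produce scattered gaze clusters at different screen positions, which is one reason plain screenshot visualizations of scrollable pages are hard to analyze.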
The aim of the MAMEM project is to include motor-impaired people in the digital world. Both eye-tracking devices and EEG recorders are utilized to establish a non-intrusive communication channel between human and computer. Our first clinical trials this year showed the feasibility of the developed system. In the second trial phase, which is scheduled for spring next year, thirty participants will have the system installed at their homes for one month. We will measure their usage of the system and how their social activity is affected. This talk will give you an overview of the system, explain the engineering challenges we face and present our research interests.
Like other systems for automatic reasoning, argumentation approaches can suffer from “opacity.” We explore one of the few mixed approaches that explain, in natural language, the structure of arguments to ensure an understanding of their acceptability status. In particular, we will summarise the results described in , in which we assessed, by means of an experiment, the claim that computational models of argumentation provide support for complex decision-making activities, in part due to the close alignment between their semantics and human intuition. The results show a correspondence between the acceptability of arguments by human subjects and the justification status prescribed by the formal theory in the majority of cases.
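The formal side of the comparison rests on acceptability semantics for abstract argumentation frameworks in the sense of Dung. As a minimal sketch, assuming standard Dung frameworks (a set of arguments plus an attack relation), the grounded extension can be computed by iterating the characteristic function from the empty set: an argument is accepted once all of its attackers are themselves attacked by already-accepted arguments. The example framework is illustrative, not taken from the experiment.

```python
# Minimal sketch: grounded extension of a Dung-style abstract
# argumentation framework, computed as the least fixpoint of the
# characteristic function F(S) = {a | every attacker of a is attacked
# by some member of S}.

def grounded_extension(arguments, attacks):
    """arguments: set of names; attacks: set of (attacker, target) pairs."""
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for (b, t) in attacks if t == a)
        }
        if defended == extension:   # fixpoint reached
            return extension
        extension = defended

args = {"A", "B", "C"}            # A attacks B, B attacks C
atts = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, atts)))  # ['A', 'C']
```

Here A is unattacked and therefore accepted, B is defeated by A, and C is reinstated because its only attacker B is defeated; it is this kind of justification status that the experiment compared against human judgments of acceptability.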
Ontologies are not static and can change over time: new knowledge is added and existing knowledge is removed. At a later time, it may be necessary to add this knowledge back into the ontology, which can turn out to be expensive if no record of it has been kept. It is therefore necessary to have tools at hand that support this process. Since Description Logics provide the formal ground for reasoning in ontologies, there is a need for well-defined formal methods to express these changes. In this talk I give an overview of the current state of my master's thesis, in which I am concerned with the definition of an operator for the retraction and recovery of information in the context of OWL 2 EL ontologies.