Analysing human activities has become an important research topic in recent years, driven by growing security concerns in public spaces. In crowded environments, it is particularly important to analyse the activities that arise from people's movements, i.e., activities that can be expressed by the sequential positions of people (trajectories). The underlying assumption is that such trajectories encode a high-level interpretation of human activities. However, several factors make studying people's activities based on their movements a challenging task. In particular, the vast diversity of activities that people may perform makes recognition very complex, owing to high inter-class similarity.
Misinformation generates misperceptions, which have affected policies in many domains, including the economy, health, the environment, and foreign policy. Co-Inform is about empowering citizens, journalists, and policymakers with co-created socio-technical solutions, in order to increase resilience to misinformation and to generate more informed behaviours and policies.
Compliance knowledge bases contain declarative business logic, meant to ensure that business processes are aligned with company goals and regulations. Maintaining the quality of such knowledge bases is widely recognized as a challenging task. A significant problem here is the potential inconsistency of a knowledge base, as this impedes the actual use of the respective artifacts. Such inconsistencies can result from the incremental, often collaborative, creation of said knowledge bases. We investigate how quantitative measures can be used to assess the severity of inconsistency for individual elements of compliance knowledge bases.
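One common family of such element-level measures assigns blame to each rule according to the minimal inconsistent subsets it participates in. The following is a minimal brute-force sketch of that idea over toy propositional rules; the rule names, atoms, and the specific measure are illustrative assumptions, not the measures investigated in the work above.

```python
from itertools import combinations, product

def satisfiable(formulas, atoms):
    """Brute-force check: is there a truth assignment satisfying all formulas?"""
    for values in product([True, False], repeat=len(atoms)):
        a = dict(zip(atoms, values))
        if all(f(a) for f in formulas):
            return True
    return False

def minimal_inconsistent_subsets(kb, atoms):
    """All subset-minimal inconsistent subsets (MIS) of the knowledge base."""
    mis = []
    for r in range(1, len(kb) + 1):
        for subset in combinations(range(len(kb)), r):
            # Skip supersets of an already-found MIS (they are not minimal).
            if any(set(m) <= set(subset) for m in mis):
                continue
            if not satisfiable([kb[i] for i in subset], atoms):
                mis.append(subset)
    return mis

def element_blame(kb, atoms):
    """Score each element by the number of MIS it occurs in (a toy measure)."""
    mis = minimal_inconsistent_subsets(kb, atoms)
    return [sum(1 for m in mis if i in m) for i in range(len(kb))]

# Hypothetical compliance rules over atoms p ("invoice approved"), q ("audit passed"):
atoms = ["p", "q"]
kb = [
    lambda a: a["p"],                # rule 1: p
    lambda a: not a["p"] or a["q"],  # rule 2: p -> q
    lambda a: not a["q"],            # rule 3: not q
]
print(element_blame(kb, atoms))  # every rule lies in the single MIS {1,2,3}: [1, 1, 1]
```

Real compliance logics are of course richer than propositional formulas, but the pattern of localising inconsistency to individual elements carries over.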
Scientific paper recommendation is a task that aims to enhance the exploitation of Digital Libraries (DL) and helps researchers find relevant papers in a large pool of publications. However, reliable sources for modelling researchers' interests must be provided in order to obtain accurate recommendations.
In my research project, I focused on extracting a user's topical interests from the papers the user is connected with (authored or rated), and also from the social structure of the user's academic network (relations among researchers in the same domain).
Credit bureaus gather, aggregate, and analyse information about consumers and business entities in order to assess credit-related risks. On a methodological and technical level, this involves the integration and quality assurance of data from various sources, the analysis of incoming data streams, and the ability to train and apply predictive models. In this talk we will give an overview of challenging tasks related to use cases in the credit bureau industry and illustrate some modern approaches in the field of machine learning and data mining to address these tasks. All information about the talk is available at https://www.uni-koblenz-landau.de/de/koblenz/fb4/ifi/Kolloquien/kolloquium_Gottron.
Bibliographic information systems need to rely on metadata provided by various sources, in various forms, and of varying quality. The talk gives some insights into how such systems are maintained and improved. It shows how metadata can be automatically harvested from publisher websites and how the harvesting process can be steered. It also discusses some open sources of bibliographic metadata and how they can be used to enrich existing bibliographic data. The talk also presents some initial results on citation extraction from full-text documents based on ScienceParse.
The problem of finding commonalities in data occurs in many areas. The formal notion that precisely characterizes such commonalities is known as the least general generalization of descriptions. The presentation will revisit the notion of least general generalizations in RDF graphs and in the conjunctive fragment of SPARQL, with respect to RDFS background knowledge.
The talk is based on the ISWC 2017 paper "Learning Commonalities in SPARQL" as well as the technical report "Learning Commonalities in RDF and SPARQL", originally by S. El Hassad, François Goasdoué, and Hélène Jaudoin.
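To give a flavour of least general generalization, here is a toy anti-unification sketch over single RDF triples: terms that agree are kept, and terms that differ are generalised to a shared fresh variable. This is only an illustrative simplification under my own assumptions; the cited works handle full RDF graphs, SPARQL queries, and RDFS entailment, which this snippet does not.

```python
from itertools import count

_fresh = count()  # global counter for fresh variable names

def lgg_triple(t1, t2):
    """Anti-unify two RDF triples term by term (toy LGG for single triples).

    Equal terms are copied through; differing term pairs are replaced by a
    variable, reusing the same variable for repeated occurrences of the
    same pair so that shared structure is preserved.
    """
    mapping = {}  # (term1, term2) -> variable name
    out = []
    for a, b in zip(t1, t2):
        if a == b:
            out.append(a)
        else:
            key = (a, b)
            if key not in mapping:
                mapping[key] = f"?v{next(_fresh)}"
            out.append(mapping[key])
    return tuple(out)

# Both triples say :alice knows someone; the LGG abstracts over whom.
t1 = (":alice", ":knows", ":bob")
t2 = (":alice", ":knows", ":carol")
result = lgg_triple(t1, t2)
print(result)  # (':alice', ':knows', '?v0')
```

The generalisation is "least" in the sense that it keeps every term the two inputs share and introduces variables only where they disagree.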
The use of argumentation techniques makes it possible to obtain classifiers that are by design able to explain their decisions, thereby addressing the recent need for Explainable AI: classifications are accompanied by a dialectical analysis showing why arguments for the conclusion are preferred to counterarguments. Argumentation techniques in machine learning also allow the easy integration of additional expert knowledge in the form of arguments. In this talk, I give a brief overview of ongoing research on applying formal argumentation techniques to classification problems.
On the one hand, demographic change and the shortage of medical staff (especially in rural areas) critically challenge healthcare systems in industrialised countries. On the other hand, the digitalisation of our society is progressing at tremendous speed, so that more and more health-related data are available in digital form. For instance, people wear intelligent glasses and/or smart watches, provide digital data via standardised medical devices (e.g., blood pressure and blood sugar meters following the standard ISO/IEEE 11073), and/or deliver personal behavioural data through their smartphones.
Voice user interfaces (VUIs) have evolved from simple interactive voice response systems into a new entity in our homes and everyday life. They have crossed over from assistive technologies for the visually impaired to a mainstream technology used in homes and on our mobile devices. VUIs no longer handle just commands, but are designed to respond empathetically to our queries and engage in conversation with us. This presentation looks into the development, challenges, concerns, and design of voice user interfaces, and how they can be used in unison with other input/output mechanisms to become multimodal in nature.