Retrieving passages instead of whole documents can help professionals acquire new information faster. This is important in domains where time for research is limited and expensive. For example, a medical doctor at a hospital usually has less than an hour per day to look up fresh information on rare cases. We present an approach for retrieving relevant passages in a document collection that leverages two orthogonal semantic embeddings. On this occasion, we demonstrate a first prototype implementation as described in . In this talk, we give an overview of our approach to learning a joint vector space representation of these embeddings. We plan to exploit this model to further improve passage and document retrieval tasks.
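To illustrate the general idea, the following is a minimal sketch of ranking passages by embedding similarity. The toy vectors and the single-embedding cosine scoring are hypothetical simplifications; the approach described above combines two orthogonal embeddings in a learned joint vector space.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_passages(query_vec, passage_vecs):
    """Return passage indices ordered by descending similarity to the query."""
    scores = [(cosine(query_vec, p), i) for i, p in enumerate(passage_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]

# Toy 3-dimensional "embeddings"; a real system would learn these.
passages = [[1.0, 0.0, 0.0], [0.7, 0.7, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.1, 0.0]
print(rank_passages(query, passages))
```

A retrieval system built this way returns the best-scoring passages rather than whole documents, which is what makes the lookup fast for the time-pressed reader.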
Eye tracking, as a tool to quantify user attention, plays a major role in research and application design. For Web page usability, it has become a prominent measure for assessing which sections of a Web page are read, glanced at, or skipped. Such assessments primarily depend on the mapping of gaze data to a Web page representation. However, current representation methods, namely a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer from either accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to these fixed elements for an enhanced representation of the page.
In this talk, the DFG-funded project Cognitive Reasoning (CoRg) will be introduced. CoRg aims at the construction of a cognitive computing system. Cognitive computing addresses problems characterized by ambiguity and uncertainty, meaning that it is used to handle problems humans are confronted with in everyday life. When developing a cognitive computing system that is supposed to act human-like, one cannot rely on automated theorem proving techniques alone, since humans performing commonsense reasoning do not obey the rules of classical logic. This makes humans susceptible to logical fallacies, but on the other hand lets them draw useful conclusions that automated reasoning systems are incapable of.
Formal Concept Analysis (FCA) is a mathematically well-founded theory used, among other things, for computing concept lattices from data. However, concept lattices induced via FCA tend to be overwhelming in size and complexity, potentially leading to unwarranted overhead in subsequent informational tasks.
This talk discusses a probabilistic approach to deriving concept lattice summarizations that are concise, yet still structurally sound and characteristic of the underlying dataset. The talk concludes with an outlook on future research directions.
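To make the notions concrete, here is a minimal sketch (with a hypothetical toy context, not the probabilistic summarization itself) that enumerates all formal concepts of a small formal context by closing subsets of objects:

```python
from itertools import combinations

# Toy formal context: objects mapped to their attribute sets.
context = {
    "duck":  {"flies", "swims"},
    "swan":  {"flies", "swims"},
    "eagle": {"flies"},
    "carp":  {"swims"},
}

def extent(attrs):
    """Objects having all of the given attributes."""
    return frozenset(g for g, a in context.items() if attrs <= a)

def intent(objs):
    """Attributes shared by all of the given objects."""
    sets = [context[g] for g in objs]
    if not sets:  # intent of the empty object set: all attributes
        return frozenset(a for s in context.values() for a in s)
    return frozenset.intersection(*map(frozenset, sets))

def concepts():
    """Enumerate all formal concepts (extent, intent) by closing object subsets."""
    found = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            b = intent(frozenset(combo))
            found.add((extent(b), b))
    return found

for ext, inn in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(inn))
```

Even this tiny four-object context already yields four concepts; the combinatorial growth on realistic data is exactly what motivates the summarizations discussed in the talk.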
One of the most important concerns in the field of adaptive information systems is to model human behaviour in order to offer personalisation and recommendation. This usually involves implicit or explicit knowledge about a user's preferences and behavioural patterns.
In this talk, I report on the lack of reliability of explicit user feedback and its interpretation in the light of system evaluation. Using probabilistic perspectives from metrology and physics as well as neuroscientific theories of the Bayesian brain, I will introduce novel user models with more empathy for human nature. By means of user experiments and simulations, I will show that this information can be used to improve standard collaborative filtering.
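For readers unfamiliar with the baseline being improved, the following is a minimal sketch of standard user-based collaborative filtering on hypothetical toy ratings; the user models from the talk would refine such predictions rather than replace them:

```python
# Toy ratings matrix: user -> {item: rating}. Purely illustrative data.
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 3, "c": 5},
    "carol": {"a": 1, "b": 5},
}

def sim(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    nu = sum(ratings[u][i] ** 2 for i in common) ** 0.5
    nv = sum(ratings[v][i] ** 2 for i in common) ** 0.5
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of the neighbours' ratings for the item."""
    neigh = [(sim(user, v), ratings[v][item])
             for v in ratings if v != user and item in ratings[v]]
    total = sum(s for s, _ in neigh)
    return sum(s * r for s, r in neigh) / total if total else None

print(round(predict("carol", "c"), 2))
```

The prediction here takes every explicit rating at face value; the point of the talk is that this trust is often unwarranted, and that modelling the uncertainty of feedback can yield better recommendations.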
Computational models of argumentation are logic-based formalisms for knowledge representation that allow for the explicit modelling of automatic reasoning in terms of arguments, counterarguments, and their interplay. In this talk, I give an overview of the core formalism in this area, abstract argumentation frameworks, and discuss its algorithmic issues.
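As a concrete taste of the algorithmic side, the following sketch computes the grounded extension of a small, hypothetical abstract argumentation framework by iterating the characteristic function to its least fixed point:

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework.

    An argument is acceptable w.r.t. a set S if S attacks every attacker
    of that argument; iterating this characteristic function from the
    empty set converges to the (unique) grounded extension.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    s = set()
    while True:
        nxt = {a for a in args
               if all(any((d, b) in attacks for d in s)
                      for b in attackers[a])}
        if nxt == s:
            return s
        s = nxt

# Hypothetical framework: a attacks b, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, attacks)))
```

In this example, a is unattacked and therefore accepted; b is attacked by a and rejected; c is defended against b by a and thus reinstated. Deciding acceptance under other semantics (preferred, stable) is where the harder algorithmic issues mentioned above arise.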
While semantic data models are increasingly relevant for scientific and business tasks, working with semantic data remains complex and error-prone. This is due, in part, to the inadequate integration of related technologies in common programming languages. The research language λDL was developed to remedy this concern by introducing static checks: it uses description logics, the underlying formalism of OWL ontologies, to provide a type system for semantic data. This thesis builds on λDL and aims to transfer the approach to the functional programming language Scala and the widely used semantic query language SPARQL.
News streams pose several challenges concerning the past, present, and future of events. The past hides relations among events and actors; the present reflects the needs of news readers; and the future waits to be predicted. The thesis has three parts corresponding to these time periods: we discover news chains in the past using zigzagged search, select front-page news of current events for the public, and predict future public reactions to events.
User experience includes all of the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use of a system or service. In this talk, I will give a brief introduction to user experience concepts, common methods and practices. Furthermore, I will present the case of eye-controlled applications: several gaze interaction approaches introduce novel ideas which are 'useful', but are they 'usable'? In this regard, I will discuss how current research ignores some important aspects of user experience, limiting eye tracking as a tool for general usage. GazeTheWeb will be discussed briefly with respect to user experience.
The aim of the research project GazeMining, by WeST and EYEVIDO GmbH, is to capture Web sessions semantically and thus obtain a complete picture of visual content, attention and interaction. This talk will give you insight into our first ideas on how to utilize both the structural and the visual data of a Web page to achieve the described goal. We propose a layer definition from the structural data, segmentation of each user session into visual states for each layer, combination of the states across users, and eventually the extraction of components on the Web page, like carousels or menus. The talk concludes with an open discussion to gather feedback on the presented approach and to think about further challenges and use cases.