Entities are commonly modeled by explicitly describing their features and their relations to other entities. The most extensive collections of such entity-centric information are large-scale Knowledge Graphs (KGs) like DBpedia that describe the interdependency of millions of real-world entities and abstract concepts.
One benefit of KG entities is that they serve as universal identifiers and thus provide a way to link content across languages and modalities once their occurrences in images and (multilingual) text have been annotated.
Argumentation networks and Bayesian networks are formalisms used in AI that serve different purposes. However, they share some conceptual similarities: both are based on directed graphs, and in both, edges represent relationships of influence among variables. In this talk I explore these similarities and, based on them, propose a unifying perspective that forms the basis for a new approach to probabilistic argumentation.
I revisit the notion of "Bose-Einstein Condensation in Complex Networks" based on . In the fitness model for evolving networks, the rate at which existing nodes acquire new links is proportional to the node's fitness and degree. Networks described by the fitness model can be mapped to an equilibrium Bose gas, thus allowing us to "reuse" conclusions from the well-studied field of thermodynamics, in particular Bose statistics.
Akin to their counterparts in physical systems, complex networks can undergo Bose-Einstein condensation, accurately predicting the "winner-takes-all" phenomenon observed in reality.
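The growth rule behind the fitness model can be sketched in a few lines: each new node attaches to existing nodes with probability proportional to fitness times degree. This is a minimal simulation under assumed parameters (uniform random fitness, m = 2 links per new node, a small seed clique), not the exact setup discussed in the talk:

```python
import random

def fitness_model(n_nodes, m=2, seed=0):
    """Grow a network in which node i attracts new links at a rate
    proportional to fitness_i * degree_i (fitness-model growth)."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_nodes)]
    degree = [0] * n_nodes
    # start from a small fully connected seed of m + 1 nodes
    for i in range(m + 1):
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n_nodes):
        # attachment probability proportional to fitness * degree
        weights = [fitness[i] * degree[i] for i in range(new)]
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(range(new), weights=weights)[0])
        for t in targets:
            degree[new] += 1
            degree[t] += 1
    return fitness, degree

fitness, degree = fitness_model(2000)
# high-fitness nodes tend to collect a disproportionate share of links,
# foreshadowing the condensation ("winner-takes-all") regime
```

Running this and plotting degree against fitness makes the condensation tendency visible: the largest degrees concentrate on the fittest nodes.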
In distributed RDF stores, the strategy by which data is distributed over several compute nodes affects query performance. When hash-based data distribution strategies are used, the query workload tends to be equally balanced among all compute nodes, while a relatively high number of intermediate results must be transferred between compute nodes. Graph-clustering-based approaches reduce the number of transferred intermediate results, but the query workload becomes more imbalanced. This paper presents a novel data distribution strategy that combines the advantages of both. To this end, we collocate small sets of closely connected data items on the compute nodes.
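For contrast, the hash-based baseline strategy can be sketched as follows: a triple is placed by hashing its subject, so triples sharing a subject collocate while unrelated data spreads evenly. The IRIs and node count are illustrative, and this deliberately does not implement the paper's clustering-aware collocation strategy:

```python
import hashlib

def node_for(subject, n_nodes):
    """Hash-based placement: pick a compute node by hashing the subject IRI."""
    digest = hashlib.sha256(subject.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_nodes

# toy triples (illustrative IRIs, 4 hypothetical compute nodes)
triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:age", '"30"'),
    ("ex:bob", "ex:knows", "ex:carol"),
]
partitions = {}
for s, p, o in triples:
    partitions.setdefault(node_for(s, 4), []).append((s, p, o))
# all triples with subject ex:alice land on the same node, but a join
# over ex:knows chains may still cross node boundaries
```

The comment in the last line is exactly the trade-off the abstract describes: balanced load, but joins across subjects produce intermediate results that must travel between nodes.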
RDF offers a flexibility that is desirable in many situations, such as data exchange, integration, or knowledge representation. However, a program usually consumes precise substructures of such graph data. The flexibility of RDF therefore forces a developer to take special care when designing such programs in order to avoid runtime errors. Ideally, we could leverage type systems to rule out possible runtime errors. The Shapes Constraint Language (SHACL) is a relatively new way of structurally validating RDF data sources. An integration of SHACL into a type system can be used for automatic detection of runtime errors.
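To make the idea of structural validation concrete, here is a minimal check in the spirit of SHACL, reduced to property cardinalities and written as plain Python rather than actual SHACL shapes (the graph, IRIs, and shape encoding are assumptions for illustration):

```python
def check_shape(node, graph, shape):
    """Minimal structural check in the spirit of SHACL: verify that a node
    has each required property with cardinality within [min, max].
    Returns a list of (property, actual_count) violations."""
    problems = []
    for prop, (lo, hi) in shape.items():
        values = [o for s, p, o in graph if s == node and p == prop]
        if not (lo <= len(values) <= hi):
            problems.append((prop, len(values)))
    return problems

# toy RDF graph as subject-predicate-object tuples
graph = [
    ("ex:alice", "ex:name", "Alice"),
    ("ex:alice", "ex:email", "a@example.org"),
    ("ex:bob", "ex:name", "Bob"),
]
# hypothetical person shape: exactly one name, one to ten emails
person_shape = {"ex:name": (1, 1), "ex:email": (1, 10)}
print(check_shape("ex:alice", graph, person_shape))  # -> []
print(check_shape("ex:bob", graph, person_shape))    # -> [('ex:email', 0)]
```

A program that statically knows every node it consumes conforms to `person_shape` can safely access the name and an email; this is precisely the guarantee a SHACL-aware type system would lift to compile time.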
Retrieving passages instead of whole documents can help professionals acquire new information faster. This is important in domains where time for research is limited and expensive. For example, a medical doctor at a hospital usually has less than an hour per day to look up fresh information for rare cases. We present an approach for retrieving relevant passages in a document collection that leverages two orthogonal semantic embeddings, and we demonstrate a first prototype implementation as described in . In this talk, we give an overview of our approach to learning a joint vector-space representation of these embeddings. We plan to exploit this model to further improve passage and document retrieval.
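One plausible way to combine two embedding spaces for passage ranking is a weighted sum of per-space cosine similarities. This sketch uses made-up toy vectors and a naive combination rule; it is not the joint representation learned in the talk:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_passages(query_vecs, passage_vecs, weights=(0.5, 0.5)):
    """Score each passage by a weighted sum of cosine similarities in
    two separate embedding spaces; return passage indices, best first."""
    scores = []
    for idx, vec_pair in enumerate(passage_vecs):
        score = sum(w * cosine(q, v)
                    for w, q, v in zip(weights, query_vecs, vec_pair))
        scores.append((score, idx))
    return [i for _, i in sorted(scores, reverse=True)]

# toy 2-d vectors in two hypothetical spaces; passage 0 is closest
# to the query in both spaces
query = ([1.0, 0.0], [0.0, 1.0])
passages = [
    ([0.9, 0.1], [0.1, 0.9]),
    ([0.1, 0.9], [0.9, 0.1]),
]
print(rank_passages(query, passages))  # -> [0, 1]
```

A learned joint space would replace the fixed `weights` with a trained combination, which is what the talk's model aims for.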
To work with distributed ledger technologies such as blockchains, we need to understand the advantages and disadvantages of the different approaches. In this presentation we will look at three different concepts of distributed ledgers and how they can be used to ensure integrity for arbitrary data.
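The integrity guarantee that the different ledger concepts share can be illustrated with a plain hash chain, the simplest form of a tamper-evident log. This is a generic sketch, not any specific blockchain implementation:

```python
import hashlib
import json

def chain(records):
    """Build a minimal hash chain: each entry commits to its payload and
    to the hash of the previous entry, so any tampering is detectable."""
    prev = "0" * 64
    ledger = []
    for payload in records:
        block = {"payload": payload, "prev": prev}
        prev = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()
        ledger.append({**block, "hash": prev})
    return ledger

def verify(ledger):
    """Recompute every hash and check the chain links."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev:
            return False
        block = {"payload": entry["payload"], "prev": entry["prev"]}
        prev = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()
        if prev != entry["hash"]:
            return False
    return True

ledger = chain(["doc-a", "doc-b", "doc-c"])
assert verify(ledger)
ledger[1]["payload"] = "tampered"  # modifying any entry breaks the chain
assert not verify(ledger)
```

The ledger concepts compared in the presentation differ mainly in who appends entries and how consensus on the chain head is reached, not in this basic hashing principle.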
Eye tracking as a tool to quantify user attention plays a major role in research and application design. For Web page usability, it has become a prominent measure to assess which sections of a Web page are read, glanced at, or skipped. Such assessments primarily depend on the mapping of gaze data to a representation of the Web page. However, current representation methods, namely a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer from either accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to these fixed elements for an enhanced representation of the page.
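The core of such a mapping can be sketched as a coordinate transformation: gaze points that land on a fixed (sticky) element stay in viewport coordinates, while everything else is shifted by the scroll offset into page coordinates. The element geometry below is hypothetical:

```python
def gaze_to_page(gaze_x, gaze_y, scroll_x, scroll_y, fixed_elements):
    """Map a viewport gaze point either onto a fixed element (kept in
    element-local coordinates) or into page coordinates via scroll offset.
    fixed_elements maps a name to its (x, y, width, height) viewport box."""
    for name, (x, y, w, h) in fixed_elements.items():
        if x <= gaze_x < x + w and y <= gaze_y < y + h:
            return ("fixed", name, gaze_x - x, gaze_y - y)
    return ("page", None, gaze_x + scroll_x, gaze_y + scroll_y)

# hypothetical sticky header occupying the top 80 px of a 1280 px viewport
fixed = {"header": (0, 0, 1280, 80)}
print(gaze_to_page(400, 40, 0, 500, fixed))   # -> ('fixed', 'header', 400, 40)
print(gaze_to_page(400, 300, 0, 500, fixed))  # -> ('page', None, 400, 800)
```

Without the fixed-element test, a gaze on the sticky header while scrolled down would be wrongly attributed to content 500 px below it, which is exactly the accuracy issue of plain screenshot-based representations.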
In this talk, the DFG-funded project Cognitive Reasoning (CoRg) will be introduced. CoRg aims at the construction of a cognitive computing system. Cognitive computing addresses problems characterized by ambiguity and uncertainty; that is, it handles the kinds of problems humans are confronted with in everyday life. When developing a cognitive computing system that is supposed to act human-like, one cannot rely on automated theorem proving techniques alone, since humans performing commonsense reasoning do not obey the rules of classical logic. This makes humans susceptible to logical fallacies but, on the other hand, lets them draw useful conclusions that automated reasoning systems are incapable of.
Formal Concept Analysis (FCA) is a mathematically well-founded theory used, among other things, for computing concept lattices from data. However, concept lattices induced via FCA tend to be overwhelming in size and complexity, potentially leading to unwarranted overhead in subsequent informational tasks.
This talk discusses a probabilistic approach to deriving concept lattice summarizations that are concise, yet still structurally sound and characteristic of the underlying dataset. The talk concludes with an outlook on future research directions.
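For a tiny formal context, the concepts of the full (unsummarized) lattice can be enumerated by closing every subset of objects. This brute-force sketch is exponential and only meant to make the notions of extent and intent concrete; the example context is invented:

```python
from itertools import combinations

def derive_attrs(objs, context):
    """Attributes shared by all objects in objs (the intent operator)."""
    if not objs:
        return {a for attrs in context.values() for a in attrs}
    return set.intersection(*(context[o] for o in objs))

def derive_objs(attrs, context):
    """Objects possessing all attributes in attrs (the extent operator)."""
    return {o for o, a in context.items() if attrs <= a}

def concepts(context):
    """Enumerate all formal concepts (extent, intent) by closing every
    subset of objects; fine for tiny contexts, exponential in general."""
    objects = list(context)
    found = set()
    for r in range(len(objects) + 1):
        for combo in combinations(objects, r):
            intent = derive_attrs(set(combo), context)
            extent = derive_objs(intent, context)
            found.add((frozenset(extent), frozenset(intent)))
    return found

# invented toy context: animals and their attributes
ctx = {
    "cat": {"mammal", "pet"},
    "dog": {"mammal", "pet"},
    "bat": {"mammal", "flies"},
}
for extent, intent in sorted(concepts(ctx), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

Even this three-object context yields four concepts; real-world contexts produce lattices whose size motivates the probabilistic summarization the talk proposes.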