ResearchSpace is an extensible collaborative research environment based on Linked Data and knowledge representation using CIDOC-CRM, providing the context and meaning required for scholarly knowledge building. ResearchSpace is built on metaphactory, metaphacts' end-to-end platform for creating and utilizing knowledge graphs.
The study of complex networks has received much attention over the past few decades, as networks offer a simple yet efficient means of modelling and understanding complex systems. The majority of network science literature focuses on simple one-mode networks. In the real world, however, we often find systems that are best represented by bipartite networks, which are commonly analysed by examining their one-mode projections. One-mode projections, however, are naturally dense and noisy, so the most relevant information may be hidden. In this talk I present the motivation of my PhD thesis and summarise the research that I conducted during my candidature.
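The one-mode projection mentioned above can be sketched in a few lines: two "left" nodes become connected whenever they share a "right"-side neighbour, with the number of shared neighbours as the edge weight. The example data (authors linked to papers) is hypothetical and only illustrates why projections get dense.

```python
from collections import defaultdict
from itertools import combinations

def one_mode_projection(bipartite_edges):
    """Project a bipartite network onto its 'left' node set.

    bipartite_edges: iterable of (left_node, right_node) pairs.
    Returns a dict mapping frozenset({u, v}) to the number of shared
    right-side neighbours (the usual co-occurrence edge weight).
    """
    neighbours = defaultdict(set)          # right node -> set of left nodes
    for left, right in bipartite_edges:
        neighbours[right].add(left)

    weights = defaultdict(int)
    for left_nodes in neighbours.values():
        # every pair of left nodes sharing this right node gains weight 1
        for u, v in combinations(sorted(left_nodes), 2):
            weights[frozenset((u, v))] += 1
    return dict(weights)

# Hypothetical authors (left) linked to papers (right): a co-authorship projection.
edges = [("alice", "p1"), ("bob", "p1"), ("alice", "p2"),
         ("bob", "p2"), ("carol", "p2")]
proj = one_mode_projection(edges)
print(proj[frozenset(("alice", "bob"))])   # 2: alice and bob share two papers
```

Note how every pair of nodes attached to the same right-side node becomes an edge: a single popular right-side node with k neighbours already produces k(k-1)/2 projected edges, which is why projections tend to be dense and noisy.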
The Linked Data best practices for data publishing encourage the use of RDF to describe URI-identified resources on the Web. As those resources reflect things in the real world, which is without a doubt dynamic, the dynamics of Linked Data should not be neglected. In this talk I report on experimental work on dynamic Linked Data that is based on the Dynamic Linked Data Observatory, a long-term data collection of Linked Data on the Web. Moreover, I cover formal work to capture the dynamics of Linked Data with the aim of specifying agents on the Linked Data web using rules. Finally, I showcase applications based on the talk topics from the area of cyber-physical systems and the Web of Things.
In recent years, scalable RDF stores in the cloud have been developed, in which graph data is distributed over compute and storage nodes to scale query processing and memory requirements. One main challenge in these RDF stores is the data placement strategy, which can be formalized in terms of graph covers. These graph covers determine whether (a) the triple distribution is well-balanced over all storage nodes (storage balance), (b) different query results may be computed on several compute nodes in parallel (vertical parallelization), and (c) individual query results can be produced only from triples assigned to few - ideally one - storage node (horizontal containment).
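One of the simplest graph cover strategies, useful to make the three properties concrete, is a subject-hash cover: each triple is assigned to a node by hashing its subject. This is a minimal sketch of that one strategy, not the particular covers discussed in the talk; the example triples are hypothetical.

```python
import zlib

def subject_hash_cover(triples, n_nodes):
    """Assign every triple to a storage node by hashing its subject.

    All triples with the same subject land on the same node, which
    favours horizontal containment for subject-centred (star) queries;
    storage balance then depends on how evenly the subjects and their
    triple counts spread over the nodes.
    """
    cover = {i: [] for i in range(n_nodes)}
    for s, p, o in triples:
        node = zlib.crc32(s.encode()) % n_nodes   # deterministic hash
        cover[node].append((s, p, o))
    return cover

# Hypothetical toy triples.
triples = [("ex:alice", "ex:knows", "ex:bob"),
           ("ex:alice", "ex:age", "34"),
           ("ex:bob", "ex:knows", "ex:carol"),
           ("ex:carol", "ex:age", "29")]
cover = subject_hash_cover(triples, 2)
print(sum(len(ts) for ts in cover.values()))   # 4: each triple placed exactly once
```

The trade-off among the three properties shows up even here: hashing gives good expected storage balance, but a query that joins triples across different subjects may span several nodes, weakening horizontal containment.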
In the last few years, machine learning techniques have been successfully applied to many application areas such as information retrieval, e-commerce, image processing, computational biology, and chemistry. To understand and explore real datasets, we often apply machine learning techniques such as clustering or classification in a high-dimensional space. However, developing these machine learning models on large datasets can be very time-consuming because of their high dimensionality. Dimensionality reduction is a key technique in unsupervised learning for uncovering meaningful structure or previously unknown patterns in multivariate data.
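As a minimal illustration of dimensionality reduction, here is a Gaussian random projection in the spirit of the Johnson-Lindenstrauss lemma. It is chosen only because it needs no external libraries and is one of many possible techniques; the abstract does not say which method the talk covers.

```python
import random

def random_projection(data, k, seed=0):
    """Reduce d-dimensional points to k dimensions with a Gaussian
    random projection: multiply each point by a random k x d matrix
    whose entries are scaled so pairwise distances are roughly preserved.
    """
    rng = random.Random(seed)
    d = len(data[0])
    # k x d projection matrix with N(0, 1/k) entries.
    R = [[rng.gauss(0, 1) / k ** 0.5 for _ in range(d)] for _ in range(k)]
    return [[sum(row[j] * x[j] for j in range(d)) for row in R] for x in data]

# 100 hypothetical points in 50 dimensions squeezed down to 5.
points = [[random.Random(i).uniform(-1, 1) for _ in range(50)]
          for i in range(100)]
reduced = random_projection(points, k=5)
print(len(reduced), len(reduced[0]))   # 100 5
```

Downstream clustering or classification then runs on the 5-dimensional points instead of the original 50-dimensional ones, which is where the time savings mentioned above come from.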
Koldfish aims at providing means for consuming Linked Open Data comfortably. However, at present it is unable to incorporate semantic data stemming from, e.g., schema.org-annotated web sites. Rectifying this technical shortcoming may also open up opportunities for a refined conceptual indexing of data elements. This talk will lay out the proposed change to the Koldfish system and discuss potential implications for its Schema Index.
For semantic analysis of activities and events in videos, it is important to capture the spatio-temporal relations among objects in 3D space. This presentation introduces a probabilistic method that extracts 3D trajectories of objects from 2D videos captured by a monocular moving camera. Compared to existing methods that rely on restrictive assumptions, the presented method can extract 3D trajectories with far fewer restrictions by adopting new example-based techniques that compensate for the lack of information. Here, the focal length of the camera is estimated from similar candidates and used afterwards to compute the depths of detected objects.
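The final step, computing an object's depth from an estimated focal length, can be sketched with the standard pinhole-camera relation. This is a generic illustration of that relation, not the probabilistic method itself, and the pedestrian-height numbers are assumed for the example.

```python
def depth_from_height(focal_length_px, real_height_m, pixel_height_px):
    """Pinhole-camera depth estimate.

    An object of known real-world height H (metres) that appears
    h pixels tall in an image taken with focal length f (pixels)
    lies at approximately depth = f * H / h metres, assuming the
    object stands roughly upright and fronto-parallel to the camera.
    """
    return focal_length_px * real_height_m / pixel_height_px

# A detected pedestrian (assumed ~1.7 m tall) spanning 170 px,
# with an estimated focal length of 1000 px, is about 10 m away.
print(depth_from_height(1000.0, 1.7, 170.0))   # 10.0
```

This also shows why the focal-length estimate matters: depth scales linearly with f, so an error in the estimated focal length propagates directly into every recovered 3D trajectory.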
One such approach to reasoning in Dung-style abstract argumentation frameworks is backtracking algorithms that enumerate extensions. To optimize these algorithms, different heuristics are to be compared.
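A minimal backtracking enumerator for one extension semantics (admissible sets) looks as follows. It is a naive baseline sketch with a single pruning rule (conflict-freeness), not the optimized algorithms or heuristics compared in the talk.

```python
def admissible_extensions(args, attacks):
    """Enumerate all admissible extensions of a Dung argumentation
    framework (args, attacks) by backtracking over include/exclude
    decisions, pruning branches that already violate conflict-freeness.
    """
    attacks = set(attacks)

    def conflict_free(s):
        return not any((a, b) in attacks for a in s for b in s)

    def defended(a, s):
        # every attacker of a must itself be attacked by some member of s
        return all(any((d, b) in attacks for d in s)
                   for (b, target) in attacks if target == a)

    results = []

    def backtrack(i, current):
        if i == len(args):
            if all(defended(a, current) for a in current):
                results.append(frozenset(current))
            return
        backtrack(i + 1, current)            # branch 1: exclude args[i]
        current.add(args[i])
        if conflict_free(current):           # prune conflicting branches early
            backtrack(i + 1, current)        # branch 2: include args[i]
        current.remove(args[i])

    backtrack(0, set())
    return results

# a attacks b, b attacks c: the admissible sets are {}, {a}, {a, c}.
exts = admissible_extensions(["a", "b", "c"], [("a", "b"), ("b", "c")])
print(sorted(sorted(e) for e in exts))   # [[], ['a'], ['a', 'c']]
```

Heuristics of the kind mentioned above typically decide in which order arguments are branched on, so that conflicting or indefensible choices are discovered (and pruned) as early as possible.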
Approximate reasoning is regarded as one of the most convincing approaches to reasoning with ontologies and knowledge graphs in applications. This talk has two parts. In the first part, I will explain why approximate reasoning might work and how to perform faithful approximate reasoning, i.e., approximate reasoning with some level of quality control. In the second part, I will share some further thoughts towards a new roadmap for approximate reasoning in the era of knowledge graphs.
The concept of relevance was proposed to model different temporal effects in networks (e.g. the "aging effect", i.e. how the interest in nodes decays over time) that the traditional preferential attachment (PA) model fails to explain. We analyze the citation data provided by the American Physical Society (APS). We group papers by their final in-degrees (# of papers citing them) and do not observe an obvious decline of citations for the most cited papers (in-degree > 1000). This might be due to the fact that the size of the network (total # of papers) grows exponentially, which compensates for the decay of papers' relevance. As a next step, we want to analyze different citation networks (ACM, DBLP).
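A toy growth model makes the interplay of preferential attachment and relevance decay concrete: each new paper cites an earlier one with probability proportional to (in-degree + 1) times an exponentially decaying relevance term. This is a generic illustration of the modelling idea, with an assumed decay rate, not the fitted model from the APS analysis.

```python
import math
import random

def grow_citation_network(n, decay=0.2, seed=42):
    """Grow a toy citation network of n papers.

    Paper t cites one earlier paper i chosen with probability
    proportional to (in_degree[i] + 1) * exp(-decay * (t - i)):
    preferential attachment damped by an aging ("relevance") effect.
    """
    rng = random.Random(seed)
    in_degree = [0]                        # paper 0 starts the network
    for t in range(1, n):
        weights = [(in_degree[i] + 1) * math.exp(-decay * (t - i))
                   for i in range(t)]
        cited = rng.choices(range(t), weights=weights)[0]
        in_degree[cited] += 1
        in_degree.append(0)                # the new paper enters uncited
    return in_degree

degrees = grow_citation_network(500)
print(len(degrees), sum(degrees))   # 500 papers, 499 citations
```

With decay=0 this reduces to pure preferential attachment; raising the decay rate shifts citations toward recent papers, which is the aging effect the relevance concept was introduced to capture.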