There is an increasing availability of data encoded in the W3C standard Resource Description Framework (RDF). In the near future, this may pose severe problems for classical query-answering approaches based on a single computing node. A natural way to tackle this challenge is to resort to distributed RDF stores that combine several computing nodes into one virtual system. In general, a distributed RDF store splits an RDF graph into several partitions that are assigned to computing nodes. Hence, the partitioning strategy influences the efficiency of query execution, since the computation of a single result can require triples stored on several different computing nodes.
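The partitioning idea can be sketched in a few lines. The following is a minimal, illustrative example of one common strategy, subject-hash partitioning (the data and node count are invented for the example); all triples sharing a subject land on one node, yet queries that join across subjects may still touch several nodes.

```python
# Illustrative sketch of subject-hash partitioning for an RDF graph.
# Each (subject, predicate, object) triple is assigned to a node by
# hashing its subject, so all triples about one resource colocate.

def partition(triples, num_nodes):
    """Assign each (s, p, o) triple to a node by hashing the subject."""
    nodes = [[] for _ in range(num_nodes)]
    for s, p, o in triples:
        nodes[hash(s) % num_nodes].append((s, p, o))
    return nodes

triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:age", "34"),
    ("ex:bob", "ex:knows", "ex:carol"),
]
nodes = partition(triples, 2)
# All of ex:alice's triples sit on one node, but a query such as
# "whom do Alice's friends know?" joins across subjects and may
# therefore require triples from several nodes.
```
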
Detection of bus lines from GPS data captured by mobile phones has many applications in urban transportation. Traditional approaches use distance measures between bus route data and the GPS history to identify the bus lines a user is most likely traveling on. In this talk, I am going to present a Markov model approach that exploits precise detection of the "waiting at bus stop" event in a natural way. This is joint work with Sven Milker, who is currently writing his master's thesis on this topic.
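The Markov-model idea can be sketched with a standard Viterbi decoder: candidate bus lines become hidden states, and events derived from the GPS stream (such as "waiting at a stop") become observations. The states, events, and all probabilities below are invented for illustration, not taken from the talk.

```python
# Hedged sketch: hidden states are candidate bus lines, observations are
# GPS-derived events. A line whose stops align with observed waiting
# events receives a higher emission probability for "at_stop".

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ["line_1", "line_2"]
start_p = {"line_1": 0.5, "line_2": 0.5}
# Users rarely switch lines mid-trip, hence the sticky transitions:
trans_p = {"line_1": {"line_1": 0.9, "line_2": 0.1},
           "line_2": {"line_1": 0.1, "line_2": 0.9}}
# line_1's stops match the observed waiting events better:
emit_p = {"line_1": {"at_stop": 0.7, "moving": 0.3},
          "line_2": {"at_stop": 0.2, "moving": 0.8}}

print(viterbi(["at_stop", "moving", "at_stop"], states,
              start_p, trans_p, emit_p))
# → ['line_1', 'line_1', 'line_1']
```
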
The Semantic Web aims at converting the current web into a web of data. All data in the Semantic Web - be it Linked Data or data in publicly available triplestores - adheres to the same data model. Both are also queryable, either through a link-traversal-based query approach or through SPARQL.
To begin with, I would highlight what possibly went wrong. I am not sure whether we are meant to analyse just the project itself or the whole decision-making process, including the decision whether to buy commercial off-the-shelf software or to develop individual software, and, further, the outsourcing decision.
Summary of the three-month internship at Hoffmann-La Roche, including:
- Short Overview of the Company
- Current Terminology Service Architecture
- Short Overview of Semantic Web Stack
- Evaluation of RDF Triplestores (Short)
- Functions of Apache Jena as Semantic Middleware
- Possible new Architecture for Terminology Services
- Proof of Concept: Terminology Browser
With the growth of the LOD cloud in recent years, the number of interlinks (i.e., links between resources of different data sets) has also increased considerably. For example, DBpedia currently has a total of 39 million inlinks. In this talk, I will describe ongoing work on the analysis of existing interlinks in the LOD cloud. First, I will summarize statistics that other researchers have published. Second, I will report on our findings and future directions.
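One basic interlink statistic can be sketched as follows: count the links whose subject and object belong to different data sets, grouped by data-set pair. Approximating a resource's data set by the host name of its URI is a simplification made for this example only.

```python
# Illustrative sketch: counting interlinks between data sets,
# approximating "data set" by the host name of a resource's URI.

from collections import Counter
from urllib.parse import urlparse

def dataset_of(uri):
    """Approximate the data set of a resource by its host name."""
    return urlparse(uri).netloc

def interlink_counts(triples):
    counts = Counter()
    for s, _, o in triples:
        src, dst = dataset_of(s), dataset_of(o)
        if src and dst and src != dst:  # only cross-data-set links
            counts[(src, dst)] += 1
    return counts

triples = [
    ("http://dbpedia.org/resource/Berlin", "owl:sameAs",
     "http://sws.geonames.org/2950159/"),
    ("http://dbpedia.org/resource/Berlin", "rdf:type",
     "http://dbpedia.org/ontology/City"),  # intra-data-set, not counted
]
print(interlink_counts(triples))
# → Counter({('dbpedia.org', 'sws.geonames.org'): 1})
```
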
Eye-tracking data is used to control a butterfly in the game Schau genau! The player collects flowers and classifies photographs of flowers to gather points. We show in our work that, besides the entertaining aspects of the game, the user acquires knowledge about plant species and generates information on the classified photos.
The availability of huge amounts of graph-like data poses several data management challenges related to the representation, storage, and querying of such data. On the one hand, we have standards such as the Resource Description Framework and database solutions optimised for graph-like data. On the other hand, we have graph query languages offering different trade-offs between expressiveness and complexity of query evaluation.
The way in which companies benefit from open source software (OSS) communities varies and corresponds with the business strategy they maintain. One way of establishing influence in OSS communities is to deploy a company's own resources to an OSS project. Assigning its own paid developers to work on an OSS project, such as the Linux kernel project, is a suitable means of influencing project work. On the other hand, the pertinent literature on user communities and governance in OSS maintains that a large proportion of the influence individuals have in a community depends on their position in the community. In this talk, I will give an update on my ongoing work on analyzing firm-sponsored developers who are active in the Linux kernel community.
The Multi-Agent Programming Contest is an international competition that takes place every year with the goal of encouraging multi-agent research. Our research lab was able to participate in it.