In recent years, answering keyword queries on graph-structured data has emerged as an important research topic. There are two main reasons for this. On one hand, at the beginning of their search process, many users are uncertain what exactly they are looking for, or which terms should be used to express their query.
To leverage the potential of connected education, commercial alternatives such as Coursera, Udacity, Iversity and many more are entering the market. These platforms offer a great user experience for consuming and creating content. Typical learning management systems like OLAT do not offer a comparable user experience. Neither kind of platform really supports creating courses in a collaborative way. MediaWiki, the software used for Wikipedia but also for the German ZUM-Wiki, is a widespread alternative for hosting learning materials. The problem is that the main purpose of MediaWiki is building an encyclopedia, and it lacks a good user experience for creating and consuming courses.
There is an increasing availability of data encoded in the W3C standard Resource Description Framework (RDF). In the near future, this may pose severe problems to classical approaches to query answering based on a single computing node. A natural way to tackle this challenge is to resort to distributed RDF stores that combine several computing nodes into one virtual system. In general, a distributed RDF store splits an RDF graph into several partitions that are assigned to the computing nodes. Hence, the partitioning strategy influences the efficiency of query execution, since the computation of even a single result can require triples stored on several different computing nodes.
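As a minimal illustration of one such partitioning strategy (a generic subject-hash scheme, not the approach of any particular store; data and names are invented), triples can be assigned to nodes so that all triples sharing a subject end up on the same node:

```python
def partition(triples, num_nodes):
    """Assign each (subject, predicate, object) triple to a node by hashing
    the subject, so all triples with the same subject land on one node."""
    nodes = [[] for _ in range(num_nodes)]
    for s, p, o in triples:
        nodes[hash(s) % num_nodes].append((s, p, o))
    return nodes

# Invented example data:
triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:age", "30"),
    ("ex:bob", "ex:knows", "ex:carol"),
]
nodes = partition(triples, 2)
```

A query joining triples of different subjects (e.g. following `ex:knows` from `ex:alice` to `ex:bob`) may then need data from two nodes, which is exactly why the partitioning strategy matters for query efficiency.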
Detecting bus lines from GPS data captured by mobile phones has many applications in urban transportation. Traditional approaches use distance measures between bus route data and the GPS history to identify the most likely bus line a user is traveling on. In this talk I am going to present a Markov model approach that makes it possible to exploit the precise detection of the "waiting at bus stop" event in a natural way. This is joint work with Sven Milker, who is currently writing his master's thesis on this topic.
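The traditional distance-based baseline mentioned above can be sketched as follows (routes, trace coordinates, and line names are invented toy data; real systems would use map-matched distances rather than nearest-vertex distances):

```python
import math

def point_to_route_distance(point, route):
    """Distance from a GPS point to the nearest vertex of a route polyline."""
    return min(math.dist(point, vertex) for vertex in route)

def most_likely_line(trace, lines):
    """Pick the bus line whose route has the smallest average distance
    to the observed GPS trace."""
    def avg_distance(route):
        return sum(point_to_route_distance(p, route) for p in trace) / len(trace)
    return min(lines, key=lambda name: avg_distance(lines[name]))

# Invented example: two parallel routes and a noisy trace near Line 1.
lines = {
    "Line 1": [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    "Line 2": [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)],
}
trace = [(0.1, 0.05), (0.9, -0.1), (1.8, 0.02)]
best = most_likely_line(trace, lines)  # "Line 1"
```

The Markov model approach presented in the talk goes beyond this by modeling events such as "waiting at bus stop" as states, rather than relying on geometric distance alone.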
The Semantic Web aims at converting the current web into a web of data. All data in the Semantic Web - be it Linked Data or publicly available triplestores - adheres to the same data model. Both are queryable, either through a link-traversal-based query approach or through SPARQL.
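To make the shared data model concrete, here is a hedged sketch of evaluating a single SPARQL-style triple pattern over a tiny in-memory RDF graph (graph contents and prefixes are invented for illustration; `None` plays the role of a SPARQL variable):

```python
def match(pattern, triples):
    """Return all triples matching an (s, p, o) pattern; None matches anything."""
    return [
        t for t in triples
        if all(q is None or q == v for q, v in zip(pattern, t))
    ]

# Invented example graph:
graph = [
    ("ex:berlin", "rdf:type", "ex:City"),
    ("ex:berlin", "ex:population", "3600000"),
    ("ex:koblenz", "rdf:type", "ex:City"),
]
# Analogous to: SELECT ?s WHERE { ?s rdf:type ex:City }
cities = match((None, "rdf:type", "ex:City"), graph)
```

A SPARQL endpoint evaluates such patterns against a local triplestore, whereas link-traversal-based querying dereferences Linked Data URIs at query time to discover the triples to match.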
To begin with, I would highlight what possibly went wrong. I am not sure whether we are meant to analyse just the project itself or the whole decision-making process, including the decision whether to buy commercial off-the-shelf software or to develop individual software, and furthermore the outsourcing decision.
With the growth of the LOD cloud in recent years, the number of interlinks (i.e., links between resources of different data sets) has also increased considerably. For example, DBpedia currently has a total of 39 million inlinks. In this talk, I will describe ongoing work on the analysis of existing interlinks in the LOD cloud. First, I will summarize statistics that other researchers have published. Second, I will report on our findings and future directions.
Summary of the three-month internship at Hoffmann-La Roche, including:
- Short Overview of the Company
- Current Terminology Service Architecture
- Short Overview of Semantic Web Stack
- Evaluation of RDF Triplestores (Short)
- Functions of Apache Jena as Semantic Middleware
- Possible new Architecture for Terminology Services
- Proof of Concept: Terminology Browser
Eye tracking data is used to control a butterfly in the game Schau genau! The player collects flowers and classifies photographs of flowers to gather points. We show in our work that besides the entertaining aspects of the game, the user acquires knowledge about plant species and generates information about the classified photos.
The availability of huge amounts of graph-like data poses several data management challenges related to the representation, storage, and querying of such data. On one hand, we have standards such as the Resource Description Framework and database solutions optimised for graph-like data. On the other hand, we have graph query languages offering different trade-offs between expressiveness and complexity of query evaluation.