In recent years, scalable RDF stores in the cloud have been developed. Distribution increases the complexity of these systems compared to RDF stores running on a single computer. In order to gain a deeper understanding of how, e.g., data placement or distributed query execution strategies affect performance, we have developed the modular glass-box profiling system Koral. With its help, it is possible to test the behaviour of existing or newly created strategies that tackle the challenges caused by distribution in a realistic distributed RDF store. The design goal of Koral is that only the evaluated component needs to be exchanged, while adaptations of the other components are kept minimal.
With the release of SPARQL 1.1 in 2013, property paths were introduced, which make it possible to formulate queries that do not explicitly define the length of the path that is traversed within an RDF graph. Existing RDF stores have been adapted to support property paths. In order to provide insight into how well current implementations of property paths in RDF stores work, we introduce a benchmark for evaluating property path support. To support realistic RDF graphs as well as arbitrarily scalable synthetic RDF graphs as benchmark datasets, we developed a query generator that creates queries from query templates. Furthermore, we present the results of our benchmark for four RDF stores frequently used in academia and industry.
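Property paths such as `rdfs:subClassOf+` match paths of one or more steps without fixing their length in advance. What such a `p+` path computes can be sketched as a plain transitive closure over one predicate; the toy edge set below is invented for illustration and is not a benchmark query:

```python
# Transitive closure over a single predicate: what SPARQL's "p+"
# property path computes. The toy graph is invented for illustration.
edges = {
    ("Dog", "Mammal"),
    ("Cat", "Mammal"),
    ("Mammal", "Animal"),
}

def reachable(start, edges):
    """All nodes reachable from `start` via one or more edges (p+)."""
    result, frontier = set(), {start}
    while frontier:
        nxt = {o for (s, o) in edges if s in frontier} - result
        result |= nxt
        frontier = nxt
    return result

print(sorted(reachable("Dog", edges)))  # ['Animal', 'Mammal']
```

Because the path length is unbounded, the store must iterate to a fixpoint like this rather than unrolling a join of fixed depth, which is exactly what makes property paths hard to support efficiently.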
The taxonomy is a fundamental component of an ontology. In a taxonomy, classes are arranged hierarchically, linked by a subclass-of relation. A complete taxonomy has exactly one most general class, called the root class. In Wikidata, the root class is the class "entity". The root class is unique, as it is the only class that has no superclasses in the taxonomy. However, Wikidata's taxonomy is incomplete with regard to this property. Orphan classes are classes that are not the root class but nevertheless have no superclasses. Thus, orphan classes violate the uniqueness of the root class.
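The orphan-class property can be checked mechanically: a class is an orphan if it is not the root and has no subclass-of statement. A minimal sketch over an invented toy taxonomy (the class names are illustrative, not actual Wikidata items):

```python
# Detect orphan classes: classes that are not the root but have no
# superclass. The tiny taxonomy below is invented for illustration.
subclass_of = {          # class -> set of direct superclasses
    "human":  {"mammal"},
    "mammal": {"entity"},
    "rock":   set(),     # orphan: no superclass, not the root
    "entity": set(),     # root class
}

ROOT = "entity"

def orphan_classes(subclass_of, root):
    return {c for c, supers in subclass_of.items()
            if c != root and not supers}

print(orphan_classes(subclass_of, ROOT))  # {'rock'}
```

In a complete taxonomy this set is empty; every non-empty result is a violation of the uniqueness of the root class.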
Last month saw the public release of the StarCraft II Learning Environment (SC2LE): a protocol with accompanying libraries that enables both writing scripted agents and training reinforcement learning models to play the video game StarCraft II. DeepMind, the creators of AlphaGo, have made solving this task their next goal. StarCraft II is a multi-player game featuring an only partially observable map, very large action and state spaces, and delayed credit assignment that requires long-term strategic planning. It has also fostered a large competitive scene of professional human players. This talk will give a short introduction to the game, an overview of the provided APIs, and a summary of current state-of-the-art techniques, and will present some new ideas for future work.
In this thesis, various reinforcement learning algorithms and types of classifiers are tested and compared. For the comparison, the selected algorithms with their respective classifiers are individually optimized and trained for a given problem and are then contrasted in a direct comparison. The given problem is a variant of the classic game "Tron": a problem that is simple in its rules but high-dimensional with respect to its state space and dynamic in its decision making. The selected algorithms are REINFORCE with baseline, Q-learning, DQN, and the A3C algorithm. As classifiers, linear function approximations and convolutional neural networks are compared.
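Of the compared algorithms, tabular Q-learning is the simplest to sketch. The toy environment below (a five-state corridor) is invented for illustration only; the thesis itself applies Q-learning to Tron with linear and CNN function approximation rather than a table. The core is the temporal-difference update of the Q-values:

```python
import random

# Tabular Q-learning on a toy five-state corridor: the agent moves left
# or right and receives reward 1 for reaching the rightmost state.
# Environment and hyperparameters are invented for illustration.
N_STATES = 5
ACTIONS = (-1, +1)                       # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2        # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for _ in range(1000):                    # training episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])   # temporal-difference update
        s = s2

# After training, the greedy policy prefers moving right in every state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

DQN replaces the table with a neural network trained on the same target, and A3C instead learns a policy and a value estimate in parallel workers.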
The extraction of individual reference strings from the reference section of scientific publications is an important step in the citation extraction pipeline. Current approaches divide this task into two steps by first detecting the reference section areas and then grouping the text lines in those areas into reference strings. We propose a classification model that considers every line in a publication as a potential part of a reference string. By applying line-based conditional random fields rather than constructing the graphical model from individual words, dependencies and patterns that are typical of reference sections provide strong features, while the overall complexity of the model is reduced.
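The line-level view can be illustrated with a few hand-written features of the kind a line-based model might use. The feature names, regular expressions, and example line below are invented for illustration; they are not the feature set of the proposed model:

```python
import re

# Toy line-level features for reference-string detection. The features
# and the example line are invented for illustration only.
def line_features(line):
    return {
        "starts_with_bracket_num": bool(re.match(r"\[\d+\]", line)),
        "contains_year": bool(re.search(r"\b(19|20)\d{2}\b", line)),
        "contains_pages": bool(re.search(r"pp?\.\s*\d+", line)),
        "ends_with_period": line.rstrip().endswith("."),
    }

ref = "[3] A. Author. A title. In Proc. XYZ, pp. 1-10, 2015."
print(line_features(ref))
```

In a CRF, such per-line features are combined with transition features between neighbouring lines, which is how typical reference-section patterns (e.g. runs of consecutive reference lines) become strong evidence.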
Terabytes of geospatial data have recently been made freely available on the Web, for example data from gazetteers such as Geonames, maps from geospatial search engines like Google Maps and OpenStreetMap, and user-contributed content from social networks such as Foursquare.
Dempster-Shafer theory is a popular evidence theory that allows for plausible reasoning with belief values, which are associated with specific items of information. It creates a belief system for reasoning with uncertain information. It may be applied to ontologies to resolve inconsistencies introduced by newly added assertions. For instance, confidence values associated with assertions may serve as evidence, i.e., belief values, and the belief system can be applied to resolve such inconsistencies.
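The core of the theory is Dempster's rule of combination, which fuses two mass functions and renormalizes by the conflicting mass. A minimal sketch with invented example masses (the application to ontology assertions is not shown here):

```python
from itertools import product

# Dempster's rule of combination for two mass functions whose focal
# elements are frozensets. The example masses are invented.
def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc          # K: mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict, combination undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

A, B = frozenset({"a"}), frozenset({"b"})
m1 = {A: 0.6, A | B: 0.4}    # evidence 1: 0.6 for {a}, 0.4 for {a,b}
m2 = {B: 0.5, A | B: 0.5}    # evidence 2: 0.5 for {b}, 0.5 for {a,b}
m12 = combine(m1, m2)
print(m12[A])                # 0.3 / 0.7, i.e. about 0.43
```

The conflicting mass K (here 0.3, from the disjoint pair {a} and {b}) is exactly the quantity that signals an inconsistency between the two pieces of evidence before renormalization.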
Every city that incorporates the water element in its fabric is confronted with the fundamental requirement of developing policies for driving development in the surrounding area, while balancing economic growth, protection of the environment, and social cohesion.
Pointing is an everyday computer interaction, but a challenge for gaze-controlled input mechanisms. Multiple factors, such as precision, accuracy, and the conflict of the eye being used both as sensor and as controller, limit efficiency and usability. I will present the state of the art; our current approach of continuous zooming, which is enriched by the novel aspects of center offset and deviation; and my idea for detecting and fixing potential accuracy drawbacks of an eye tracker calibration.