This thesis explores the potential of heuristic search algorithms for Abstract Argumentation. To this end, specific backtracking search algorithms that support the use of heuristics are presented. Thereafter, several heuristics are defined, which have been implemented as part of this thesis. These are then compared experimentally with each other and with other approaches to Abstract Argumentation. For different problems in Abstract Argumentation, a suitable heuristic is suggested; for example, heuristics that analyse paths in the graph structure of an abstract argumentation framework have proven useful.
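As a minimal sketch of such a heuristic backtracking search, the following enumerates the admissible sets of an argumentation framework, branching on arguments in an order given by a simple degree-based heuristic. The degree heuristic is an illustrative assumption; the thesis's actual heuristics (e.g. the path-based ones) are more elaborate.

```python
def is_admissible(s, args, attacks):
    """True if s is conflict-free and defends each of its members."""
    if any((a, b) in attacks for a in s for b in s):
        return False
    for a in s:
        for b in args:
            if (b, a) in attacks and not any((c, b) in attacks for c in s):
                return False
    return True

def admissible_sets(args, attacks):
    """Enumerate all admissible sets via backtracking with heuristic ordering."""
    # Heuristic (an assumption): branch on highly connected arguments first.
    degree = {a: sum(1 for (x, y) in attacks if a in (x, y)) for a in args}
    order = sorted(args, key=lambda a: -degree[a])

    def backtrack(i, current):
        if i == len(order):
            if is_admissible(current, args, attacks):
                yield frozenset(current)
            return
        a = order[i]
        # Prune: include a only if the set stays conflict-free.
        if (a, a) not in attacks and not any(
                (a, b) in attacks or (b, a) in attacks for b in current):
            yield from backtrack(i + 1, current | {a})
        yield from backtrack(i + 1, current)

    return set(backtrack(0, frozenset()))
```

For the framework with arguments {a, b, c} and attacks a→b, b→c, this yields the admissible sets {}, {a}, and {a, c}; the heuristic only changes the exploration order, not the result.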
The task of updating ontologies has gained increasing interest, which, among other things, led to the introduction of SPARQL Update. OntoClean is a methodology for analyzing ontologies that can be used to justify certain modeling decisions and to identify and explain common modeling mistakes. We introduce first ideas on how the notions behind OntoClean can be used to decide how to implement an update. We provide so-called OntoClean-guided semantics for SPARQL Update and argue that these often lead to the result intended by the user.
The purpose of this project was to develop a method to detect fake news on Twitter. To this end, a dataset had to be collected and labelled that could be used to train a machine-learning algorithm on a set of features. The resulting classifier was used to detect fake tweets on Twitter via a browser plugin.
Fake news is false and sometimes sensationalist information presented as fact, and it often spreads very quickly on the internet via social networks such as Facebook or Twitter. The ability to identify such fake news may diminish its impact. For this purpose, fake news detection can be used: the term describes the process of returning a label denoting whether a given input consists of fake news or authentic news. This work makes two main contributions. The first is a labelled dataset of tweets containing fake news and authentic news. The second is a web tool that can be used to identify fake news and verify authentic tweets based on machine-learning algorithms and Twitter metadata.
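The concrete feature set of these works is not reproduced here; as an illustration, a feature extractor over tweet text and metadata of the kind such a classifier could consume might look as follows. All field names and features below are assumptions, not the features actually used.

```python
def extract_features(tweet):
    """Map a tweet (dict of text and metadata) to a numeric feature vector.

    Fields and features are illustrative assumptions only.
    """
    text = tweet["text"]
    return [
        len(text),                               # tweet length
        text.count("!"),                         # sensationalist punctuation
        text.count("http"),                      # rough URL count
        int(tweet.get("user_verified", False)),  # verified account?
        tweet.get("follower_count", 0),          # audience size
        tweet.get("retweet_count", 0),           # spread so far
    ]
```

Vectors of this kind would then be fed to a standard supervised classifier (e.g. logistic regression or a random forest) trained on the labelled dataset.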
In recent years, scalable RDF stores for the cloud have been developed. Distribution increases the complexity compared to RDF stores running on a single computer. In order to gain a deeper understanding of how, e.g., data placement or distributed query execution strategies affect performance, we have developed Koral, a modular glass-box profiling system. With its help, it is possible to test the behaviour of existing or newly created strategies that tackle the challenges caused by distribution in a realistic distributed RDF store. The design goal of Koral is that only the evaluated component needs to be exchanged, while adaptations to other components are kept minimal.
With the release of SPARQL 1.1 in 2013, property paths were introduced, which make it possible to formulate queries that do not explicitly define the length of the path traversed within an RDF graph. Existing RDF stores have been adapted to support property paths. In order to give insight into how well the current implementations of property paths in RDF stores work, we introduce a benchmark for evaluating property path support. To support both realistic RDF graphs and arbitrarily scalable synthetic RDF graphs as benchmark datasets, a query generator was developed that creates queries from query templates. Furthermore, we present the results of our benchmark for four RDF stores frequently used in academia and industry.
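The benchmark's actual templates are not reproduced here; as a sketch, instantiating a property-path query template with concrete IRIs drawn from the dataset could look as follows. The template syntax and the `%name%` placeholder convention are assumptions for illustration.

```python
def instantiate(template, bindings):
    """Replace %name% placeholders in a query template with concrete IRIs."""
    query = template
    for name, value in bindings.items():
        query = query.replace("%" + name + "%", value)
    return query

# A property-path template: the + operator matches paths of arbitrary length.
TEMPLATE = "SELECT ?x WHERE { %start% <http://example.org/knows>+ ?x }"

query = instantiate(TEMPLATE, {"start": "<http://example.org/alice>"})
```

A query generator would repeatedly draw such bindings from the benchmark dataset, whether realistic or synthetically scaled, to produce many concrete queries per template.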
The taxonomy is a fundamental component of an ontology. In a taxonomy, classes are arranged hierarchically, linked by a subclass-of relation. Complete taxonomies have exactly one most general class, called the root class. In Wikidata, the root class is the class "entity". The root class is unique in that it is the only class that has no superclasses in the taxonomy. However, Wikidata's taxonomy is incomplete with regard to this property. Orphan classes are classes that are not the root class but still have no superclasses. Thereby, orphan classes violate the uniqueness of the root class.
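Under this definition, orphan classes can be read off directly from the subclass-of relation: every non-root class without a superclass is an orphan. A minimal sketch (the class names in the example are illustrative, not actual Wikidata items):

```python
def find_orphans(classes, subclass_of, root):
    """Return all classes that are not the root but have no superclass.

    `subclass_of` maps each class to the set of its direct superclasses;
    classes missing from the map have none.
    """
    return {c for c in classes
            if c != root and not subclass_of.get(c)}

# Illustrative toy taxonomy: "widget" is disconnected from the root.
classes = {"entity", "person", "artist", "widget"}
subclass_of = {"person": {"entity"}, "artist": {"person"}}

orphans = find_orphans(classes, subclass_of, "entity")  # {"widget"}
```

On Wikidata itself, the same check would run over the class hierarchy induced by the subclass-of property (P279).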
Last month saw the public release of the Starcraft II Learning Environment (SC2LE): a protocol with accompanying libraries that enables both writing scripted agents and training reinforcement learning models to play the video game Starcraft II. DeepMind, the creators of AlphaGo, have made it their goal to tackle this task next. Starcraft II is a multi-player game featuring an only partially observable map, very large action and state spaces, and delayed credit assignment that requires long-term strategic planning. It has also fostered a large competitive scene of professional human players. This talk will give a short introduction to the game, an overview of the provided APIs, a summary of current state-of-the-art techniques, and some new ideas for future work.
In this thesis, various reinforcement learning algorithms and types of classifiers are tested and compared. For the comparison, the selected algorithms with their respective classifiers are individually optimized and trained for a given problem and are then directly compared with each other. The given problem is a variant of the classic game "Tron": a problem that is simple in terms of its rules, but nevertheless high-dimensional with respect to its state space and dynamic in its decision-making. The selected algorithms are REINFORCE with baseline, Q-Learning, DQN, and the A3C algorithm. As classifiers, linear function approximations and convolutional neural networks are compared.
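Of the compared algorithms, tabular Q-learning has the simplest core: a single update rule applied per observed transition. A minimal sketch of that update (the thesis pairs the algorithms with function approximators such as linear models or CNNs rather than a table):

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

# Toy transition: reward 1.0, all future values still zero.
Q = defaultdict(float)
q_update(Q, "s0", "left", 1.0, "s1", actions=["left", "right"])
# Q[("s0", "left")] is now 0.5 * (1.0 + 0.9 * 0.0 - 0.0) = 0.5
```

DQN replaces the table with a neural network trained toward the same target; REINFORCE and A3C instead update a parameterized policy directly.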
The extraction of individual reference strings from the reference section of scientific publications is an important step in the citation extraction pipeline. Current approaches divide this task into two steps: first detecting the reference section areas and then grouping the text lines in these areas into reference strings. We propose a classification model that considers every line in a publication as a potential part of a reference string. By applying line-based conditional random fields rather than constructing the graphical model from individual words, dependencies and patterns typical of reference sections provide strong features while the overall complexity of the model is reduced.
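The CRF itself is not reproduced here; as an illustration, line-level features of the kind that make reference sections recognizable could be extracted as follows. The specific features are simplified assumptions, not the model's actual feature set.

```python
import re

def line_features(line):
    """Illustrative per-line features for a line-based sequence labeller."""
    stripped = line.strip()
    return {
        # References are often enumerated like "[12] ..."
        "starts_with_bracket_number": bool(re.match(r"\[\d+\]", stripped)),
        # Reference strings usually contain a publication year.
        "contains_year": bool(re.search(r"\b(19|20)\d{2}\b", stripped)),
        # Page ranges are a common reference component.
        "contains_pages": "pp." in stripped,
        # Continuation lines of a reference are often indented.
        "is_indented": line.startswith((" ", "\t")),
        "length": len(stripped),
    }
```

In a linear-chain CRF over lines, such features would be combined with the labels of neighbouring lines, so that the regular structure of a reference section reinforces the per-line evidence.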