Institute for Web Science and Technologies · Universität Koblenz-Landau

Talks

Using neural networks as an approximate heuristic for exact methods in abstract argumentation frameworks


22.07.21. Abstract argumentation is a method for abstracting problems along three dimensions: arguments, attacks, and acceptability, the latter being the central property of a semantics. Exact approaches, which often reduce the problem to another formalism such as SAT or ASP, are computationally hard and therefore difficult to apply to realistic models. To address these issues, this research first uses neural networks to predict the credulous acceptability of abstract arguments, framed as a classification problem. Second, we propose an efficient heuristic that uses the approximate method to set a warm-start point, minimizing backtracking steps and maximizing the performance of a complete solver, the so-called DREED, for abstract argumentation problems. To the best of our knowledge, this combination has not yet been explored within the argumentation community. [read more...]
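
As a rough picture of the pipeline sketched above, the following minimal example (not the talk's actual model) predicts credulous acceptability from simple structural features of a toy argumentation framework and uses the predicted labels as a warm start for a complete solver; the features, labels, and the scikit-learn classifier are stand-ins chosen for brevity.

# Minimal sketch: approximate acceptability predictions as a warm start.
import numpy as np
from sklearn.linear_model import LogisticRegression

# A toy argumentation framework: arguments and an attack relation (attacker, target).
arguments = ["a", "b", "c", "d"]
attacks = [("a", "b"), ("b", "c"), ("c", "d")]

def features(arg):
    # Crude hand-crafted features standing in for a learned representation:
    # number of attackers and number of targets of the argument.
    in_deg = sum(1 for (_, t) in attacks if t == arg)
    out_deg = sum(1 for (s, _) in attacks if s == arg)
    return [in_deg, out_deg]

X = np.array([features(a) for a in arguments])
y = np.array([1, 0, 1, 0])  # hypothetical training labels: credulously accepted or not

clf = LogisticRegression().fit(X, y)
# The (possibly imperfect) predictions seed the search of an exact solver,
# so backtracking only has to repair the arguments the classifier got wrong.
warm_start = dict(zip(arguments, clf.predict(X)))
print(warm_start)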

Deep Learning for Differential Diagnosis and Prediction in EHR Data


08.07.21. Over the last decade, the generation of massive Electronic Health Records (EHR) has allowed researchers to explore the secondary use of these data in biomedical informatics research. Recent research has shown that deep learning models are efficient at extracting important features from EHR data and predicting a disease diagnosis. However, these models perform inadequately when it comes to extracting important features from heterogeneous EHR data and predicting multiple disease outcomes. This thesis aims to provide a method to generalize different EHR structures and then train a deep learning model to predict multiple disease outcomes. The model would thereby help in differential diagnosis, where multiple possible disease outcomes are identified given a set of symptoms. To the best of my knowledge, this is the first time a deep learning model would be used for differential disease diagnosis prediction. [read more...]
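
As a rough illustration of the multi-label setup such a model implies, the sketch below (an assumed architecture, not the thesis model) scores every candidate diagnosis with an independent output unit, so several diseases can be predicted for one record; the feature and label sizes are invented.

# Minimal sketch: multi-label prediction of disease outcomes from encoded EHR features.
import torch
import torch.nn as nn

n_features = 128  # size of the encoded patient record (assumption)
n_diseases = 20   # number of candidate diagnoses (assumption)

model = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, n_diseases),  # one logit per candidate disease
)
loss_fn = nn.BCEWithLogitsLoss()  # independent sigmoid per disease = multi-label setup

x = torch.randn(8, n_features)                    # a batch of 8 encoded records
y = torch.randint(0, 2, (8, n_diseases)).float()  # multi-hot diagnosis labels
loss = loss_fn(model(x), y)
loss.backward()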

Revisiting minimal admissible sets in abstract argumentation


01.07.21. We introduce elementary cores: sets of arguments of an abstract argumentation framework that are minimally admissible for each of their members. Elementary cores are used to decompose arbitrary admissible sets and to characterise certain admissibility-based semantics. They can then be used to explain the reasoning process behind these semantics via a simple rule transition system. [read more...]
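
For context, the standard Dung-style background behind "admissible" (textbook definitions, not part of the talk's contribution): an argumentation framework is a pair $(A, R)$ with arguments $A$ and attack relation $R \subseteq A \times A$, and for a set $S \subseteq A$

\[
\begin{aligned}
S \text{ is conflict-free} &\iff \text{there are no } a, b \in S \text{ with } (a, b) \in R,\\
S \text{ defends } a &\iff \forall b \in A:\ (b, a) \in R \Rightarrow \exists c \in S:\ (c, b) \in R,\\
S \text{ is admissible} &\iff S \text{ is conflict-free and defends every } a \in S.
\end{aligned}
\]

Read against these definitions, a set that is "minimally admissible for each of its members" is presumably inclusion-minimal among the admissible sets containing the respective member.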

Implementation of Control Argumentation Frameworks with Answer Set Programming


24.06.21. Is it possible to calculate how to win a debate? At first this seems an overly complicated problem because of the uncertainty about which arguments either party may use. An answer could perhaps be delivered by Dung's Argumentation Framework (AF) and one of its extensions, the Control Argumentation Framework (CAF). CAFs make it possible to model, in a simple way, how an agent's arguments interact both with arguments that the agent and its opponent will surely use and with those that only might be used. This is accomplished by splitting an argumentation framework into three parts: the fixed part, the uncertain part, and the control part. The fixed part represents arguments and attacks that will surely be used. The uncertain part represents a given set of arguments, together with the attacks connected to them, that may be used by the opponent, and the control part represents the arguments the agent itself can use. The goal of this work is to implement a solver, using answer set programming, that searches the space of arguments to determine whether the agent can find a combination of control arguments that achieves the desired outcome of the argumentation. [read more...]
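
To make the underlying decision problem concrete, the brute-force Python sketch below (an illustration, not the answer set programming encoding developed in this work) enumerates subsets of control arguments and accepts a choice only if the target argument is accepted for every completion of the uncertain part; grounded semantics and all names are simplifying assumptions.

# Minimal sketch: brute-force search for a winning set of control arguments.
from itertools import chain, combinations

def grounded(args, attacks):
    # Grounded extension: iterate Dung's characteristic function from the empty set.
    ext = set()
    while True:
        new = {a for a in args
               if all(any((c, b) in attacks for c in ext)
                      for b in args if (b, a) in attacks)}
        if new == ext:
            return ext
        ext = new

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def winning_control(fixed, attacks, uncertain, control, target):
    # A choice of control arguments wins if the target is accepted (here: in the
    # grounded extension) for every possible completion of the uncertain part.
    for ctrl in powerset(control):
        wins = True
        for unc in powerset(uncertain):
            present = set(fixed) | set(ctrl) | set(unc)
            projected = {(s, t) for (s, t) in attacks if s in present and t in present}
            if target not in grounded(present, projected):
                wins = False
                break
        if wins:
            return set(ctrl)
    return None

# Toy instance: the opponent's argument o attacks our target t; the control
# argument c can counter-attack o. Expected winning choice: {"c"}.
print(winning_control(fixed={"t", "o"}, attacks={("o", "t"), ("c", "o")},
                      uncertain=set(), control={"c"}, target="t"))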

Generating Counterfactual Images for Visual Question Answering by Editing Question-Critical Objects


17.06.21. While Visual Question Answering (VQA) systems have improved significantly in recent years, they still tend to produce errors that are hard for human users to reconstruct. The lack of interpretability in black-box VQA models raises the need for discriminative explanations alongside the models’ outputs. This thesis aims at introducing a method to generate counterfactual images for an arbitrary VQA model. Given a question-image pair, the counterfactual generator should mask the question-critical objects in the image and then predict a minimal number of edits to the image such that the VQA model outputs a different answer. Thereby, the new image should contain semantically meaningful changes, be visually realistic, and remain unchanged in question-answer-irrelevant regions (e.g., the background). To the best of my knowledge, this is the first counterfactual image generator applied to VQA systems that does not apply edits to individual pixels but rather to a spatial mask, without requiring additional manual annotations. [read more...]
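
The generation loop can be sketched as follows; every interface here is a hypothetical stand-in (the actual system works with a trained VQA model and an object-level mask predictor), and the sketch only shows the control flow: mask the question-critical region, edit only inside it, and stop once the model's answer flips.

# Minimal sketch of the counterfactual loop with toy stand-ins.
import numpy as np

def vqa_model(image, question):
    # Stand-in for an arbitrary black-box VQA model: it answers from the mean
    # brightness of the image, so edits inside the mask can flip the answer.
    return "bright" if image.mean() > 0.3 else "dark"

def question_critical_mask(image, question):
    # Stand-in for the mask predictor; in the real system this would come from
    # attribution over detected objects. Here: the centre of the image.
    mask = np.zeros(image.shape, dtype=bool)
    h, w = image.shape
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = True
    return mask

def counterfactual(image, question, steps=10):
    original = vqa_model(image, question)
    mask = question_critical_mask(image, question)
    edited = image.copy()
    for _ in range(steps):
        # Apply a small edit only inside the masked (question-critical) region,
        # leaving the background untouched.
        edited[mask] = np.clip(edited[mask] + 0.1, 0.0, 1.0)
        if vqa_model(edited, question) != original:
            return edited  # smallest number of edit steps that flips the answer
    return None

image = np.full((8, 8), 0.2)
cf = counterfactual(image, "Is the image bright or dark?")
print("answer flipped:", cf is not None)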

Older Entries