DFG Project: CAR
Conditional Argumentative Reasoning
Being able to provide decision support in the light of uncertain and contradictory information is one of the core functionalities of modern and future AI systems. This challenge calls for methods that not only handle huge amounts of data but can also reason symbolically, with both defeasible rules mined from the data and arguments constructed from these rules. Within AI, the research area of formal argumentation has recently gained increasing attention. Computational models of formal argumentation can build, compare, and analyse arguments, thus providing an approach to rational decision support in the light of contradictory information. In contrast, other research areas addressing similar problems, such as default reasoning, defeasible reasoning, and, in particular, conditional reasoning, focus on the role of rules in inference, especially the uncertainty about whether a rule applies. To address the challenge of handling information that is both uncertain and contradictory, both aspects have to be taken into account.
The project CAR aims at establishing a theoretical basis for integrative approaches to formal argumentation and rule-based reasoning. Technically, we will consider the approaches of Abstract Dialectical Frameworks (ADFs) and Conditional Logic (CL) and focus on the following two research questions. First, in an ADF, acceptance of arguments is defined through so-called acceptance conditions. These acceptance conditions can be interpreted as rules, which yields a knowledge base in CL. One can then apply reasoning mechanisms from CL, such as System Z, compare the results with the original ADF reasoning mechanisms, and, in particular, analyse the results in general argumentative terms. Second, in the same way, any knowledge base in CL can be interpreted as an ADF. One can then apply ADF reasoning mechanisms, such as stable semantics, and thus define a new reasoning mechanism for CL. Both translations and research questions provide ways to compare the two approaches. Investigating them will bring insights into how these approaches relate and, more importantly, how they can benefit from each other. Both research areas have developed diverse evaluation criteria for concrete reasoning approaches, such as toy examples and rationality postulates, and through our translations, each area gains access to the other's criteria.
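To make the ADF side of the picture concrete, the following is a minimal, hypothetical sketch (the statement names and acceptance conditions are illustrative, not taken from the project): each statement carries an acceptance condition, a Boolean function over the truth values of the other statements, and a two-valued model is an interpretation that agrees with every acceptance condition. Read as rules, these conditions are exactly the material that a translation to CL would start from.

```python
from itertools import product

# Illustrative ADF: three statements with Boolean acceptance conditions.
statements = ["a", "b", "c"]

# a is accepted unconditionally; b is accepted iff a is accepted;
# c is accepted iff b is not accepted.
conditions = {
    "a": lambda v: True,
    "b": lambda v: v["a"],
    "c": lambda v: not v["b"],
}

def two_valued_models(statements, conditions):
    """Enumerate interpretations v with v(s) == C_s(v) for every statement s."""
    models = []
    for values in product([True, False], repeat=len(statements)):
        v = dict(zip(statements, values))
        if all(conditions[s](v) == v[s] for s in statements):
            models.append(v)
    return models

print(two_valued_models(statements, conditions))
# -> [{'a': True, 'b': True, 'c': False}]
```

Brute-force enumeration is exponential in the number of statements, but for a toy example it makes the fixed-point character of ADF semantics easy to see.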
In this project, we will address both research questions outlined above in detail. More concretely, we will develop novel reasoning mechanisms for ADFs based on CL reasoning mechanisms and vice versa, and evaluate these, together with existing approaches, against the evaluation criteria made available by the other area.
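On the CL side, one reasoning mechanism mentioned above, System Z, can be sketched in a few lines. The code below is a hedged illustration using the classic "penguins" knowledge base (the atoms and conditionals are a standard textbook example, not material from the project): conditionals (B|A) are iteratively partitioned by tolerance, and a conditional's layer index is its Z-rank.

```python
from itertools import product

# Atoms: p = penguin, b = bird, f = flies (illustrative example).
atoms = ["p", "b", "f"]

# Conditionals (B|A), encoded as (antecedent, consequent) over worlds.
kb = [
    (lambda w: w["b"], lambda w: w["f"]),       # birds fly
    (lambda w: w["p"], lambda w: w["b"]),       # penguins are birds
    (lambda w: w["p"], lambda w: not w["f"]),   # penguins do not fly
]

worlds = [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

def tolerated(cond, delta):
    """(B|A) is tolerated by delta iff some world verifies it (A and B hold)
    while falsifying no conditional in delta."""
    ant, cons = cond
    return any(
        ant(w) and cons(w) and all((not a(w)) or c(w) for a, c in delta)
        for w in worlds
    )

def z_partition(kb):
    """Repeatedly remove the conditionals tolerated by the remainder;
    layer i collects the conditionals of Z-rank i."""
    remaining, layers = list(kb), []
    while remaining:
        layer = [c for c in remaining if tolerated(c, remaining)]
        if not layer:
            raise ValueError("knowledge base is inconsistent")
        layers.append(layer)
        remaining = [c for c in remaining if c not in layer]
    return layers

partition = z_partition(kb)
print([len(layer) for layer in partition])  # prints [1, 2]
```

Here "birds fly" lands in the lowest layer while the two penguin conditionals, which are only tolerable once the bird rule is set aside, receive the higher rank; this ranking is what System Z uses to resolve the conflict in favour of the more specific rules.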
For more details, please visit the project website (car.mthimm.de).
- Duration: 10/2019 - 09/2021
- Source of Funding: DFG (Deutsche Forschungsgemeinschaft)
- Technische Universität Dortmund
- Website: http://car.mthimm.de