Classification is the problem of categorizing new observations by using a classifier learnt from already categorized examples. The area of machine learning has brought forth a series of different approaches to deal with this problem, from decision trees to support vector machines and others. Recently, approaches to statistical relational learning have even taken the perspective of knowledge representation and reasoning into account by developing models on more formal logical and statistical grounds. In this project, we will significantly generalize this reasoning aspect of machine learning towards the use of computational models of argumentation, a popular approach to commonsense reasoning, for reasoning within machine learning.

Consider, e.g., the following two-step classification approach. In the first step, rule learning algorithms are used to extract frequent patterns and rules from a given data set. The output of this step comprises a huge number of rules (given fairly low confidence and support parameters), and these cannot directly be used for the purpose of classification, as they are usually inconsistent with one another. Therefore, in the second step, we interpret these rules as the input for approaches to structured argumentation - more specifically ASPIC+, DeLP, ABA, and deductive argumentation - and probabilistic and other quantitative extensions of those. Given a new observation, the classification of the new observation is determined by constructing arguments on top of these rules for the different classes and determining their justification status via the argumentative inference procedures of these approaches.

More precisely, the project CAML will investigate radically novel machine learning approaches, such as the one outlined above, in detail and develop the new field of "Argumentative Machine Learning" in general: a tight integration of Computational Argumentation and Machine Learning. This has several benefits.
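As a purely illustrative sketch of the two-step approach, the fragment below assumes the rule-mining step has already produced a set of rules (each with premises, a predicted class, and a confidence score); the names `Rule` and `classify` are hypothetical. Conflicts between applicable rules for different classes are resolved by a simple confidence-based preference, which is only a crude stand-in for the justification status computed by frameworks such as ASPIC+ or DeLP:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    premises: frozenset   # feature values required for the rule to fire
    conclusion: str       # predicted class label
    confidence: float     # confidence reported by the rule-mining step


def classify(observation, rules):
    """Build one argument per applicable rule and return the class of the
    preferred (here: highest-confidence) argument, or None if no rule applies."""
    # Arguments are the rules whose premises hold for the observation.
    arguments = [r for r in rules if r.premises <= observation]
    if not arguments:
        return None
    # Resolve attacks between arguments for different classes by
    # preferring the more confident argument.
    best = max(arguments, key=lambda r: r.confidence)
    return best.conclusion


rules = [
    Rule(frozenset({"has_wings"}), "bird", 0.90),
    Rule(frozenset({"has_wings", "is_heavy"}), "penguin", 0.95),
]

print(classify({"has_wings", "is_heavy"}, rules))  # → penguin
print(classify({"has_wings"}, rules))              # → bird
```

In a full argumentation-based treatment, the preference relation would instead arise from the attack and defeat relations of the chosen structured argumentation formalism, so that the winning argument is one that is justified under the chosen semantics.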
The use of argumentation techniques makes it possible to obtain classifiers that are by design able to explain their decisions, and therefore addresses the recent need for Explainable AI: classifications are accompanied by a dialectical analysis showing why arguments for the conclusion are preferred to counterarguments. This automatic deliberation, validation, reconstruction and synthesis of arguments helps in assessing trust in the classifier, which is fundamental if one plans to take action based on a prediction. Argumentation techniques in machine learning also allow the easy integration of additional expert knowledge in the form of arguments. As there are many different approaches to structured argumentation that take different perspectives on the issue of argumentation, their application in machine learning will provide new insights into their usefulness and allow for a comparison between them on a different level.
For more details please visit the project website.
Mar 2018 - Feb 2021
Source of funding
DFG - Deutsche Forschungsgemeinschaft
- Technische Universität Darmstadt
Project home page