Institute for Web Science and Technologies · Universität Koblenz - Landau

Interpretability in Machine Learning

Zeyd Boukhers

Although we use evaluation metrics to assess machine learning models and compare them, each of these metrics is an incomplete description of most real-world tasks, particularly in high-risk environments (e.g. health care). Therefore, and given the expected extensive use of machine learning in our daily lives (e.g. autonomous driving, health care), interpreting these models is becoming more and more important. Interpretability in machine learning can be defined as the ability to answer the question “why?”, which helps us understand how the model treats the problem and under which conditions it might fail. Since deep learning-based models are far more widely used than classical ones, I will give an overview of interpretability in deep learning models and explain one method that can interpret the prediction decision on images.
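The abstract does not name the image-interpretation method. As a hedged illustration only, one common family of such methods computes the gradient of the class score with respect to the input pixels (a saliency map): pixels whose score gradient has large magnitude contributed most to the decision. The sketch below assumes a toy linear scorer, for which the gradient is just the weight vector, and uses a gradient-times-input attribution; all names (`x`, `w`, `saliency`) are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4x4 "image", flattened to 16 pixels, and the weights of a
# hypothetical linear scorer: score(x) = w . x
x = rng.random(16)
w = rng.standard_normal(16)

score = float(w @ x)           # model output for this input
grad = w                       # d(score)/dx is exactly w for a linear model
saliency = np.abs(grad * x)    # gradient-times-input attribution per pixel

# Pixels with the largest saliency values are the ones the model
# "looked at" most for this prediction.
top_pixels = np.argsort(saliency)[::-1][:3]
print("score:", score)
print("most influential pixels:", top_pixels)
```

For a deep network the gradient is obtained by backpropagation instead of being the weight vector itself, but the interpretation of the resulting map is the same.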

Video of the talk

03.09.20 - 10:15
via Big Blue Button