Institute for Web Science and Technologies · Universität Koblenz - Landau

Forschungspraktikum/Projektpraktikum "Machine Learning Application"


Winter semester 2020/2021

In this research lab, you will build a complete machine learning system following the generic pipeline in order to solve a specific problem. In each phase of this pipeline, you will apply the methods and techniques learned in the Machine Learning and Data Mining course; completing this lecture is therefore mandatory. Moreover, other fundamental approaches will be used where necessary [1], including sophisticated and modified approaches from the state of the art.
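
As a rough illustration of what such a pipeline looks like in code, here is a minimal sketch using scikit-learn with a placeholder dataset and model; the actual data, preprocessing steps, and models will depend on the assigned topic.

    # Minimal sketch of the generic ML pipeline: acquire data, preprocess,
    # train a model, and evaluate it. Dataset and model are placeholders.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    X, y = load_iris(return_X_y=True)                      # data acquisition
    X_train, X_test, y_train, y_test = train_test_split(   # train/test split
        X, y, test_size=0.2, random_state=0)

    pipeline = Pipeline([
        ("scale", StandardScaler()),                        # preprocessing
        ("model", LogisticRegression(max_iter=1000)),       # model training
    ])
    pipeline.fit(X_train, y_train)
    print(classification_report(y_test, pipeline.predict(X_test)))  # evaluation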

Important Information

To:

Master and Bachelor students in:

  • Web Science
  • Computer Science
  • Computational Visualistics
  • Business Informatics

Kick-off / Introductory meeting

  • When: August 11 at 10:00 (your attendance is mandatory).
  • Where: Online via BBB (OpenOlat)
  • Slides

How to register?

  • Form a group of four people to work on a topic
  • Give a name to your group
  • Send (one) email to boukhers@uni-koblenz.de with the subject: MLA registration request (group) before the kick-off
  • Attend the introductory meeting
  • After the topic is assigned, write a proposal (up to two pages) describing your potential solution.
  • Register for the exam

Important note: If you could not form a group, you may still take part in the research lab; you will be grouped with other students who could not form a group. Please send an email to boukhers@uni-koblenz.de with the subject: MLA registration request (individual)

Exam

  • When: ----
  • Where: ----
  • Type: Presentation + Report + Software
  • Registration (Klips): Open from ---- to ---- (do not miss the deadline!)
  • Cancellation (Klips): Until ----

Topics

Interpretable Machine Learning Model

“Interpretability is the degree to which a human can understand the cause of a decision” [1]
“Interpretability is the degree to which a human can consistently predict the model’s result” [2]
“A model can be described as interpretable if it can be entirely comprehended at once” [4]

Interpretability is the ability to answer the question “why?”, which helps to understand how the model treats the problem and under which conditions it might fail.

Why "interpretability"?

  • Trust in high-risk environments [3]
  • Fairness
  • Debugging
  • Social acceptance of ML
  • etc.

Where is interpretability used in our daily lives?

There are many examples where explainability is used, especially in recommender systems:

  • Frequently bought together (Amazon): when Amazon recommends a product while you are searching for or buying another one, it also gives a reason why the recommended product might suit you, e.g. that it is frequently bought together with the product you are viewing.
  • Because you watched (YouTube, Netflix and other video streaming platforms): a video is not just recommended to you; the recommendation comes with a reason, namely its similarity to a video you have already watched.

In this research lab, we will focus on interpreting the behaviour of an LSTM/CRF model for reference parsing. We will re-implement an existing approach for reference parsing and interpret its results. Specifically, we are interested in local interpretations that explain individual predictions. Moreover, the explanations have to be human-friendly, i.e. short explanations consisting of understandable causes. To this end, we will need to answer three questions:

  1. Why is token x classified as y? The model has to provide the main features (of the token in question or of other tokens, since its class also depends on the classes of neighbouring tokens) on which it bases the classification of x as y.
  2. What is the minimum change to the features of x or of its neighbours that changes the prediction from y to y'?
  3. What is the closest example (or cluster of examples) from the training set that made the model learn the parameters responsible for its decision on x?
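
For illustration, the following is a minimal, perturbation-based sketch addressing the first question: it ranks the features of a token and its neighbours by how much occluding each one lowers the model's confidence in the predicted class. The predict_proba function is a hypothetical stand-in for the re-implemented LSTM/CRF model, and the feature-dictionary input format is an assumption; this is a sketch, not the lab's prescribed solution.

    # Perturbation-based local explanation for one token's prediction (question 1).
    # `predict_proba` is a hypothetical placeholder: it maps a sequence of per-token
    # feature dictionaries to per-token class-probability dictionaries.
    from typing import Callable, Dict, List, Tuple

    def explain_token(
        predict_proba: Callable[[List[Dict[str, str]]], List[Dict[str, float]]],
        tokens: List[Dict[str, str]],
        index: int,
        predicted_class: str,
    ) -> List[Tuple[Tuple[int, str], float]]:
        """Rank (position, feature) pairs by the drop in probability of
        `predicted_class` for token `index` when that feature is occluded."""
        base = predict_proba(tokens)[index][predicted_class]
        importances = []
        # Occlude features of the token itself and of its direct neighbours,
        # since the class of a token also depends on the neighbouring tokens.
        for pos in range(max(0, index - 1), min(len(tokens), index + 2)):
            for feat in tokens[pos]:
                perturbed = [dict(t) for t in tokens]   # copy the sequence
                perturbed[pos][feat] = "<UNK>"          # occlude a single feature
                new_prob = predict_proba(perturbed)[index][predicted_class]
                importances.append(((pos, feat), base - new_prob))
        # Largest drops first: these are the features the decision relies on most.
        return sorted(importances, key=lambda pair: pair[1], reverse=True)

The second question can reuse the same perturbation loop by searching for the smallest set of occlusions (or feature substitutions) that flips the prediction from y to y', and the third question can be approached with prototype- or example-based methods such as the criticism framework of [2].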

References

[1] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.

[2] Kim, B., Khanna, R., & Koyejo, O. O. (2016). Examples are not enough, learn to criticize! criticism for interpretability. In Advances in neural information processing systems (pp. 2280-2288).

[3] Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

[4] Lipton, Z. C. (2018). The mythos of model interpretability. Queue, 16(3), 31-57.

Lecturer

  • boukhers@uni-koblenz.de
  • Research Assistant
  • B 104
  • +49 261 287-2765