Seminar "Knowledge Graph Embedding"
Summer Term 2020
Knowledge graphs represent information as a multi-relational graph
composed of entities (nodes) and relations (different types of edges).
Each edge is represented as a triple of the form
(head entity, relation, tail entity), also called a fact,
indicating that two entities are connected by a specific relation
(e.g., (AlfredHitchcock, DirectorOf, Psycho)).
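As a simple illustration, such a graph can be stored as a set of (head, relation, tail) triples. This is only a minimal sketch; the additional Vertigo fact is a hypothetical example, not part of any particular dataset:

```python
# A knowledge graph stored as a set of (head, relation, tail) triples.
triples = {
    ("AlfredHitchcock", "DirectorOf", "Psycho"),
    ("AlfredHitchcock", "DirectorOf", "Vertigo"),  # hypothetical extra fact
}

# Entities are the nodes; each relation labels a typed, directed edge.
entities = {h for h, _, _ in triples} | {t for _, _, t in triples}
relations = {r for _, r, _ in triples}
```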
This form of relational knowledge representation has a long history in logic
and artificial intelligence. More recently, it has also formed the basis of
the Semantic Web, which aims to create a web of data that is readable by machines.
Knowledge graph embedding aims to embed the components of a knowledge graph,
i.e., its entities and relations, into continuous vector spaces, so as to
simplify manipulation while preserving the inherent structure of the graph.
For example, a common setup is to represent each entity (e.g.,
AlfredHitchcock) as an n-dimensional vector, such that the vectors of
similar entities are close to each other in the embedding space.
Knowledge graph embeddings thus provide a new view of the data stored in
the graph and allow knowledge graphs to be used as input for neural networks.
They have been used to implement a variety of downstream tasks such as knowledge
graph completion and question answering.
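To make the "nearby vectors" intuition concrete, here is a toy sketch with hand-picked 4-dimensional embeddings. The entity StanleyKubrick and all vector values are purely illustrative and not taken from any trained model:

```python
import numpy as np

# Hypothetical pre-trained entity embeddings (n = 4 here for readability;
# real models typically use tens to hundreds of dimensions).
embeddings = {
    "AlfredHitchcock": np.array([0.9, 0.1, 0.8, 0.2]),
    "StanleyKubrick":  np.array([0.8, 0.2, 0.7, 0.3]),  # another director: nearby
    "Psycho":          np.array([0.1, 0.9, 0.2, 0.8]),  # a film: farther away
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similar entities (two directors) end up closer than dissimilar ones.
sim_directors = cosine(embeddings["AlfredHitchcock"], embeddings["StanleyKubrick"])
sim_mixed = cosine(embeddings["AlfredHitchcock"], embeddings["Psycho"])
```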
After the seminal TransE paper in 2013, the task has received wide academic attention, and a plethora of different knowledge graph embedding approaches have been published. This seminar aims to give an overview of the published work by presenting related papers that have been grouped into topical trends. Participants are expected to consult further (original) literature to investigate their topic. The surveys by Nickel et al. and Wang et al. (listed below) are a good starting point.
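For instance, TransE models a fact (h, r, t) by requiring h + r ≈ t in the embedding space and scores a triple by the distance ||h + r − t|| (lower means more plausible). A minimal sketch with random, untrained vectors; dimension and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimension (an illustrative choice)

# Toy vocabulary; in practice these vectors are learned from training triples.
entities = {name: rng.normal(size=dim)
            for name in ["AlfredHitchcock", "Psycho", "Vertigo"]}
relations = {name: rng.normal(size=dim) for name in ["DirectorOf"]}

def transe_score(head, relation, tail):
    """TransE: a fact should satisfy h + r ≈ t, so score by L2 distance."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return float(np.linalg.norm(h + r - t))

score = transe_score("AlfredHitchcock", "DirectorOf", "Psycho")
```

With random vectors the score is meaningless; training pushes scores of true triples down and scores of corrupted triples up.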
Each participant will be assigned a topic in the area of knowledge graph embedding together with approx. 3 related paper references.
Each participant is expected to give an oral presentation (25 minutes presentation time, plus approx. 10 minutes of discussion and 5 minutes of feedback) in which they summarize, compare, and contrast the assigned papers. The presentation should ensure that all attendees understand the basic idea of the algorithms presented; the focus should therefore be on motivation and examples rather than on mathematical details.
At the end of the semester a written paper (8-12 pages in the ICLR 2020 LaTeX template) on the assigned topic needs to be handed in. The structure of the paper should follow the structure of the presentation, but further elaborate its contents. Papers are not expected to present novel scientific findings but should rather review and contrast existing work.
The seminar will be organized as a weekly meeting during the lecture period, with one talk per week on Wednesdays, 14:15-15:45 in K 107. The introductory meeting (mandatory) will take place in the second week of the lecture period, on 22 April 2020. Active participation in the discussions after each talk is required to pass the course. Weekly meetings will conclude before the start of the exam period.
To get an early estimate of the number of participants, interested students are asked to send a short informal email to the lecturers. Please indicate your course of study, semester count, and whether you have completed the courses Machine Learning and/or Semantic Web (neither is required). In case of a high number of interested students, those who send their email earlier will be preferred.
The deadline for submission of written papers is 9 September 2020.
- Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, Oksana Yakhnenko: Translating Embeddings for Modeling Multi-relational Data. NIPS 2013: 2787-2795
- Maximilian Nickel, Kevin Murphy, Volker Tresp, Evgeniy Gabrilovich: A Review of Relational Machine Learning for Knowledge Graphs. Proceedings of the IEEE 104(1): 11-33 (2016)
- Quan Wang, Zhendong Mao, Bin Wang, Li Guo: Knowledge Graph Embedding: A Survey of Approaches and Applications. IEEE Trans. Knowl. Data Eng. 29(12): 2724-2743 (2017)