Origins of Neural Word Embeddings
Word embeddings are mappings of words to dense, real-valued vectors. The word embeddings that arise as a by-product of training neural networks for language modelling have been shown to encode certain syntactic and semantic regularities. This talk will give a quick introduction to neural networks, explain how they can be applied to the task of language modelling, and discuss the semantic properties of word embeddings. Lastly, the goal of Lukas' thesis will be presented.
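A well-known illustration of such regularities is analogy arithmetic on the vectors (e.g. "king − man + woman ≈ queen"). The sketch below uses hand-picked toy vectors, not embeddings learned by a real model, purely to show the mechanics of vector offsets and cosine similarity:

```python
import numpy as np

# Toy 3-d "embeddings" (illustrative values, NOT learned from data)
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: angle-based closeness of two vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The analogy "king - man + woman" as vector arithmetic
target = emb["king"] - emb["man"] + emb["woman"]

# Nearest word to the target vector (excluding the query word itself)
best = max((w for w in emb if w != "king"),
           key=lambda w: cosine(emb[w], target))
print(best)  # with these toy vectors: queen
```

With real embeddings trained on large corpora, the same nearest-neighbour search recovers many such analogies, which is the kind of regularity the talk refers to.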
27.10.16 - 10:15