Incremental Embeddings?
In recent years, vector space embeddings have become the dominant approach for incorporating semantic information into neural-network-based machine learning architectures. Word embeddings map words into a continuous latent vector space, with the goal that similar words appear close together in that space. Knowledge graph embeddings map semantic triples into vector spaces, with the goal of predicting unseen triples from this representation. However, almost all existing embedding approaches support only offline training: if new training data becomes available, or old data becomes invalid, the embeddings must be retrained from scratch. This talk will present an idea for adapting existing embedding techniques to support incremental training, i.e., to incorporate changes in the training data into the learned representation much more efficiently than by restarting training.
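To give a flavor of the idea, the following is a minimal sketch (not the speaker's actual method) of incrementally folding a newly arrived triple into a previously trained TransE-style knowledge graph embedding: instead of retraining the whole model, a few gradient steps are applied that touch only the entity and relation vectors involved in the new triple. All names, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy embedding tables; in practice these would come from a previously
# trained model rather than random initialization.
entities = {e: rng.normal(scale=0.1, size=dim)
            for e in ["berlin", "germany", "paris", "france"]}
relations = {r: rng.normal(scale=0.1, size=dim) for r in ["capital_of"]}

def transe_distance(h, r, t):
    """TransE plausibility score: lower means more plausible."""
    return np.linalg.norm(entities[h] + relations[r] - entities[t])

def incremental_update(new_triples, lr=0.05, margin=1.0, epochs=50):
    """Fold new triples into the existing embeddings with margin-based
    gradient steps that only update the vectors actually involved,
    instead of retraining the whole model from scratch."""
    names = list(entities)
    for _ in range(epochs):
        for h, r, t in new_triples:
            # Corrupt the tail to obtain a negative sample.
            t_neg = rng.choice([n for n in names if n not in (h, t)])
            pos = entities[h] + relations[r] - entities[t]
            neg = entities[h] + relations[r] - entities[t_neg]
            # Margin ranking loss: max(0, margin + ||pos|| - ||neg||)
            if margin + np.linalg.norm(pos) - np.linalg.norm(neg) > 0:
                g_pos = pos / (np.linalg.norm(pos) + 1e-9)
                g_neg = neg / (np.linalg.norm(neg) + 1e-9)
                entities[h]     -= lr * (g_pos - g_neg)
                relations[r]    -= lr * (g_pos - g_neg)
                entities[t]     += lr * g_pos
                entities[t_neg] -= lr * g_neg

# A new triple arrives: only "paris", "france", "capital_of" (plus the
# sampled negatives) are touched by the update.
d_before = transe_distance("paris", "capital_of", "france")
incremental_update([("paris", "capital_of", "france")])
d_after = transe_distance("paris", "capital_of", "france")
```

After the update, the new triple scores as more plausible (smaller distance) than before, while the vast majority of the embedding table is left untouched; this locality is what makes the incremental approach cheaper than full retraining.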
22.03.18 - 10:15