In this project, the aim is to develop a smart search engine for images by deeply learning their characteristics. Pre-trained Convolutional Neural Network (CNN)-based models such as CaffeNet, AlexNet, VGGNet and GoogLeNet will be used. This requires solid knowledge of CNNs, so I will give several lectures during the semester in the context of this project.
The main goal of this project is to build a system that continuously learns from user input in order to retrieve relevant images. At the beginning, the system is linked to a large-scale dataset consisting of unlabelled images. The user then enters a keyword for a desired type of image. Since the system does not initially understand the keyword, a diverse set of images will be retrieved, and the user will select only the relevant ones. The next retrieval iteration must then contain more relevant images. By repeating this step, the user should eventually receive only relevant images; a good system should converge within fewer iterations. The result of each query is reused for subsequent ones.
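The feedback loop described above can be sketched as follows. This is a minimal illustration, not the project's prescribed method: it assumes each image is represented by a CNN feature vector (here replaced by random vectors), uses cosine similarity for retrieval, and applies a simple Rocchio-style query update toward the features the user marked as relevant.

```python
import numpy as np

def retrieve(features, query, k=5):
    """Return indices of the k images whose feature vectors are
    closest to the query by cosine similarity."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = feats @ q
    return np.argsort(-sims)[:k]

def refine_query(query, relevant_feats, alpha=0.5):
    """Rocchio-style update: move the query toward the mean of the
    feature vectors the user marked as relevant."""
    return alpha * query + (1 - alpha) * relevant_feats.mean(axis=0)

# Toy data: random vectors standing in for CNN features of 100 images.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))
query = rng.normal(size=64)

ranked = retrieve(features, query, k=5)
# Suppose the user marks the first two results as relevant:
query = refine_query(query, features[ranked[:2]])
ranked2 = retrieve(features, query, k=5)
```

In the real system, the `features` matrix would hold activations from one of the pre-trained CNNs above, and the loop of retrieval, user selection, and query refinement would repeat until only relevant images are returned.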
Imagine this system being used for many years: the same system would then be able to retrieve relevant images for a large number of keywords from the very first query. This could, for instance, improve video retrieval on YouTube, which currently relies entirely on metadata.
Teams of 4-8 students will work together on one project.
For more information, please send an email to firstname.lastname@example.org
The kick-off meeting will take place on 27.10.2017 at 10:00 in B017