Integrating Feedforward Design into a Generative Network to Synthesize Supplementary Training Data for Object Classification
A common problem in machine learning is that large amounts of data are required but are often difficult to collect. Although supplementary data can be generated by machine learning methods, in particular artificial neural networks, large quantities of data are required for this task as well. This work addresses the problem by implementing a technique called feedforward design, an alternative to the backpropagation algorithm that is commonly used to train neural networks. First, two feedforward architectures already introduced in the literature are presented. These networks are capable of generating grayscale images of handwritten digits. The deeper of the two networks is further developed in the scope of this work with the aim of synthesizing images of higher complexity; more precisely, both higher-resolution images and color images are synthesized. Furthermore, a deeper architecture and a feedforward generative adversarial network are developed. For evaluation, the influence of adding synthesized data to an original dataset on the training of a classifier is investigated. Additionally, a feedforward generator is compared with a generative adversarial network, and timing measurements are conducted.
25.06.20 - 10:15
via BigBlueButton