Abstract

Generative models can be computationally complex, which is reflected in their implementation and training and in the specialized hardware required to run and test them. We propose a model with a simplified architecture: a single neural network with two fully connected layers, which can be conditioned to generate data with requested features. It is a simplified alternative to higher-complexity models such as variational autoencoders (VAEs) and generative adversarial networks (GANs). This chapter presents an architecture that combines the conditioning advantage of generative adversarial networks with the ease of training of autoencoders. It can be used for generative tasks with a straightforward implementation and can run on a CPU for rapid prototyping. In addition, the latent space can be visualized, as in variational autoencoders, and queried by conditioning and sampling from it. The learning error rate remains around 1% on average when the data are abstracted onto only two dimensions, and the model can interpolate over the latent space, generating new images from the continuous space.
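To make the described architecture concrete, the following is a minimal sketch of a conditioned two-layer fully connected generator and a latent-space interpolation, written in plain numpy. All dimensions, layer sizes, activations, and the one-hot conditioning scheme here are illustrative assumptions, not details taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions for illustration): a 2-D latent
# space, 10 condition classes, and 28x28 flattened output images.
LATENT_DIM, N_CLASSES, HIDDEN, OUT_DIM = 2, 10, 128, 28 * 28

# Two fully connected layers: (latent + one-hot condition) -> hidden -> image.
W1 = rng.normal(0.0, 0.1, (LATENT_DIM + N_CLASSES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT_DIM))
b2 = np.zeros(OUT_DIM)

def generate(z, label):
    """Forward pass: condition by concatenating a one-hot label to z."""
    one_hot = np.zeros(N_CLASSES)
    one_hot[label] = 1.0
    x = np.concatenate([z, one_hot])
    h = np.tanh(x @ W1 + b1)                  # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # pixels squashed to (0, 1)

# Interpolating between two points of the 2-D latent space, for a fixed
# condition, yields a sequence of images from the continuous space.
z_a, z_b = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
frames = [generate((1 - t) * z_a + t * z_b, label=3)
          for t in np.linspace(0.0, 1.0, 5)]
```

Because the latent space is only two-dimensional, each training sample's code can be plotted directly, and conditioned queries amount to sampling a point in this plane together with a class label.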

Keywords: generative models, data completion, conditional generation
