Attribute2Image
Attribute2Image: Conditional Image Generation from Visual Attributes. This paper investigates the novel problem of generating images from visual attributes. We model an image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. For posterior inference of the latent variables given novel images, we use a general energy-minimization algorithm; as a result, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.
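The layered foreground/background decomposition can be sketched as a small conditional VAE forward pass: an encoder maps an image and its attribute vector to a latent distribution, and a decoder splits the latent code into foreground and background parts that are composited through a per-pixel gate. This is a minimal numpy illustration under assumed toy dimensions and random (untrained) weights; the dimensions, layer shapes, and gating form are assumptions for exposition, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions): flattened image, attribute vector, latent code.
ATTR_DIM, IMG_DIM, LATENT_DIM, HID = 8, 64, 16, 32
HALF = LATENT_DIM // 2  # latent split: foreground half, background half

def glorot(n_in, n_out):
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / (n_in + n_out))

# Randomly initialized weights stand in for learned parameters.
W_enc, b_enc = glorot(IMG_DIM + ATTR_DIM, HID), np.zeros(HID)
W_mu,  b_mu  = glorot(HID, LATENT_DIM), np.zeros(LATENT_DIM)
W_lv,  b_lv  = glorot(HID, LATENT_DIM), np.zeros(LATENT_DIM)
W_fg,  b_fg  = glorot(HALF + ATTR_DIM, IMG_DIM), np.zeros(IMG_DIM)
W_bg,  b_bg  = glorot(HALF + ATTR_DIM, IMG_DIM), np.zeros(IMG_DIM)
W_g,   b_g   = glorot(LATENT_DIM + ATTR_DIM, IMG_DIM), np.zeros(IMG_DIM)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encode(x, y):
    # Attribute-conditioned encoder: q(z | x, y) as a diagonal Gaussian.
    h = np.tanh(np.concatenate([x, y]) @ W_enc + b_enc)
    return h @ W_mu + b_mu, h @ W_lv + b_lv  # mean, log-variance

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z, y):
    # Layered decoder: render foreground and background separately,
    # then composite them with a per-pixel gate (occlusion mask).
    zf, zb = z[:HALF], z[HALF:]
    fg = sigmoid(np.concatenate([zf, y]) @ W_fg + b_fg)
    bg = sigmoid(np.concatenate([zb, y]) @ W_bg + b_bg)
    gate = sigmoid(np.concatenate([z, y]) @ W_g + b_g)
    return gate * fg + (1.0 - gate) * bg

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x,y) || N(0, I)) term of the variational objective.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# One forward pass: attribute-conditioned reconstruction of a random "image".
y = rng.random(ATTR_DIM)
x = rng.random(IMG_DIM)
mu, logvar = encode(x, y)
x_hat = decode(reparameterize(mu, logvar), y)
```

Training would minimize the reconstruction error plus `kl_to_standard_normal(mu, logvar)`; at test time, sampling `z` from the prior and calling `decode(z, y)` generates diverse images for a fixed attribute vector `y`, which is the attribute-conditioned generation setting described above.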
References in zbMATH (referenced in 5 articles )
- Tonolini, Francesco; Radford, Jack; Turpin, Alex; Faccio, Daniele; Murray-Smith, Roderick: Variational inference for computational imaging inverse problems (2020)
- Chen, Junfan; Zhang, Richong; Mao, Yongyi; Wang, Binfeng; Qiao, Jianhang: A conditional VAE-based conversation model (2019)
- Shen, Yuming; Liu, Li; Shao, Ling: Unsupervised binary representation learning with deep variational networks (2019)
- Lock, Eric F.; Li, Gen: Supervised multiway factorization (2018)
- Lin, Daoyu; Wang, Yang; Xu, Guangluan; Li, Jun; Fu, Kun: Transform a simple sketch to a Chinese painting by a multiscale deep neural network (2017)