The learned latent space models a continuous, bi-directional aging process, capturing head shape deformation as well as appearance changes across a wide range of ages.
We propose a novel generative adversarial network architecture that consists of a single conditional generator and a single discriminator. The conditional generator handles transitions across age groups and consists of three parts: an identity encoder, a mapping network, and a decoder. We assume that while a person’s appearance changes with age, their identity remains fixed; therefore, we encode age and identity along separate paths. Each age group is represented by a unique pre-defined distribution. Given a target age, we sample from the respective age group’s distribution to obtain an age code vector. The age code is passed to a mapping network that maps it into a learned, unified age latent space. The resulting latent space approximates continuous age transformations. The input image is processed separately by the identity encoder to extract identity features. The decoder takes the mapping network’s output, a target age latent vector, and injects it into the identity features using modulated convolutions, originally proposed in StyleGAN2 [19]. During training, we use an additional age encoder to relate both real and generated images to the pre-defined distributions of their respective age classes.
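The age path of the generator can be sketched in a few lines. This is a minimal NumPy toy, not the actual model: the dimensionalities (`CODE_DIM`, `LATENT_DIM`, `CHANNELS`), the Gaussian-around-an-anchor form of the per-group distributions, the two-layer ReLU mapping network, and the per-channel scaling used to stand in for StyleGAN2's modulated convolution are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_GROUPS = 10    # one pre-defined distribution per age cluster
CODE_DIM = 32      # age-code dimensionality (assumption)
LATENT_DIM = 64    # unified age latent space size (assumption)
CHANNELS = 8       # identity feature channels (assumption)

# Unique pre-defined distribution per age group: here, a Gaussian
# centered on a fixed random anchor (an illustrative choice).
GROUP_MEANS = rng.normal(size=(NUM_GROUPS, CODE_DIM))

def sample_age_code(group, noise=0.1):
    """Sample an age code vector from the target group's distribution."""
    return GROUP_MEANS[group] + noise * rng.normal(size=CODE_DIM)

# Mapping network: a small MLP from age code to the unified age latent space.
W1 = rng.normal(size=(CODE_DIM, 128)) * 0.1
W2 = rng.normal(size=(128, LATENT_DIM)) * 0.1

def mapping_network(code):
    return np.maximum(code @ W1, 0.0) @ W2   # ReLU MLP

# Heavily simplified stand-in for StyleGAN2 modulation: an affine of the
# age latent yields per-channel scales applied to the identity features.
A = rng.normal(size=(LATENT_DIM, CHANNELS)) * 0.1

def modulate(identity_feats, w):
    scales = w @ A + 1.0                     # shape (CHANNELS,)
    return identity_feats * scales[:, None, None]

identity_feats = rng.normal(size=(CHANNELS, 4, 4))  # stand-in encoder output
w = mapping_network(sample_age_code(group=5))       # some target age group
out = modulate(identity_feats, w)

# Interpolating codes from adjacent groups approximates an in-between age,
# reflecting the (approximately) continuous latent space.
w_mid = mapping_network(0.5 * (sample_age_code(4) + sample_age_code(5)))
```

The key design point the sketch mirrors is the separation of paths: identity features are computed once from the image, and only the age latent changes per target group.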
Dataset: FFHQ-Aging. We used a crowd-sourcing platform to annotate gender and an age cluster for all 70,000 images in FFHQ. We defined 10 age clusters that capture both geometric and appearance changes throughout a person’s life: 0–2, 3–6, 7–9, 10–14, 15–19, 20–29, 30–39, 40–49, 50–69, and 70+.
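Mapping an age in years to one of these 10 clusters is a simple binning step. A stdlib sketch, assuming integer ages and inclusive upper bounds as written in the cluster list:

```python
import bisect

# Inclusive upper bound of each cluster except the open-ended last one.
UPPER_BOUNDS = [2, 6, 9, 14, 19, 29, 39, 49, 69]
CLUSTER_LABELS = ["0-2", "3-6", "7-9", "10-14", "15-19",
                  "20-29", "30-39", "40-49", "50-69", "70+"]

def age_cluster(age):
    """Return the index (0-9) of the age cluster containing `age`."""
    # bisect_left finds the first bound >= age, so an age equal to a
    # bound (e.g. 2) still lands in that cluster.
    return bisect.bisect_left(UPPER_BOUNDS, age)
```

For example, `age_cluster(2)` returns 0 (cluster 0–2) and `age_cluster(70)` returns 9 (cluster 70+).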