AttGAN: Facial Attribute Editing by Only Changing What You Want


Fig. 1. Facial attribute editing results from our AttGAN. Zoom in for better resolution.

Fig. 2. Overview of our AttGAN, which contains three main components at training: the attribute classification constraint, reconstruction learning, and adversarial learning. The attribute classification constraint guarantees correct attribute manipulation on the generated image, reconstruction learning preserves the attribute-excluding details, and adversarial learning is employed for visually realistic generation.
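The three training components described in the Fig. 2 caption can be summarized as three loss terms on the generator. Below is a minimal numerical sketch, not the paper's exact formulation: the use of binary cross-entropy for the attribute classification constraint, an L1 loss for reconstruction, a non-saturating GAN loss for adversarial learning, and the weighting coefficients `lambda_cls` and `lambda_rec` are all illustrative assumptions.

```python
import numpy as np

def attribute_classification_loss(pred_probs, target_attrs):
    """Binary cross-entropy pushing the edited image's predicted
    attributes toward the requested target attribute vector."""
    eps = 1e-8
    p = np.clip(pred_probs, eps, 1 - eps)
    return -np.mean(target_attrs * np.log(p)
                    + (1 - target_attrs) * np.log(1 - p))

def reconstruction_loss(x, x_rec):
    """L1 distance between the input image and its reconstruction under
    the original attributes, preserving attribute-excluding details."""
    return np.mean(np.abs(x - x_rec))

def generator_adversarial_loss(d_fake_probs):
    """Non-saturating GAN loss: the generator is rewarded when the
    discriminator scores edited images as real."""
    eps = 1e-8
    return -np.mean(np.log(np.clip(d_fake_probs, eps, 1.0)))

def generator_loss(pred_probs, target_attrs, x, x_rec, d_fake_probs,
                   lambda_cls=10.0, lambda_rec=100.0):
    """Illustrative total generator objective; the lambda weights here
    are assumptions, not the paper's reported hyperparameters."""
    return (generator_adversarial_loss(d_fake_probs)
            + lambda_cls * attribute_classification_loss(pred_probs, target_attrs)
            + lambda_rec * reconstruction_loss(x, x_rec))
```

A perfect reconstruction drives the second term to zero, while the first and third terms pull the edited image toward the target attributes and the real-image manifold, respectively.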
Fig. 3. Illustration of AttGAN extension for attribute style manipulation. (a) shows the extended framework based on the original AttGAN. θ denotes the style controllers and Q denotes the style predictor. (b) shows the visual effect of changing attribute style by varying θ.
Fig. 4. Results of single facial attribute editing. For each specified attribute, the editing task is to invert it, e.g., editing female to male, male to female, mouth open to mouth closed, and mouth closed to mouth open.
Fig. 5. Comparisons of multiple facial attribute editing among our AttGAN, VAE/GAN [7] and IcGAN [8]. For each specified attribute combination, the facial attribute editing here is to invert each attribute in that combination.
Fig. 7. Exemplar results of attribute style manipulation by using our extended AttGAN.
Fig. 8. Comparisons among StarGAN [16], VAE/GAN [7], IcGAN [8] and our AttGAN in terms of (a) facial attribute editing accuracy and (b) preservation error of the other attributes.

Fig. 9. Comparisons among Fader Networks [13], Shen et al. [10], CycleGAN [21] and our AttGAN in terms of (a) facial attribute editing accuracy and (b) preservation error of the other attributes.
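The two evaluation axes in Figs. 8 and 9 can be sketched as follows. This is a hedged illustration, not the paper's exact protocol: it assumes a pretrained attribute classifier returning per-attribute probabilities, and the 0.5 decision threshold is an assumption.

```python
import numpy as np

def editing_accuracy(pred_probs, edited_idx, target_vals, thresh=0.5):
    """Fraction of edited images for which the attribute classifier
    agrees with the requested value of the edited attribute."""
    preds = (pred_probs[:, edited_idx] > thresh).astype(float)
    return float(np.mean(preds == target_vals))

def preservation_error(pred_probs, orig_probs, edited_idx):
    """Mean absolute change in the classifier's scores over all
    attributes other than the edited one; lower means the other
    attributes are better preserved."""
    mask = np.ones(pred_probs.shape[1], dtype=bool)
    mask[edited_idx] = False  # exclude the deliberately edited attribute
    return float(np.mean(np.abs(pred_probs[:, mask] - orig_probs[:, mask])))
```

A good editing method scores high on the first metric while keeping the second near zero, which is exactly the trade-off the two panels of each figure visualize.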
Fig. 10. Effect of different combinations of the four components.

Fig. 11. Exploration of AttGAN on image style translation. The diagonal images are the inputs.