Quantitative comparisons on FFHQ, LSUN Bedroom, and LSUN Church. P and R in the table denote precision and recall, respectively.
Our method improves StyleGAN2 on large datasets in terms of FID and recall. Combined with GGDR (Ours* in the table),
GLeaD introduces further significant gains, achieving new state-of-the-art performance on various datasets.
Images synthesized by our models trained on FFHQ, LSUN Bedroom, and LSUN Church, respectively.
Reconstruction results of real and synthesized input images.
These results indicate that our D learns features
aligned with the domain of G, consistent with our motivation.
Comment: Proposes to leverage the feature map of G to supervise the output features of D.
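To illustrate the idea described in the comment, below is a minimal sketch (not the authors' code) of supervising D's output features with G's feature maps during the discriminator update. It assumes a StyleGAN2-style G that can expose an intermediate feature map and a U-Net-style D with a decoder head; the names `return_features`, `g_feat`, `d_feat`, and `lambda_feat`, as well as the L1 distance, are illustrative assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def d_feature_supervision(G, D, z, lambda_feat=1.0):
    """Hypothetical D-side loss: adversarial term plus generator-led
    feature supervision on fake images (a sketch, not the official code)."""
    # Generate a fake image and grab G's intermediate feature map;
    # both are detached so only D receives gradients from this term.
    with torch.no_grad():
        fake_img, g_feat = G(z, return_features=True)

    # D is assumed to return the realness logit and a decoded feature map
    # with the same shape as g_feat (U-Net-style decoder head).
    logit, d_feat = D(fake_img, return_features=True)

    # Standard non-saturating discriminator term on fake images.
    adv_loss = F.softplus(logit).mean()

    # Feature-alignment regularizer: D's output features are pushed toward
    # G's feature map, so the generator "leads" the discriminator.
    feat_loss = F.l1_loss(d_feat, g_feat)

    return adv_loss + lambda_feat * feat_loss
```

In this sketch the alignment term only regularizes D, which is the sense in which G leads D; the particular feature layer, distance function, and weighting would follow the paper's actual configuration.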