Thank you for your work on making GANs easier to understand.

I found your implementation of the DCGAN in chapter 4 very clear and informative. One question on that, though: wouldn't it be beneficial to add a dropout layer at the end of the discriminator? It's cited in a few places as an important trick, for instance in François Chollet's book "Deep Learning with Python". In the Keras-GAN repository that you referenced, they actually add quite a few dropout layers to their models (see the sketch below).
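To make the suggestion concrete, here is a minimal sketch of what I mean, following the Keras-GAN pattern of dropout after each convolutional block (the filter counts and input shape are placeholders, not your chapter-4 values):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Input, Conv2D, LeakyReLU,
                                     Dropout, Flatten, Dense)

# Discriminator sketch -- filter counts and input shape are
# placeholders, not the actual chapter-4 values.
discriminator = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(64, kernel_size=3, strides=2, padding='same'),
    LeakyReLU(0.2),
    Dropout(0.25),   # dropout after the conv block, as Keras-GAN does
    Conv2D(128, kernel_size=3, strides=2, padding='same'),
    LeakyReLU(0.2),
    Dropout(0.25),   # ...and again just before the classifier head
    Flatten(),
    Dense(1, activation='sigmoid'),
])
```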

Anyway, I would be keenly interested to see your take on a WGAN-GP implementation. I'm not a fan of the Keras-GAN version, as it seems to make quite a few mistakes. For instance, the original paper recommends against batchnorm layers in the critic (discriminator) network and advocates layer normalization instead, since batchnorm makes each sample's critic output depend on the other samples in the batch, which conflicts with the per-sample gradient penalty; Keras-GAN uses batchnorm anyway.
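To illustrate, here is a rough critic sketch with layer normalization in place of batchnorm (the layer sizes are placeholders, and I may well be misreading the paper):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Input, Conv2D, LayerNormalization,
                                     LeakyReLU, Flatten, Dense)

# Critic sketch: LayerNormalization instead of BatchNormalization,
# and a linear output since the critic produces an unbounded
# Wasserstein score rather than a probability.
critic = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(64, kernel_size=3, strides=2, padding='same'),
    LayerNormalization(),
    LeakyReLU(0.2),
    Conv2D(128, kernel_size=3, strides=2, padding='same'),
    LayerNormalization(),
    LeakyReLU(0.2),
    Flatten(),
    Dense(1),  # no sigmoid here
])
```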

I'm slowly starting to understand WGAN-GP; it seems quite important given how widely this modified loss function has been adopted in the literature. If possible, I think it would be highly informative to see exactly how you would modify your existing DCGAN network to convert it into a WGAN-GP. Would it be possible to include this in your Git repository?
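In case it helps frame the question, here is my current understanding of the gradient-penalty term, sketched with TensorFlow's GradientTape; the coefficient of 10 is the value from the paper, but the function itself is my own guess at an implementation, so please correct me if I have it wrong:

```python
import tensorflow as tf

def gradient_penalty(critic, real_images, fake_images, lambda_gp=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 at
    points interpolated between real and fake samples."""
    batch_size = tf.shape(real_images)[0]
    # One random interpolation coefficient per sample
    eps = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = eps * real_images + (1.0 - eps) * fake_images
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = critic(interpolated, training=True)
    grads = tape.gradient(scores, interpolated)
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
    return lambda_gp * tf.reduce_mean(tf.square(norms - 1.0))

# The critic loss would then be something like:
#   tf.reduce_mean(fake_scores) - tf.reduce_mean(real_scores) \
#       + gradient_penalty(critic, real_batch, fake_batch)
```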

I'm very much looking forward to chapter 6, and I'll stress again that a good understanding of WGAN-GP would be valuable there, since the progressive GAN also uses the WGAN-GP loss function.

Thanks again and best regards,

-Jason.