621851 (1)
#1
Thank you for your contributions toward making GANs easier to understand.

I found your implementation of the DCGAN in Chapter 4 very clear and informative. One point on that, though: wouldn't it be beneficial to add a dropout layer at the end of the discriminator? It's cited in a few places as an important trick, for instance in François Chollet's book "Deep Learning with Python". In the Keras-GAN repository that you referenced, they actually add quite a few dropout layers to their model.

Anyway, I would be keenly interested to see your take on a WGAN-GP implementation. I'm not a fan of the Keras-GAN implementation, as it seems to make quite a few mistakes. For instance, the original paper recommends against adding batchnorm layers in the discriminator (critic) network and instead advocates the use of layernorm.
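
To illustrate the difference, a critic block in that style might look roughly like this (my own sketch, assuming TensorFlow 2.x Keras; not code from the paper or from Keras-GAN):

# Sketch of one critic (discriminator) conv block for WGAN-GP.
# The paper avoids batch norm in the critic because the gradient penalty
# is applied per sample; layer normalization is one suggested alternative.
from tensorflow.keras import layers

def critic_block(x, filters):
    x = layers.Conv2D(filters, kernel_size=4, strides=2, padding="same")(x)
    x = layers.LayerNormalization()(x)  # instead of layers.BatchNormalization()
    x = layers.LeakyReLU(alpha=0.2)(x)
    return x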

I'm slowly starting to understand WGAN-GP; it seems quite important given the widespread adoption of this modified loss function in the literature. If possible, I think it would be highly informative to see exactly how you would modify your existing DCGAN network to convert it into a WGAN-GP. Would it be possible to include this in your Git repository?

I'm very much looking forward to Chapter 6, and again stress that a good understanding of WGAN-GP would be highly informative, as Progressive GAN also uses the WGAN-GP loss function.

Thanks again and best regards,

-Jason.
Vladimir Bok (10)
#2
Hi Jason,

Thank you for the feedback and kind words. Please see my responses inline:

I found your implementation of the DCGAN in Chapter 4 very clear and informative. One point on that, though: wouldn't it be beneficial to add a dropout layer at the end of the discriminator? It's cited in a few places as an important trick, for instance in François Chollet's book "Deep Learning with Python". In the Keras-GAN repository that you referenced, they actually add quite a few dropout layers to their model.

Dropout is indeed an effective regularization technique that in many cases improves the training process and outcomes. It was omitted from Chapter 4 to keep the chapter focused. That said, dropout is covered in Chapter 7, where we discuss the use of GANs in semi-supervised learning.
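
For illustration, dropout would typically slot in just before the discriminator's final classification layer. A rough sketch of the idea (layer sizes are made up, and this is not the Chapter 4 listing):

# Illustrative only: a DCGAN-style discriminator with dropout added near the
# end. Layer sizes are made up; this is not the Chapter 4 listing.
from tensorflow.keras import layers, models

def build_discriminator(img_shape=(28, 28, 1)):
    model = models.Sequential()
    model.add(layers.Conv2D(64, kernel_size=3, strides=2,
                            padding="same", input_shape=img_shape))
    model.add(layers.LeakyReLU(alpha=0.2))
    model.add(layers.Conv2D(128, kernel_size=3, strides=2, padding="same"))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU(alpha=0.2))
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.4))  # the extra regularization step in question
    model.add(layers.Dense(1, activation="sigmoid"))
    return model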

Anyway, I would be keenly interested to see your take on a WGAN-GP implementation. I'm not a fan of the Keras-GAN implementation, as it seems to make quite a few mistakes. For instance, the original paper recommends against adding batchnorm layers in the discriminator (critic) network and instead advocates the use of layernorm.

I'm slowly starting to understand WGAN-GP; it seems quite important given the widespread adoption of this modified loss function in the literature. If possible, I think it would be highly informative to see exactly how you would modify your existing DCGAN network to convert it into a WGAN-GP. Would it be possible to include this in your Git repository?

Thank you for the suggestion. Having the opportunity to get feedback from engaged readers like you before the book goes to print is what will make the content the best it can be. I will include an implementation of WGAN in the book’s Appendix.
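
In the meantime, the key change relative to the Chapter 4 DCGAN is the critic's loss: the sigmoid cross-entropy is replaced by the Wasserstein loss plus a gradient penalty computed on samples interpolated between real and generated images. Here is a rough sketch of that penalty term, assuming a TensorFlow 2.x eager setup (this is not the Appendix code):

import tensorflow as tf

def gradient_penalty(critic, real_images, fake_images, lambda_gp=10.0):
    """WGAN-GP term: pushes the critic's gradient norm toward 1 on points
    interpolated between real and generated samples."""
    batch_size = tf.shape(real_images)[0]
    # One random interpolation coefficient per sample (NHWC images assumed)
    epsilon = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = epsilon * real_images + (1.0 - epsilon) * fake_images

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        critic_scores = critic(interpolated, training=True)
    grads = tape.gradient(critic_scores, interpolated)
    grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
    return lambda_gp * tf.reduce_mean(tf.square(grad_norm - 1.0))

# Critic loss = mean(critic(fake)) - mean(critic(real)) + gradient_penalty(...)

The training loop changes as well (multiple critic updates per generator update, and a linear critic output with no sigmoid), but the penalty above is the part readers tend to find least familiar.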

Please do not hesitate to reach out with any other feedback or suggestions.

Thank you,
Vladimir // vladimirbok.com