blackflicker
#1
I hope the book will include and explain the inner workings of the state-of-the-art deep learning image recognition systems that exist today, like the ones at Google, Facebook, Clarifai, etc. That would be really great.

Most books and resources stick to toy examples like cats vs. dogs or recognizing handwritten digits (granted, those were classic ML topics in earlier days).

Examples that use the latest pretrained models for image recognition, like Inception-v4 from Google, would be best, along with how to use them in popular Python frameworks like Keras.
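To be concrete, a rough sketch like the one below is the kind of thing I mean: classifying a single image with the pretrained ImageNet weights that Keras ships for InceptionV3 (Keras doesn't bundle V4), using a placeholder image path.

```python
# Rough sketch: classify one image with a pretrained InceptionV3 in Keras.
import numpy as np
from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
from keras.preprocessing.image import load_img, img_to_array

model = InceptionV3(weights='imagenet')                    # downloads pretrained ImageNet weights

img = load_img('elephant.jpg', target_size=(299, 299))     # placeholder image; InceptionV3 expects 299x299 input
x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=5)[0])                  # top-5 (class id, label, probability) tuples
```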

Also, examples of RNNs and other recent advances would be great too.
Andrew Trask (Grokking Deep Learning Author)
#2
I've been putting some thought into this lately. The big problem with using the big image datasets (and the state-of-the-art approaches) is that not only are they complex, but 99% of readers won't have hardware powerful enough to train them (a cluster of high-end GPUs). Furthermore, many state-of-the-art approaches are just temporary fads that won't stand the test of time.

I'm actually considering focusing more on expanding the book to include the more general problems that state-of-the-art approaches seek to solve (memory, attention, representation), thus giving the best of both worlds. That introduces something people can actually train on their laptops, while also introducing concepts that will stand the test of time as important to the DL community, whereas something like "Deep Residual Networks" could be replaced again at ICML this year. Coverage of a trendy topic like Deep Residual Nets might be better suited to a blog post.
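To make "train on their laptops" concrete, the kind of thing I have in mind is a rough sketch like this: a tiny NumPy-only network fitting a toy XOR-style dataset in a few seconds (the data, layer sizes, and learning rate here are just placeholders):

```python
# Toy example: a one-hidden-layer network in plain NumPy, trained on 4 samples.
# Runs in seconds on any laptop -- no GPUs or big datasets required.
import numpy as np

np.random.seed(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # placeholder inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR-style targets

w1 = np.random.randn(2, 8) * 0.5      # input -> hidden weights
w2 = np.random.randn(8, 1) * 0.5      # hidden -> output weights

for step in range(5000):
    hidden = np.tanh(X @ w1)          # forward pass
    pred = hidden @ w2
    error = pred - y
    # backward pass: gradients of the squared error
    grad_w2 = hidden.T @ error
    grad_w1 = X.T @ ((error @ w2.T) * (1 - hidden ** 2))
    w2 -= 0.01 * grad_w2              # plain gradient-descent updates
    w1 -= 0.01 * grad_w1

print(pred.round(2))                  # should end up close to [[0], [1], [1], [0]]
```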

I'd love to hear anyone's thoughts on the matter here.
Rich E
#3
Totally agree on staying away from covering fads in depth. It's the same reason something like "The Design of Everyday Things" is still relevant decades later. Udacity and the like can cover the latest trends online, which is a better medium for constantly changing solutions to the core problems.

Your book seems like a great primer on understanding the core problems. If you can show how these relate to some of the current fads without going too much into depth, the book will have more staying power.
DHS
#4
Andrew Trask (Grokking Deep Learning Author) wrote:
I'm actually considering focusing more on expanding the book to include the more general problems that state-of-the-art approaches seek to solve (memory, attention, representation), thus giving the best of both worlds... I'd love to hear anyone's thoughts on the matter here.


I'm also in total agreement. Approaches involving memory, attention, and representation align with the more general strategies that biological intelligence evolved to solve real-world vision problems, unlike the more brute-force approaches that require immense amounts of data and computing power.