428125 (18) [Avatar] Offline
#1
We really want this book to be written by you, not just by us. Send us your thoughts, diagrams, and ideas. Improve on our explanations. Let's get the hive mind buzzing. If your idea fits in the book, we'll be sure to attribute it to you in whatever way you'd like: links to your blog, your GitHub repo, or even your favorite cause or nonprofit.
255505 (2) [Avatar] Offline
#2
Bought the book. The first three chapters are excellent. Excited about completing the NLP journey with you guys. When can we expect the next chapters? Can't wait...!


Regards
Manav
428125 (18) [Avatar] Offline
#3
We're blazing a trail for your journey right now. Cole's Chapter 9 on neural nets (mainly CNNs) is in review at Manning now. Sorry for the skipping around, but Cole did a great job of making his neural nets chapter "stand on its own." And I just finished reading Hannes' Chapter 6 on Word2vec. Word2vec is so powerful, and Hannes has made it a lot easier to grasp.

And I just pushed the latest code we're working on to PyPI as a package called `nlpia`, if you'd like to get a head start on the upcoming chapters that use it. Check it out (`pip install nlpia`) and let us know what you think. It's a work in progress, so you can send us pull requests or feature ideas at github.com/totalgood/nlpia

Do you have any ideas for how we could improve the way we explain things? Are there any applications for NLP in your world that you'd like us to touch on in future chapters?
18474 (1) [Avatar] Offline
#4
Many thanks for this very exciting book and, as previously mentioned, we can't wait to read the remaining chapters.

With two co-authors experienced in deep learning, one has to expect (and hope for!) a more state-of-the-art, deep-neural-net-based focus in addressing NLP challenges.

With this in mind, a humble suggestion is to dedicate a chapter that practically discusses recent approaches to core language-analysis tasks (e.g. dependency parsing, part-of-speech tagging, NER, etc.), leveraging recent tools such as spaCy and Google's SyntaxNet / Parsey McParseface models.

The discussion would become even more interesting if the book included some 'put-it-all-together' material on how to integrate the outputs of those language-analysis models with word embeddings (e.g. Word2vec, GloVe) and attention models in a holistic deep learning approach to text classification.

Best wishes
428125 (18) [Avatar] Offline
#5
Excellent suggestions. We were thinking the same thing. A chapter on Word2vec is on the way, as well as two on CNNs and RNNs. We reviewed the ToC last week and decided to retain and beef up the chapter on "grammar-based approaches" like spaCy and SyntaxNet, and we plan to find a place in the book for a discussion of character-based models like fastText as well. We are trying to integrate all the NLP options into a cohesive picture of where they fit in an NLP pipeline and what applications they enable.
351915 (5) [Avatar] Offline
#6
A question I have is whether functional programming languages with immutable data (though possibly with rebinding of variables) are suitable for natural language processing, or whether it is much better to use imperative programming languages (please don't consider the existence of libraries).
428125 (18) [Avatar] Offline
#7
Functional programming patterns don't appear much in the early examples of the book. We're hoping that imperative and object-oriented programming logic will be accessible to a wider audience, and functional programming is rarely used in the machine learning texts and courses that I see. However, functional programming may have advantages for scalability. So look to businesses and open source projects where scalability is the dominant challenge (Google, Microsoft, Facebook, Spotify, Twitter, Apple) and you may find functional programming examples there.

And machine learning seems to me to be about allowing software to write itself and change its own state, so object-oriented patterns may be a better fit for the problem.

Please share any functional programming NLP examples you find and we'll be happy to discuss them in the "mind expansion" chapter about scaling an NLP pipeline.
468265 (3) [Avatar] Offline
#8
I am loving this book and can't wait for any new chapter!

I have a question I couldn't answer from reading these chapters; maybe it will come up in later chapters... but I really can't wait to know more! :)

Regarding Figure 1.1, I understand that regular expressions and rule-based NLP (grammars) fall inside the finite state machine category, but where would you put machine learning and deep learning algorithms? Are they FSMs with tons of possible states?
If they are a way to choose the next state, are they a broader type of Turing Machine?

And... what about Neural Turing Machines? What is their relationship/differences with this Figure 1.1 and how they impact NLP?

Thanks for this great work!
428125 (18) [Avatar] Offline
#9
Wow, that's deep. I should try to explain that better.

Regular expressions are a programming language for defining and running a particular kind of FSM.
But Python (and Unix/Linux/POSIX) regexes take some liberties with some of the rules that FSMs must obey, like "no lookahead." So they aren't exactly FSMs, but if you need them to be, you can simply avoid those features in Python's `re` package. The reason you might want them to be true FSMs is that an FSM always halts with a decision: match or no match. Poorly written regexes, on the other hand, can run practically forever or crash. Kyle Gorman is working on a whole book to explain FSMs with Python examples.
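Here's a rough sketch (not from the book, just illustrative patterns) of both points using Python's `re` module: a lookahead, and a nested quantifier that triggers catastrophic backtracking:

```python
import re

# Lookahead: a feature of Python's re that goes beyond the strict
# match-and-consume behavior described above.
# Match "running" only when it is followed by " fast":
print(re.search(r'running(?= fast)', 'running fast'))   # matches
print(re.search(r'running(?= fast)', 'running slow'))   # None

# A "poorly written" regex: the nested quantifier forces Python's
# backtracking engine to try exponentially many ways to split the 'a's
# before it can report a failed match.
pattern = r'(a+)+b'
text = 'a' * 25 + 'c'        # no 'b', so the match must eventually fail
# re.match(pattern, text)    # uncomment to watch it grind for a long time
```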

Neural Turing Machines are new to me. Until I've implemented and tested some examples (Chapter 12) I can't say much about them authoritatively. My vague understanding is that they are neural networks that can learn to write algorithms/programs that a Turing machine can run. NTMs are programs trained to write programs, based on the examples (data) that you, the trainer, feed into them. Deep.

And that brings us to machine learning, another self-programming approach. Machine learning came before NTMs (I think), and it's simpler. If you can reduce a problem to a mathematical function, like multiplying and adding a bunch of numbers (the dot product of a matrix of weights with a vector of input values), it becomes much easier to find the right "program" that solves your problem. You just have the computer search for the best possible "weights" to use in that formula, based on how close the output is to the examples you train it with (the training dataset, which pairs input vectors with desired output vectors). It's trial-and-error programming of a very simple program, without loops or recursion or any other fancy programming tricks, just math. It's how a computer does TDD on its own :) The trick is writing algorithms that make good guesses at the weights and don't take too much data to figure out a good set of coefficients (weights) for the math formula.
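Here's a toy sketch (made-up numbers, nothing from the book) of that "search for the weights" idea with plain numpy:

```python
import numpy as np

# Made-up training set: each row of X is an input vector,
# and y holds the desired output for that row.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 2.])   # the "program" we want is roughly w = [1, 1]

w = np.zeros(2)                  # initial guess at the weights
for _ in range(1000):            # trial and error: nudge w to shrink the error
    error = X.dot(w) - y         # how far our simple "program" is from the examples
    w -= 0.1 * X.T.dot(error) / len(y)   # small step downhill on the squared error

print(w)                         # ends up close to [1., 1.]
```

That loop is just gradient descent on the squared error; libraries like scikit-learn or Keras do the same kind of search with much smarter guessing.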

Machine learning is what most of the rest of NLPIA is about. You can see some rough examples in the Python package `nlpia` as we build examples for the book. Just keep pulling the latest source code, because we're updating it weekly (http://github.com/totalgood/nlpia). And send us pull requests for NLP machine learning problems or programs you are working on.
442004 (2) [Avatar] Offline
#10
Great book... please send more chapters ASAP!
This book is great. Love the detailed info explaining the algorithms!
Question: Will this book cover NER in the same detail that TF-IDF was covered?

and when will new chapters be released?

thanks,
Marcelo.
428125 (18) [Avatar] Offline
#11
Absolutely! NER is going to be covered in a chapter dedicated entirely to "information extraction." We'll show you how to extract names of people, places, and things, plus dates, prices, zip codes, addresses, etc. The idea is to have a chapter that shows you how to use grammars (patterns) to extract information and create "structured data" from unstructured natural language documents or statements (see the small regex sketch after the examples below).

Example applications:
1. A chatbot can use information extraction to find answers to questions.
2. A Twitter bot, like openchat, which we helped Zak Kent build for PyCon to promote OpenSpaces events: it extracts times and room numbers from tweets by PyCon attendees and retweets reminders of those events. So far it's working well!
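Here's the flavor of that kind of pattern-based extraction, as a minimal sketch (hypothetical patterns and an invented example tweet, not the ones openchat actually uses):

```python
import re

tweet = "OpenSpaces chatbot session at 3:30pm in room B110, Portland OR 97209"

# Hypothetical patterns, just to show the idea of grammar/pattern-based extraction
time_pat = re.compile(r'\b\d{1,2}:\d{2}\s*(?:am|pm)\b', re.IGNORECASE)
room_pat = re.compile(r'\broom\s+([A-Z]?\d+)\b', re.IGNORECASE)
zip_pat = re.compile(r'\b\d{5}(?:-\d{4})?\b')

print(time_pat.findall(tweet))   # ['3:30pm']
print(room_pat.findall(tweet))   # ['B110']
print(zip_pat.findall(tweet))    # ['97209']
```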

Manning says they are going to release the chapter on semantic analysis at the end of this week.
428125 (18) [Avatar] Offline
#12
Update from Manning Editors: Chapter 4 on Semantic Analysis will be on MEAP early next week.
RationalPi (3) [Avatar] Offline
#13
Looks like an interesting book!

What languages and frameworks will be covered for doing NLP at scale? Scala? Java? Python? UIMA? Spark? TensorFlow? Others?

Thanks
Josh
428125 (18) [Avatar] Offline
#14
The only language you will have to use is **Python**. We'll use Python bindings to pull the strings on compiled C/C++ libraries throughout the book. No JavaScript (Node.js) packages are required for this book. Some of the packages/frameworks/libraries that help with scaling under the hood: ATLAS, TensorFlow, Theano, CUDA, and cuDNN.

And a Java VM is only required for one particular application unrelated to scaling. We're using SUTime for datetime extraction within openchat, a Twitter bot "attending" PyCon this weekend in Portland. We weren't able to find an equivalent pip-installable binary with a Python API. But we may be able to achieve equivalent performance (NLP accuracy and throughput) with a package like spaCy. If so, we'll eliminate this lonely Java JAR dependency.

The Python packages that we will help you use to scale your pipeline include Keras, gensim, spaCy, and numpy. They each enable parallelization and/or out-of-core (larger-than-memory) computation in different ways, and they gracefully fall back to single-core computation for readers who don't have access to an environment configured for multi-core/multi-threaded/GPU processing with these packages. We will only briefly mention approaches for cluster and cloud computing (AWS Lambda, Kubernetes, Spark, Google Compute Engine and TPUs), since that is not the focus of this book.

We will provide the steps we used to configure an Ubuntu environment to take advantage of these scaling features on a single high-performance GPU + 16-core server, but "your mileage may vary" in your environment.
299827 (1) [Avatar] Offline
#15
428125 wrote: Update from Manning Editors: Chapter 4 on Semantic Analysis will be on MEAP early next week.


An update on this?
428125 (18) [Avatar] Offline
#16
I guess I can't promise anything from Manning anymore. All I know is that chapters 4 and 5 are in their hands (ready for MEAP) and 6-9 will be finalized and in their hands this week.
468265 (3) [Avatar] Offline
#17
Thanks for the update!
lukewrites (1) [Avatar] Offline
#19
Hi, hoping to see an update to the MEAP soon!

One suggestion I would make is to define "overfitting" when you first mention the concept. I came to the book with a linguistics background (rather than a data science/ML one), and had no idea what that meant.

Thanks, and keep up the great work, I'm loving the book so far!