
I'm reading through the section on Bayesian modelling (5.1.1) and following fine up to the example of Ping and Greg (https://livebook.manning.com#!/book/machine-learning-systems/chapter-5/point-2869-44-44-0). At this point there are a couple of issues:
- P(V) is asserted to be 0.2 + 0.1 + 0.2 = 0.5 without justification
- the entire calculation does not match my understanding of Bayes' rule (with the 'naive' assumption)

Let me write capitals (F, G, C) for the cases where the matched bears agree (on food, going out, and cubs respectively) and lower case for the cases where they disagree. So P(f | M) is the probability of the pair of bears disagreeing on their favourite food given that they are a match.

So for Ping and Greg, V = (f, G, c) (they only agree on going out). Given the instances in table 5.1, P(f, G, c) = 0.2, so:
P(M | f, G, c) = P(f | M) * P(G | M) * P(c | M) * P(M) / P(f, G, c)
= 0.5 * 1 * 0 * 0.6 / 0.2
= 0
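
To make my reading concrete, here's a minimal Python sketch of the calculation as I understand it. The numbers are hard-coded from the example as I read it off table 5.1, so treat them as my assumptions rather than the book's code:

    # Per-feature likelihoods given a match M, for Ping and Greg's
    # observations V = (f, G, c): disagree on food, agree on going
    # out, disagree on cubs.
    p_f_given_M = 0.5   # P(f | M)
    p_G_given_M = 1.0   # P(G | M)
    p_c_given_M = 0.0   # P(c | M)

    p_M = 0.6           # prior P(M)
    p_V = 0.2           # evidence P(f, G, c) from the instances in table 5.1

    # 'Naive' assumption: features are conditionally independent given M.
    p_M_given_V = (p_f_given_M * p_G_given_M * p_c_given_M * p_M) / p_V
    print(p_M_given_V)  # 0.0 -- the zero likelihood P(c | M) forces the
                        # whole posterior to zero

Note that the zero comes entirely from P(c | M) = 0, which is why I can't see how the example arrives at a non-zero answer.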

The implementation seems to be consistent with my calculation for P(V).

Am I off track here, or is there an issue with this example (and possibly with the implementation that follows)?