ArthurZC23/Machine-Learning-A-Probabilistic-Perspective-Solutions

Solution to 2.6


When $E_1$ and $E_2$ are conditionally independent given $H$, we can derive $p(e_1, e_2)$ as follows:

$$
p(e_1, e_2) = \sum_{h'} p(e_1, e_2 \mid h')\, p(h') = \sum_{h'} p(e_1 \mid h')\, p(e_2 \mid h')\, p(h'),
$$

where the sum runs over the support of $H$.

Thus, the set of numbers in (iii) is sufficient for 2.6(b).
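Spelling out the resulting posterior (just Bayes' rule with this sum as the normalizer):

$$
p(h \mid e_1, e_2) = \frac{p(e_1 \mid h)\, p(e_2 \mid h)\, p(h)}{\sum_{h'} p(e_1 \mid h')\, p(e_2 \mid h')\, p(h')},
$$

which uses only the numbers in (iii).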

Hey @andrewgilbert12. The question asks for the set of numbers sufficient to compute the probability vector $P(H \mid e_1, e_2)$, and you are absolutely right: the joint distribution decomposes in terms of $P(E_1 \mid H)$, $P(E_2 \mid H)$, and $P(H)$, so (iii) is the sufficient set of numbers for 2.6(b). Thank you for the correction. I will give you a shout-out in the new version of the solution :).
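If it helps, here is a minimal NumPy sketch of that decomposition; the probability tables are made up purely for illustration, and only the quantities in set (iii) appear:

```python
import numpy as np

# Made-up tables for a binary H and binary E1, E2.
p_h = np.array([0.3, 0.7])           # P(H = h) for h in {0, 1}
p_e1_given_h = np.array([0.9, 0.2])  # P(E1 = 1 | H = h)
p_e2_given_h = np.array([0.6, 0.1])  # P(E2 = 1 | H = h)

# Unnormalized posterior for the observation E1 = 1, E2 = 1,
# using conditional independence: P(e1 | h) P(e2 | h) P(h).
unnorm = p_e1_given_h * p_e2_given_h * p_h

# The normalizer p(e1, e2) is the sum over the support of H,
# exactly the decomposition derived above.
posterior = unnorm / unnorm.sum()
print(posterior)  # P(H | E1 = 1, E2 = 1), sums to 1
```

Nothing beyond $P(H)$, $P(E_1 \mid H)$, and $P(E_2 \mid H)$ is needed; the normalizer falls out of the sum.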

Sounds good! And thanks for the excellent resources; they've been very helpful in working through the problems! :)

You're welcome! I've pushed the new version of the solution to the repository. So the issue is resolved and I will close it, OK? Enjoy your studies :)