Monday, 27 December 2010

pr.probability - Can you explain a step in an expectation maximization algorithm in a Nature article?

These numbers are the normalized likelihoods that the results given in the 10-toss vector
were obtained from the current guessed distribution for coin A (or coin B, respectively).



I'll work out the first two rows for illustration:



The guessed Bernoulli parameter for type A is 0.6 and for type B is 0.5.
According to the binomial distribution formula,
the unnormalized likelihoods for obtaining 5H 5T are:
From A:



L_A = C(10,5)(0.6)^5(0.4)^5



where C(10,5) is the binomial coefficient 10!/(5!5!)



Similarly, from B we obtain:



L_B = C(10,5)(0.5)^5(0.5)^5



The normalized likelihoods are obtained as



For A: L_A/(L_A+L_B) = 0.4491



For B: L_B/(L_A+L_B) = 0.5509
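
As a quick check, here is a minimal Python sketch of the first-row arithmetic (the parameter values 0.6 and 0.5 are the guesses from above; the variable names are only illustrative):

from math import comb

theta_A, theta_B = 0.6, 0.5   # current guessed Bernoulli parameters
heads, tails = 5, 5           # first 10-toss record: 5H 5T

# Unnormalized binomial likelihoods
L_A = comb(10, heads) * theta_A**heads * (1 - theta_A)**tails
L_B = comb(10, heads) * theta_B**heads * (1 - theta_B)**tails

# Normalized likelihoods; the binomial coefficient cancels in this ratio
print(L_A / (L_A + L_B))  # approx 0.4491
print(L_B / (L_A + L_B))  # approx 0.5509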



For the second case, 9H 1T:



L_A = C(10,9)(0.6)^9(0.4)^1



L_B = C(10,9)(0.5)^9(0.5)^1



The normalized likelihoods:



For A: L_A/(L_A+L_B) = 0.8050



For B: L_B/(L_A+L_B) = 0.1950
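
The same computation can be wrapped in a small function and applied to both rows; a sketch (again assuming only the guessed parameters 0.6 and 0.5; the function name is my own):

from math import comb

def normalized_likelihoods(heads, tails, theta_A=0.6, theta_B=0.5):
    # Binomial likelihood of this toss record under each coin's current guess
    n = heads + tails
    L_A = comb(n, heads) * theta_A**heads * (1 - theta_A)**tails
    L_B = comb(n, heads) * theta_B**heads * (1 - theta_B)**tails
    return L_A / (L_A + L_B), L_B / (L_A + L_B)

print(normalized_likelihoods(5, 5))  # approx (0.4491, 0.5509)
print(normalized_likelihoods(9, 1))  # approx (0.8050, 0.1950)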
