News feed
The News feed shows page updates, schedule changes, and posts from teachers (including posts aimed at students registered in earlier course rounds).
December 2015
October 2013
Note: for those who have their course project presentations on Thursday 17 October 2013, the location has been changed to the following:
Location:
Conference Room SIP (Floor 3)
Osquldas väg 10, 100 44 Stockholm
Note: for those who have their course project presentations on Friday 18 October 2013, the location has been changed to the following:
Location:
Conference Room SIP (Floor 3)
Osquldas väg 10, 100 44 Stockholm
October 2012
Bayesian learning:
My understanding of probabilities is that:
P[A|B] = integral{ P[A|C,B] * P[C|B] dC } (integration with respect to C)
But expression (8.10) claims that:
P[A|B] = integral{ P[A|C] * P[C|B] dC } (where A = X_{T+1}, B = x_obs, C = W)
Is (8.10) a special case?
If so, I would guess that after training, the information from all the observations x_obs is already contained in W, so conditioning on x_obs in P[A|C,B] is redundant and can be dropped; i.e. P[A|C,B] = P[A|C].
Please help, thanks.
Hello Oussama,
Your understanding is correct: Equation (8.10) is a special case, as you suggest.
Specifically, all observations X_t are conditionally independent given W = w. Hence P[X_{T+1} | W=w, x_obs] = P[X_{T+1} | W=w] in this case.
Best,
Gustav
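The equivalence above is easy to check numerically. The following is a minimal sketch with a hypothetical discrete toy model (binary parameter W, binary observations, distributions chosen arbitrarily for illustration), where the joint factorizes as P(w, x_obs, x_next) = P(w) P(x_obs|w) P(x_next|w), so x_obs and x_next are conditionally independent given w. It compares direct marginalization of the joint against the (8.10)-style formula sum_w P(x_next|w) P(w|x_obs):

```python
import numpy as np

# Hypothetical toy model (numbers chosen for illustration only).
p_w = np.array([0.3, 0.7])            # prior P(W = w)
p_x_given_w = np.array([[0.9, 0.1],   # P(X = x | W = 0)
                        [0.2, 0.8]])  # P(X = x | W = 1)

# Full joint over (w, x_obs, x_next); conditional independence
# given w is built in by the factorization.
joint = (p_w[:, None, None]
         * p_x_given_w[:, :, None]
         * p_x_given_w[:, None, :])

x_obs = 1  # an observed value

# Direct computation: P(x_next | x_obs) by marginalizing w out of the joint.
p_next_direct = joint[:, x_obs, :].sum(axis=0)
p_next_direct /= p_next_direct.sum()

# Eq. (8.10)-style: sum over w of P(x_next | w) * P(w | x_obs).
p_w_post = p_w * p_x_given_w[:, x_obs]   # unnormalized posterior over w
p_w_post /= p_w_post.sum()
p_next_810 = p_w_post @ p_x_given_w

print(np.allclose(p_next_direct, p_next_810))  # True
```

The two results agree exactly here because the conditional independence holds by construction; with a model where x_next depended on x_obs even given w, the simplified formula would no longer apply.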