orthonormal comments on Open Thread: March 2010 - Less Wrong

Post author: AdeleneDawner 01 March 2010 09:25AM


Comment author: orthonormal 08 March 2010 10:11:34PM *  2 points

Let

  • AN = "Grandma calls on Thursday of week N",
  • BN = "Grandma comes on Friday of week N".

A toy version of my prior could be reasonably close to the following:

P(AN)=p, P(AN,BN)=pq, P(~AN,BN)=(1-p)r

where

  • the distribution of p is uniform on [0,1]
  • the distribution of q is concentrated near 1 (distribution proportional to f(x)=x on [0,1], let's say)
  • the distribution of r is concentrated near 0 (distribution proportional to f(x)=1-x on [0,1], let's say)
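These three priors are easy to sample from with inverse-CDF sampling (the density 2x has CDF x², the density 2(1−x) has CDF 1−(1−x)²). A minimal sketch, assuming NumPy; the sample size and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

p = rng.random(n)                  # uniform density on [0, 1]
q = np.sqrt(rng.random(n))         # density 2x: CDF x^2, inverse sqrt(u)
r = 1 - np.sqrt(1 - rng.random(n)) # density 2(1-x): CDF 1-(1-x)^2

# Prior means: E[p] = 1/2, E[q] = 2/3, E[r] = 1/3
print(p.mean(), q.mean(), r.mean())
```

The means confirm that q is concentrated near 1 and r near 0, as intended.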

Thus, once we normalize, the joint probability density of (p,q,r) is 4q(1-r). Now, how does the evidence affect this? The likelihood of (A1,B1,A2,B2) is (pq)^2, so after multiplying and renormalizing, the joint posterior density is 24p^2q^3(1-r). The marginal posterior for p is then 3p^2, so P(~A3|A1,B1,A2,B2)=E[1-p]=1/4 and P(~A3,B3|A1,B1,A2,B2)=E[(1-p)r]=1/12. I thus wind up with a 1 in 3 chance that Grandma will come on Friday, if I've done all my math correctly.
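The update above can be sanity-checked numerically with a grid approximation of the posterior over the unit cube (a sketch assuming NumPy; the grid size and midpoint rule are my choices, not part of the model):

```python
import numpy as np

# Midpoint grid over [0, 1] for each of p, q, r.
n = 200
g = (np.arange(n) + 0.5) / n
p, q, r = np.meshgrid(g, g, g, indexing="ij")

prior = 4 * q * (1 - r)        # joint prior density 4q(1-r)
like = (p * q) ** 2            # likelihood of (A1, B1, A2, B2)
post = prior * like
post /= post.sum()             # normalize (equal cell volumes cancel)

p_notA3 = (post * (1 - p)).sum()          # P(~A3 | evidence) ~ 1/4
p_notA3_B3 = (post * (1 - p) * r).sum()   # P(~A3, B3 | evidence) ~ 1/12
print(p_notA3, p_notA3_B3, p_notA3_B3 / p_notA3)
```

The last printed value is P(B3|~A3, evidence), which comes out very close to 1/3.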

Of course, this is all just a toy model, as I shouldn't assume things like "different weeks are independent", but to first order, this looks like the right behavior.

Comment author: orthonormal 09 March 2010 08:42:33AM 1 point

I should have realized this sooner: P(B3|~A3) is just the posterior expectation of r, and since the likelihood of (A1,B1,A2,B2) doesn't involve r at all, r's distribution isn't updated. So of course the answer according to this model should be 1/3: the expected value of r under the prior distribution.
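This shortcut can be confirmed on the same kind of grid: the posterior expectation of r matches the prior expectation, 1/3, because the likelihood factor (pq)^2 is constant in r (again a sketch assuming NumPy):

```python
import numpy as np

n = 200
g = (np.arange(n) + 0.5) / n
p, q, r = np.meshgrid(g, g, g, indexing="ij")

post = 4 * q * (1 - r) * (p * q) ** 2   # prior times likelihood
post /= post.sum()

E_r_post = (post * r).sum()             # posterior E[r]
E_r_prior = (2 * (1 - g) * g).sum() / n # prior E[r]: integral of r * 2(1-r)
print(E_r_post, E_r_prior)              # both ~ 1/3
```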

Still, it was a good exercise to actually work out a Bayesian update on a continuous prior. I suggest everyone try it for themselves at least once!