
FAWS comments on Probability puzzle - Less Wrong Discussion

7 Post author: malthrin 28 November 2011 09:33PM



Comment author: faul_sname 29 November 2011 05:26:26AM 0 points [-]

This seems to be a good opportunity to use a trick I figured out for approximating the probability of a binary event (I'm sure I'm not the first to discover it). The trick goes as follows: start with a probability distribution P(x) = 1. Each time the event happens, multiply P(x) by x; each time it doesn't, multiply by 1 - x. After doing this for all your data, renormalize so that the area beneath the curve is 1: you can do that by dividing your function by its integral from 0 to 1. If you have a range of outcomes, you can multiply by the function representing those outcomes to get expected utility. In this case, the function representing the outcomes can be modeled by O(x) = 4x - 3. If we assume the coin is biased towards heads (if it's biased towards tails, just swap H and T in the following reasoning), we find through some math and puzzling that the expected value for a sequence of tosses is (N - 4T - 2)/(N + 2), where H is the number of heads, T is the number of tails, and N = H + T is the total number of tosses. The probability of reaching any state is 1/(N + 1). For the first two flips, your expected value is negative (-$1 and -$0.33 for the first and second respectively) no matter what happens, so we can count these as sunk costs. So our expected value is as follows here:

It took me 19 flips to come up with a positive expected value, though my answer to the original question is "as many as the person offering this can be convinced to give me".
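The procedure in the comment above can be sketched numerically. This is a minimal illustration, not the commenter's actual code: the grid resolution, helper names, and example flip sequence are all my own choices.

```python
# Track the (unnormalized) posterior density over the coin's bias x on a grid.
# Multiply by x on each heads, by (1 - x) on each tails, then renormalize so
# the area under the curve is 1, as described in the comment.

def update(density, xs, heads):
    # Multiply pointwise by the likelihood of the observed flip.
    return [p * (x if heads else 1 - x) for p, x in zip(density, xs)]

def normalize(density, dx):
    # Divide by the integral from 0 to 1 (midpoint Riemann sum).
    area = sum(density) * dx
    return [p / area for p in density]

def expected_utility(density, xs, dx):
    # Integrate the payoff function from the comment, O(x) = 4x - 3,
    # against the posterior density.
    return sum(p * (4 * x - 3) for p, x in zip(density, xs)) * dx

n = 10_000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]   # midpoints, avoiding the endpoints
density = [1.0] * n                        # uniform prior: P(x) = 1

flips = [True, True, True, False, True]    # example data: H H H T H
for h in flips:
    density = normalize(update(density, xs, h), dx)

H = sum(flips)
T = len(flips) - H
N = H + T
print(expected_utility(density, xs, dx))   # ≈ (N - 4T - 2)/(N + 2)
```

Running this on any flip sequence reproduces the closed-form expected value (N - 4T - 2)/(N + 2) up to the discretization error of the grid.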

Comment author: FAWS 29 November 2011 04:54:58PM *  2 points [-]

This seems to be a good opportunity to use a trick I figured out for approximating the probability of a binary event

That's just Bayesian updating. P(H|E)=P(H)*P(E|H)/P(E)

P(x)=P(H), x=P(E|H), and the integral of the previous step is P(E), since that's the whole point of your probability function.
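This identification gives the trick a closed form: starting from a uniform (Beta(1,1)) prior, H heads and T tails yield a Beta(H + 1, T + 1) posterior with mean (H + 1)/(N + 2), and by linearity the expected payoff of O(x) = 4x - 3 is 4·(H + 1)/(N + 2) - 3 = (N - 4T - 2)/(N + 2). A quick check using exact rational arithmetic (the helper names are mine, not from the thread):

```python
from fractions import Fraction

def posterior_mean(H, T):
    # Uniform prior + H heads, T tails -> Beta(H + 1, T + 1) posterior,
    # whose mean is (H + 1) / (H + T + 2).
    return Fraction(H + 1, H + T + 2)

def expected_value(H, T):
    # Expected payoff of O(x) = 4x - 3 under the posterior; linearity of
    # expectation reduces it to 4 * E[x] - 3.
    return 4 * posterior_mean(H, T) - 3

# Matches the closed form from the parent comment for several (H, T) counts.
for H, T in [(0, 0), (1, 0), (2, 0), (15, 4)]:
    N = H + T
    assert expected_value(H, T) == Fraction(N - 4 * T - 2, N + 2)
```

Note that (H, T) = (15, 4), i.e. 19 flips, is the first point in a heads-heavy run where the expected value turns positive, consistent with the parent comment's report.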

Comment author: faul_sname 29 November 2011 05:57:36PM 0 points [-]

So that's why it works... I knew it probably tied into Bayes, but I didn't know exactly how.