
Elo comments on Open Thread August 31 - September 6 - Less Wrong Discussion

5 Post author: Elo 30 August 2015 09:26PM



Comment author: Elo 02 September 2015 12:09:41AM 0 points [-]

Based on a simple coin flip; other games:

  • several coins;
  • scissors-paper-rock (and then iterated).

I am sure there are more small games that have a similarly "known" problem space.

Comment author: evand 02 September 2015 06:39:12PM 0 points [-]

What change would you make that results in multiple rounds being required?

For example, suppose each player flips multiple coins, and we then share probability estimates for "all coins heads" or "majority of coins heads", or expectations for the number of heads. In each case, the first time I share my summary, I am sharing information that tells the other player exactly what information I have (and vice versa). So we will agree exactly from the second round onwards.
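The "majority of coins heads" case above can be checked directly with a short script (a sketch; the helper name `p_other_heads_at_least` and the choice of 5 coins per player are mine). The map from my private head count to my first-round estimate is one-to-one, so announcing the estimate announces the head count itself:

```python
from math import comb

def p_other_heads_at_least(n, k):
    """P(the other player's n fair coins show at least k heads)."""
    return sum(comb(n, j) for j in range(max(k, 0), n + 1)) / 2 ** n

n = 5  # coins per player
# My first-round estimate of "majority of all 2n coins are heads"
# (i.e. at least n+1 heads total), for each possible private count h:
# I still need at least n+1-h heads from the other player's coins.
estimates = [p_other_heads_at_least(n, n + 1 - h) for h in range(n + 1)]
assert len(set(estimates)) == n + 1  # all distinct: the summary leaks h exactly
```

(The same check fails for "all coins heads": any tails in my hand forces my estimate to 0 regardless of how many, so that particular summary is less informative.)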

Comment author: Elo 02 September 2015 07:06:33PM *  0 points [-]

The example I was thinking of:

Each player flips 3 (or 10) coins of their own, giving them partial information about what the whole coin-space looks like. They present their 90% and 99% confidence intervals on there being more than 4 (or 9) heads. Repeat for round 2. (They could also make statements based on what they think the state of play is, and try to get to the answer before the other person — so perhaps statements that can be misleading?)

Not sure how easy it is for a human to tease out that information; maybe a computer could solve it, but not so much a human.

"I flipped 10 coins; my 90% confidence that there are at least 7 each of heads and tails is 90%. My 99% confidence is 60%."

Or a confidence for "at least 10 heads and 6 tails", etc.

Comment author: evand 02 September 2015 07:39:43PM 0 points [-]

Here's how that goes. I flip 3 coins. Say I get 2 heads. My probability estimate for "there are 4+ heads total" is now 4/8 (the probability that 2 or 3 of your coins are heads). For the full set of outcomes I can have, the options are: (0H, 0/8) (1H, 1/8) (2H, 4/8) (3H, 7/8). You perform the same reasoning. Then we each share our probability estimates with the other. Say that on the first round, we each share estimates of 50%. Then we can each deduce that the other saw exactly two heads, and on the second round (and forever after) both our estimates become 100%. For all possible outcomes, my first round probability tells you exactly how many heads I flipped, and vice versa; as soon as we share probabilities once, we both know the answer and agree.
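The arithmetic above can be reproduced with a few lines of Python (a sketch; the helper name `estimate` is mine):

```python
from math import comb

def estimate(n, my_heads, threshold):
    """My probability that the total head count reaches `threshold`,
    given I saw `my_heads` of my n coins and the other player's
    n fair coins are still unknown to me."""
    need = threshold - my_heads  # heads still required from the other player
    return sum(comb(n, k) for k in range(max(need, 0), n + 1)) / 2 ** n

# The four possible first-round estimates with 3 coins each, threshold 4:
print([estimate(3, h, 4) for h in range(4)])  # -> [0.0, 0.125, 0.5, 0.875]
```

These are exactly the 0/8, 1/8, 4/8, 7/8 values above: four distinct estimates for four possible observations, so the first announcement fully reveals the flip.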

(Also, you're not using "confidence interval" in the correct manner. A confidence interval is defined over an expectation, not a posterior probability.)

I still don't see any version of this that's simpler than Finney's that actually makes use of multiple rounds, and when I fix the math on Finney's version it's decidedly not simple.

Comment author: Elo 02 September 2015 11:30:30PM 0 points [-]

My version of making this work would be choosing to share only limited information.

I.e. estimates of 33% heads, or estimates of >10% heads and >80% tails — where the statements don't sum to 100%, making it harder to work out the "unknown space" in the middle. That is, limiting the prediction set to partial information. Playing with multiple people should also make it more complicated, as should an optional number of coin flips (chosen by the person flipping and unknown to the others, within parameters).