Dagon comments on Updating, part 1: When can you change your mind? The binary model - Less Wrong

Post author: PhilGoetz 13 May 2010 05:55PM


Comment author: Dagon 13 May 2010 08:38:00PM 2 points [-]

Is there any real-group analog to the answer to a problem becoming mutual knowledge to the entire group? I can't think of a single disagreement here EVER to which the answer has been revealed. Further, I don't expect much revelation until Omega actually shows up.

Comment author: thomblake 13 May 2010 09:25:11PM 2 points [-]

Drawing Two Aces might count.

A bunch of people got the wrong answer, and the correct answer was presumed to run against naive intuition for anyone who doesn't know how to do the math. But any doubters understood the right answer once it was pointed out.

Comment author: PhilGoetz 13 May 2010 10:57:50PM 1 point [-]

Thanks for recollecting that. That was a case where someone wrote a program to compute the answer, which could be taken as definitive.
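A program along those lines can be a one-screen brute-force enumeration. The sketch below assumes the four-card version of the puzzle (two aces plus two other cards, draw two); the card names and variable names are illustrative, not the actual program referenced here:

```python
from itertools import combinations

# Assumed four-card deck: ace of spades, ace of hearts, two non-aces.
deck = ["As", "Ah", "2s", "2h"]
hands = list(combinations(deck, 2))  # all 6 possible two-card hands

# P(both aces | hand contains at least one ace)
with_ace = [h for h in hands if "As" in h or "Ah" in h]
p_any = sum(h == ("As", "Ah") for h in with_ace) / len(with_ace)

# P(both aces | hand contains the ace of spades)
with_spade = [h for h in hands if "As" in h]
p_spade = sum(h == ("As", "Ah") for h in with_spade) / len(with_spade)

print(p_any)    # 1/5 = 0.2
print(p_spade)  # 1/3 ≈ 0.333
```

Exhaustive enumeration like this is why the result could be taken as definitive: there are only six hands to check, so no probability intuition is needed.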

I just counted up the first answers people gave, and their initial answers were 29 to 3 in favor of the correct answer. So there wasn't much disagreement to begin with.

Comment author: Dagon 14 May 2010 02:50:41PM 0 points [-]

I don't think that qualified. There was no revelation, just an agreement on process and on result. That was not a question analogous to PhilGoetz's model, where some agents had more accurate estimates and the result is used to determine how accurate they might be on other topics.

Comment author: PhilGoetz 13 May 2010 09:11:00PM *  1 point [-]

I can't think of a single disagreement here to which the answer has been revealed, either. But - spoiler alert - having the answers to numerous problems revealed to at least some of the agents is the only factor I've found that can get the simulated agents to improve their beliefs.

It's difficult to apply the simulation results to people, who can, in theory, be convinced of something by following a logical argument. The reasons why I think we can model that with a simple per-person accuracy level might need a post of their own.
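A minimal sketch of what a per-person accuracy model looks like (the agent names, numbers, and code here are illustrative assumptions, not the actual simulation): each agent answers binary questions correctly with a fixed probability, and once answers are revealed, observers can estimate each agent's accuracy from its track record.

```python
import random

random.seed(0)

def answer(truth, accuracy, rng=random):
    """Correct with probability `accuracy`, wrong otherwise."""
    return truth if rng.random() < accuracy else not truth

accuracies = {"A": 0.6, "B": 0.9}                       # hypothetical agents
truths = [random.random() < 0.5 for _ in range(10000)]  # hidden answers

# Once the answers are revealed, each agent's accuracy can be estimated
# empirically and used to weight it on future questions.
observed = {
    name: sum(answer(t, acc) == t for t in truths) / len(truths)
    for name, acc in accuracies.items()
}
print(observed)  # observed rates should be close to 0.6 and 0.9
```

The point of the simplification is that "being convinced by a logical argument" gets collapsed into a single number per person; whether that collapse is justified is the part that might need a post of its own.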

Comment author: PhilGoetz 14 May 2010 03:29:08PM 1 point [-]

"having the answers to numerous problems revealed to at least some of the agents is the only factor I've found that can get the simulated agents to improve their beliefs."

Oops - that statement was based on a bug in my program.

Comment author: RobinZ 13 May 2010 09:35:59PM *  0 points [-]

The usual situation does involve agents' answers drifting differentially toward "true" as time passes - your model is extremely simplified, but [edit: may be] accurate enough for the purpose.