pricetheoryeconomist comments on Beauty quips, "I'd shut up and multiply!" - Less Wrong

6 Post author: neq1 07 May 2010 02:34PM




Comment author: pricetheoryeconomist 09 May 2010 01:39:53PM, 4 points

A reasonable idea for this and other problems that don't seem to suffer from ugly asymptotics would simply be to test it mechanically.

That is to say, it may be more efficient, requiring less brain power, to believe the results of repeated simulations. After walking through the Monty Hall tree and the statistics with people who couldn't really understand either, only to watch them end up believing the results of a simulation whose code is straightforward to read, I advocate this method: empirical verification over intuition or mathematics that are fallible (not because they contain a contradiction, but because your own understanding of them is fallible).
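The commenter doesn't post code, but a minimal sketch of the kind of Monty Hall simulation being described might look like this in Python (the function names and structure are my own, chosen for readability):

```python
import random

def monty_hall_trial(switch):
    """One round: car placed at random, player picks door 0, host opens a goat door."""
    car = random.randrange(3)
    pick = 0
    # The host opens a door that is neither the player's pick nor the car.
    opened = next(d for d in (1, 2) if d != car)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in (0, 1, 2) if d not in (pick, opened))
    return pick == car

def win_rate(switch, trials=100_000):
    """Estimate the probability of winning under the given strategy."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
```

Running `win_rate(True)` converges on roughly 2/3 and `win_rate(False)` on roughly 1/3, which is exactly the result people resist when it is derived from the probability tree alone.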

Comment author: Morendil 09 May 2010 03:25:53PM, 2 points

This is an interesting idea, that appeals to me owing to my earlier angle of attack on intuitions about "subjective anticipation".

The question then becomes, how would we program a robot to answer the kind of question that was asked of Sleeping Beauty?

This comment suggests one concrete way of operationalizing the term "credence". It could be a wrong way, but at least it is a concrete suggestion, something I think is lacking in other parts of this discussion. What is our criterion for judging either answer a "wrong" answer? More specifically still, how do we distinguish between a robot correctly programmed to answer this kind of question, and one that is buggy?

As in the robot-and-copying example, I suspect that which of 1/2 or 1/3 is the "correct" answer in fact depends on what (heretofore implicit) goals, epistemic or instrumental, we decide to program the robot to have.
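One concrete way to see this dependence on goals is to simulate the Sleeping Beauty setup and score the coin-guess two different ways. This is my own illustrative sketch, not anything Morendil proposes; the two scoring rules below are one possible operationalization of "per experiment" versus "per awakening" credence:

```python
import random

def simulate(n_experiments=100_000):
    """Run Sleeping Beauty experiments: heads -> one awakening, tails -> two."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        wakes = 1 if heads else 2
        total_awakenings += wakes
        if heads:
            heads_experiments += 1
            heads_awakenings += 1
    # Scored once per experiment, the frequency of heads is ~1/2.
    per_experiment = heads_experiments / n_experiments
    # Scored once per awakening, the frequency of heads is ~1/3.
    per_awakening = heads_awakenings / total_awakenings
    return per_experiment, per_awakening
```

Both numbers come from the same simulated history; which one counts as the robot's "correct" credence is settled only by deciding which scoring rule it is supposed to optimize.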

Comment author: thomblake 10 May 2010 02:21:26PM, 2 points

As in the robot-and-copying example, I suspect that which of 1/2 or 1/3 is the "correct" answer in fact depends on what (heretofore implicit) goals, epistemic or instrumental, we decide to program the robot to have.

And I think this is roughly equivalent to the suggestion that the payoff matters.

Comment author: casebash 09 January 2016 02:44:41PM, 0 points

Depending on what you're testing, and given a decent level of maths ability, empirics doesn't help you here.