Wei_Dai comments on Advancing Certainty - Less Wrong

Post author: komponisto 18 January 2010 09:51AM


Comments (108)

You are viewing a single comment's thread.

Comment author: Wei_Dai 18 January 2010 11:35:48PM

I'd like to recast the problem this way: we know we're running on error-prone hardware, but standard probability theory assumes that we're running on errorless hardware, and seems to fail, at least in some situations, when running on error-prone hardware. What is the right probability theory and/or decision theory for running on error-prone hardware?

ETA: Consider ciphergoth's example:

do you think you could make a million statements along the lines of "I will not win the lottery" and not be wrong once? If not, you can't justify not playing the lottery, can you?

This kind of reasoning can be derived from standard probability theory and would work fine for someone running on errorless hardware. But it doesn't work for us.
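As a rough illustration of why (the model and the numbers here are my own assumptions, not from the comment): suppose that with some small probability the hardware corrupts a judgment entirely, so the claimed confidence no longer applies to that judgment. Then the hardware error rate, not the claimed confidence, dominates the real error rate:

```python
def effective_error(claimed_error, hardware_error):
    """Toy model: with probability `hardware_error` the judgment
    process itself fails and the resulting statement is wrong half
    the time; otherwise the claimed error rate applies."""
    return (1 - hardware_error) * claimed_error + hardware_error * 0.5

# Claimed 1-in-a-million error rate, but hardware fails 1 in 10,000 times:
e = effective_error(1e-6, 1e-4)
expected_wrong = 1_000_000 * e  # roughly 51 wrong statements per million
```

Under these (made-up) numbers, the million "I will not win the lottery" statements would contain dozens of errors, even though each one was made at a claimed confidence of one in a million.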

We need to investigate this problem systematically, not just argue back and forth about whether we're too confident or not confident enough, trying to push the public consensus one way or the other. The right answer might look completely different: perhaps we need different kinds or multiple levels of confidence, or upper and lower bounds on probability estimates.
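One existing formalism along the "upper and lower bounds" line is imprecise probability, where each event gets an interval rather than a point estimate. As a minimal sketch (my own illustration, not something proposed in the thread), the Fréchet inequalities give the tightest bounds on a conjunction obtainable from bounds on its parts alone:

```python
def and_bounds(a_lo, a_hi, b_lo, b_hi):
    """Fréchet bounds on P(A and B), given interval estimates for
    P(A) and P(B) and no assumption about how A and B correlate."""
    lower = max(0.0, a_lo + b_lo - 1.0)
    upper = min(a_hi, b_hi)
    return lower, upper

# Two statements each believed with probability somewhere in [0.9, 1.0]:
# jointly they are only guaranteed to lie in [0.8, 1.0].
```

Intervals like these degrade gracefully as statements are combined, which is one way an error-prone reasoner could avoid the false precision of a single point estimate.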

Comment author: MichaelVassar 19 January 2010 05:17:29AM

I think that standard probability theory assumes a known ontology and infinite computing power. Ideally, we should also be able to produce a probability theory for agents subject to realistic constraints in general, not just the particular constraints we happen to have.