Wei_Dai comments on Advancing Certainty - Less Wrong
I'd like to recast the problem this way: we know we're running on error-prone hardware, but standard probability theory assumes that we're running on errorless hardware, and seems to fail, at least in some situations, when running on error-prone hardware. What is the right probability theory and/or decision theory for running on error-prone hardware?
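One toy way to see why errorless-hardware probability theory breaks down for us: a conclusion computed to extreme confidence should be discounted by the chance the computation itself was botched. The sketch below is purely illustrative (the function name, the error rate, and the choice of a fixed fallback prior are my assumptions, not anything from the comment):

```python
def error_adjusted(p_computed, p_error, p_fallback=0.5):
    """Mix the computed posterior with a fallback prior, weighted by the
    probability that the computation itself went wrong (illustrative only)."""
    return (1 - p_error) * p_computed + p_error * p_fallback

# A 99.99%-confident conclusion, reached by hardware that errs 1% of the
# time, should not be trusted at 99.99%.
print(error_adjusted(0.9999, 0.01))  # 0.994901
```

Note the asymmetry this creates: error-prone hardware caps how confident the agent can coherently be, no matter how strong the in-theory argument looks.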
ETA: Consider ciphergoth's example:
This kind of reasoning can be derived from standard probability theory and would work fine on someone running errorless hardware. But it doesn't work for us.
We need to investigate this problem systematically, not just trade arguments about whether we're too confident or not confident enough, pushing the public consensus back and forth. The right answer might look completely different: perhaps we need different kinds or multiple levels of confidence, or upper and lower bounds on probability estimates.
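To make "upper and lower bounds on probability estimates" concrete, here is a minimal sketch of interval updating: belief is an interval rather than a point, and a Bayes update with an uncertain likelihood ratio maps the interval's endpoints through the extreme ratios. The function names and numbers are hypothetical, and this is only one of several formalisms (imprecise probability, Dempster-Shafer) the comment's suggestion could point at:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

def interval_update(p_low, p_high, lr_low, lr_high):
    """Bayes-update a probability interval [p_low, p_high] on evidence whose
    likelihood ratio is itself only known to lie in [lr_low, lr_high]
    (illustrative sketch, not a worked-out theory)."""
    return prob(odds(p_low) * lr_low), prob(odds(p_high) * lr_high)

# Start with belief somewhere in [0.4, 0.6]; the evidence is worth a
# likelihood ratio between 2 and 4.
lo, hi = interval_update(0.4, 0.6, 2.0, 4.0)
print(lo, hi)  # roughly 0.571 and 0.857
```

Notice that the interval tends to widen as uncertain evidence accumulates, which is one way of formalizing the intuition that error-prone agents should not report razor-sharp posteriors.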
I think that standard probability theory assumes a known ontology and infinite computing power. Ideally we should also be able to produce a probability theory for agents facing the constraints any realistic agent must face, such as bounded computation, but without the special constraints, such as hardware errors, that we happen to have.