timtyler comments on Should we discount extraordinary implications? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The verbal offer isn't actually relevant to the problem, it's just there to dramatize the situation.
Please formulate that maxim precisely enough to program into an AI in a way that solves the problem. The best formulation we currently have, i.e., Bayesianism with quasi-Solomonoff priors, doesn't solve it.
The idea of devoting more resources to investigating claims when they involve potential costs involves decision theory rather than mere prediction. However, vanilla reinforcement learning should handle this OK. Agents that don't investigate extraordinary claims will be exploited and suffer - and a conventional reinforcement learning agent can be expected to pick up on this just fine. Of course I can't supply source code - or else we would be done - but that's the general idea.
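The "general idea" above can be sketched as a toy two-action bandit. This is a minimal illustration, not the commenter's implementation: the payoffs are invented, with a small fixed cost for investigating a claim and a rare large loss (exploitation) for ignoring one. A standard sample-average value learner then comes to prefer investigating.

```python
# Toy sketch (illustrative payoffs only): an agent repeatedly chooses
# whether to investigate an extraordinary claim before acting on it.
import random

random.seed(0)

ACTIONS = ["investigate", "ignore"]

def payoff(action):
    """Investigating costs a little every time; ignoring is usually
    free but is occasionally exploited for a large loss."""
    if action == "investigate":
        return -1.0                                   # fixed cost of checking
    return -50.0 if random.random() < 0.1 else 0.0    # rare exploitation

q = {a: 0.0 for a in ACTIONS}        # action-value estimates
counts = {a: 0 for a in ACTIONS}     # times each action was tried
epsilon = 0.1                        # exploration rate

for _ in range(5000):
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=q.get)                   # greedy choice
    counts[a] += 1
    q[a] += (payoff(a) - q[a]) / counts[a]            # sample-average update

# True expected payoffs: investigate = -1, ignore = -5, so the learned
# values should rank "investigate" above "ignore".
print(q)
```

The point of the sketch is only that the exploitation penalty shows up in the agent's own experience, so no special anti-mugging rule is needed at this level; the hard part (which the reply below raises) is whether this scales to losses too rare or too large to sample.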
All claims involve decision theory in the sense that you're presumably going to act on them at some point.
Would these agents also learn to pick up pennies in front of steam rollers? In fact, falling for Pascal's mugging is just the extreme case of refusing to pick up pennies in front of a steam roller; the question is where you draw the line dividing the two.
That depends on its utility function.
The line (if any) is drawn as a consequence of specifying a utility function.
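The claim that the line falls out of the utility function can be made concrete with a small expected-utility calculation. This is an illustrative sketch with invented numbers, not anyone's proposed agent design: an agent with linear utility in money accepts a Pascal's-mugging-style gamble (tiny probability, astronomical payoff), while an agent whose utility is bounded declines it, so the "line" is indeed a consequence of the utility function chosen.

```python
# Sketch (hypothetical numbers): where an expected-utility maximizer
# draws the line depends on the shape of its utility function.
import math

def accepts(utility, p, gain, cost, wealth=0.0):
    """Accept a gamble iff its expected utility beats declining.
    With probability p the payoff arrives; the cost is paid either way."""
    eu_take = (p * utility(wealth + gain - cost)
               + (1 - p) * utility(wealth - cost))
    return eu_take > utility(wealth)

linear = lambda x: x                              # unbounded utility
bounded = lambda x: 1.0 - math.exp(-x / 1e6)      # gains saturate near 1

# Pascal's mugging: pay 5 for a 1e-20 chance at an astronomical payoff.
p_mug, gain_mug, cost_mug = 1e-20, 1e30, 5.0

print(accepts(linear, p_mug, gain_mug, cost_mug))   # linear utility pays up
print(accepts(bounded, p_mug, gain_mug, cost_mug))  # bounded utility declines
```

The same `accepts` machinery prices the penny-and-steamroller gamble: plug in a small gain, a tiny catastrophe probability, and a large loss, and the sign of the expected utility tells the agent whether to reach for the penny. Nothing here resolves which utility function one *ought* to have; it only shows that the dividing line is an output of that choice, not a separate maxim.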