Vaniver comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (369)
They're aware of this and have written about it. The argument is "just because something looks like a known fallacy doesn't mean it's fallacious." If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn't sound like Pascal's Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast to their method.
It's not clear to me that the downside is as easy to hit as the upside, and I think that's where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they're still ahead by 1e-6 in expectation. With Pascal's Wager, you don't have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether Friendliness is better than Unfriendliness. It's like, instead of picking Jesus over Buddha, praying to the set of all benevolent gods; there's still a chance a malevolent god is the one you end up with, but it's a better bet than picking solo (and you're screwed anyway if you get a malevolent god).
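To make the arithmetic explicit, here's a toy expected-value sketch of the comparison above. The probabilities are the comment's hypothetical figures, and the ±1 payoffs are an arbitrary normalization I'm assuming, not real estimates of anything:

```python
# Toy expected-value comparison using the comment's hypothetical numbers.
p_better = 2e-6   # assumed chance the intervention makes things better
p_worse = 1e-6    # assumed chance it makes things worse

value_better = 1.0    # normalized payoff of the good outcome (arbitrary)
value_worse = -1.0    # normalized payoff of the bad outcome (arbitrary)

expected_value = p_better * value_better + p_worse * value_worse
print(expected_value)  # 1e-06: net positive despite the downside risk
```

The point is only that an asymmetry in the probabilities (2e-6 vs. 1e-6) leaves a positive expectation even when the downside payoff is just as large as the upside.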
I agree with you that it's not clear that SIAI actually increases the chance of FAI occurring, but I think it more likely that any non-zero effect is positive than negative.