TheOtherDave comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

Post author: Kaj_Sotala 26 December 2010 11:21AM




Comment author: [deleted] 28 December 2010 09:48:15PM -2 points

They have worked out this math, and it's available in most of their promotional stuff that I've seen. Their argument is essentially "instead of operating on the level of individuals, we will either save all of humanity, present and future, or not." And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it's a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
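The arithmetic behind this comparison can be sketched in a few lines. All numbers here are hypothetical assumptions for the sake of the argument (including the marginal probability `p_marginal`), not SIAI's actual estimates:

```python
# Illustrative expected-value comparison (hypothetical numbers only,
# not anyone's actual estimates).
population = 7_000_000_000        # present lives, as in the comment above
p_marginal = 1 / population       # assumed extra chance of success per $500
ev_donation = p_marginal * population  # expected lives saved by the donation
ev_direct = 1                          # one guaranteed life via direct aid

# Under these assumptions the two options break even in expectation;
# any p_marginal above 1/population tips the comparison toward the donation.
print(ev_donation, ev_direct)
```

The entire argument turns on whether a number like `p_marginal` is a meaningful estimate at all, which is exactly what the replies below dispute.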

Their logic is unsound, due to the arbitrary premise; their argument bears a striking resemblance to Pascal's Wager. Pascal argued that if belief in God provided even the most minuscule increase in the likelihood of being heaven-bound, worship was prudent in light of heaven's infinite rewards. One of the argument's fatal flaws is that there is no reason to think worshipping this god will avoid reprisals by the real god, or any number of equally improbable alternative outcomes.

The Singularity Institute imputes only finite utiles, but the flaw is the same. It could just as easily come to pass that the Institute's activities make matters worse. They aren't entitled to assume their efforts to control matters won't have effects the reverse of the ones intended, any more than Pascal had the right to assume worshipping this god isn't precisely what will send one to hell. Just as we can't know god's nature by merely postulating his possible existence, we can't know that the minuscule effects don't run the other way. There's no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than one obtained by postulating reverse minuscule effects.

When the only reason an expectation seems to have any probability at all is its extreme tininess, the reverse outcome must be granted the same benefit, and the two cancel out.
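The cancellation claim above can be made concrete with a toy calculation. The probability and payoff below are illustrative assumptions chosen to match the figures earlier in the comment:

```python
# Toy version of the cancellation argument: if a tiny probability p of a
# vast payoff U is no better supported than the same probability of the
# reverse outcome -U, the two expected values sum to zero.
p = 1e-10           # hypothetical minuscule probability
U = 7_000_000_000   # hypothetical vast payoff (lives)

ev_hopeful = p * U    # expected gain from the hoped-for outcome
ev_reverse = p * -U   # expected loss from the equally plausible reverse
print(ev_hopeful + ev_reverse)  # → 0.0: the symmetric terms cancel exactly
```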

Comment author: TheOtherDave 28 December 2010 10:47:21PM 7 points

I usually think about this not as expected-utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but as those calculations being altogether unreliable, because our numerical intuitions are unreliable outside the ranges we're calibrated for.

For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say "well, geez, 1e-10 is such a tiny number, why not?"

Which demonstrates that my brain isn't calibrated to work with numbers in that range, which is no surprise.

So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.