timtyler comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong

10 Post author: TimFreeman 07 June 2011 03:06PM


Comment author: XiXiDu 07 June 2011 04:57:35PM, 5 points

This problem is the source of most of the headaches that LW causes me, and I appreciate any attention it receives.

Note that when GiveWell, a charity evaluation service, interviewed the SIAI, the interviewer suggested that supporting the SIAI could be seen as a sort of Pascal's Mugging:

GiveWell: OK. Well that's where I stand - I accept a lot of the controversial premises of your mission, but I'm a pretty long way from sold that you have the right team or the right approach. Now some have argued to me that I don't need to be sold - that even at an infinitesimal probability of success, your project is worthwhile. I see that as a Pascal's Mugging and don't accept it; I wouldn't endorse your project unless it passed the basic hurdles of credibility and workable approach as well as potentially astronomically beneficial goal.

Could this be part of the reason why Eliezer Yudkowsky wrote that the SIAI is only a worthwhile charity if the odds of being wiped out by AI are larger than 1%?

And I don’t think the odds of us being wiped out by badly done AI are small. I think they’re easily larger than 10%. And if you can carry a qualitative argument that the probability is under, say, 1%, then that means AI is probably the wrong use of marginal resources – not because global warming is more important, of course, but because other ignored existential risks like nanotech would be more important. I am not trying to play burden-of-proof tennis. If the chances are under 1%, that’s low enough, we’ll drop the AI business from consideration until everything more realistic has been handled.

Even mathematicians like John Baez are troubled by the unbounded maximization of expected utility.

Could it be that we do not have bounded utility but rather only accept a limited degree of uncertainty?
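The divergence behind the post's title can be made concrete with a short sketch (my own illustration, not from either comment; the bound C = 1000 in u(x) = x/(x + C) is an arbitrary choice): with payoffs of 2^k at probability 2^-k, linear utility makes every term of the expected-utility sum equal to 1, so the sum grows without limit, while any utility bounded above makes the sum converge.

```python
# Illustration (not from the thread): the St. Petersburg lottery
# pays 2^k with probability 2^(-k) for k = 1, 2, 3, ...
# Under unbounded linear utility every term contributes exactly 1,
# so the expected utility diverges with the number of terms; under a
# bounded utility such as u(x) = x / (x + C) the terms are summable.

def expected_utility(u, terms=60):
    """Partial sum of sum_{k=1..terms} 2^-k * u(2^k)."""
    return sum(2.0 ** -k * u(2.0 ** k) for k in range(1, terms + 1))

linear = lambda x: x                    # unbounded: diverges as terms grow
bounded = lambda x: x / (x + 1000.0)    # bounded above by 1: converges

print(expected_utility(linear, 30))     # 30.0 -- grows without limit
print(expected_utility(linear, 60))     # 60.0
print(expected_utility(bounded, 60))    # finite, already converged
print(expected_utility(bounded, 200))   # same value to machine precision
```

Doubling the payoff schedule or the probability decay changes nothing essential: only the boundedness of u decides whether the sum converges.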

Comment author: timtyler 07 June 2011 06:03:27PM, 1 point

The SIAI seems to be progressing slowly. It is difficult to see how their "trust us" approach will get anywhere. The plan of writing code in secret in a basement looks pretty crazy to me. On the more positive side, they do have some money and some attention.

...but overall, why consider the possibility of the SIAI taking over the world? That does not look like a very likely outcome.