TimFreeman comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This problem is the cause of most of the headaches LW gives me, and I appreciate any attention it receives.
Note that when GiveWell, a charity evaluation service, interviewed the SIAI, it raised the possibility that donating to the SIAI could itself be considered a sort of Pascal's Mugging:
Could this be part of the reason why Eliezer Yudkowsky wrote that the SIAI is only a worthwhile charity if the odds of being wiped out by AI are greater than 1%?
Even mathematicians like John Baez are troubled by the unbounded maximization of expected utility.
Could it be that we do not have bounded utility but rather only accept a limited degree of uncertainty?
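To make the issue concrete, here is a small sketch (not from the comment itself; the `scale` parameter and function names are illustrative) of why unbounded expected-utility maximization breaks on the St. Petersburg lottery: with linear (unbounded) utility, each term of the expectation contributes exactly 1, so the partial sums grow without limit, while a bounded utility function makes the same series converge.

```python
# St. Petersburg lottery: with probability 2**-k you win 2**k, for k = 1, 2, ...
# Under linear utility the expected utility diverges; under a bounded utility
# such as u(x) = 1 - 2**(-x/scale) it converges to a value below 1.

def expected_utility(utility, max_rounds=200):
    """Partial sum of sum_{k>=1} 2**-k * utility(2**k), truncated at max_rounds."""
    return sum(2.0 ** -k * utility(2.0 ** k) for k in range(1, max_rounds + 1))

# Unbounded (linear) utility: every term is 2**-k * 2**k = 1,
# so the partial sum equals max_rounds and diverges as max_rounds grows.
linear = expected_utility(lambda x: x)

# Bounded utility (illustrative choice, capped at 1): the series converges.
bounded = expected_utility(lambda x: 1 - 2.0 ** (-x / 1000.0))

print(linear)   # 200.0 — grows linearly with max_rounds, i.e. diverges
print(bounded)  # strictly between 0 and 1, regardless of max_rounds
```

This doesn't settle whether our utility *is* bounded, but it shows why boundedness is one natural escape from the mugging: it caps how much any tail outcome can dominate the expectation.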
Me too. Would vote you up twice if I could.
I don't think he mentioned "unbounded" in the post you're citing. He talked about risk aversion, and that can be encoded by changing the utility function.