XiXiDu comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong
This problem is the source of most of the headaches LW causes me, and I appreciate any attention it receives.
Note that when GiveWell, a charity evaluation service, interviewed the SIAI, they suggested that the SIAI itself could be considered a sort of Pascal's Mugging:
Could this be part of the reason why Eliezer Yudkowsky wrote that the SIAI is only a worthwhile charity if the odds of being wiped out by AI are larger than 1%?
Even mathematicians like John Baez are troubled by the unbounded maximization of expected utility.
Could it be that we do not have bounded utility but rather only accept a limited degree of uncertainty?
When people buy insurance, they often plan for events that are less probable than 1%. The intuitive difficulty here is not that you act on an event with probability of 1%, but that you act on an event where the probability (be it 1% or 10% or 0.1%) is estimated intuitively, so that you have no frequency statistics to rely on, and there remains great uncertainty about the value of the probability.
People fear acting on uncertainty that is about to be resolved, for if it is resolved against them, they will face wide agreement that in retrospect their action was wrong. Furthermore, if the action is aimed at mitigating an improbable risk, they even expect the uncertainty to resolve against them. But this consideration doesn't make the estimated probability any lower, and estimation is the best we have.
The analogy with insurance isn't exact. One could argue (though I think one would be wrong) that diminishing returns related to bounded utility start setting in on scales larger than the kinds of events people typically insure against, but smaller than whatever fraction of astronomical waste justifies investing in combating 1% existential risk probabilities.
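To make the bounded-utility point concrete, here is a toy sketch (the specific numbers, the saturation scale, and the exponential form of the bound are all illustrative assumptions, not anything from the thread) of why a bounded utility function blocks a Pascal's Mugging while an unbounded one does not:

```python
import math

def expected_gain(prob, utility):
    """Expected utility of accepting the mugger's offer."""
    return prob * utility

def bounded(u, u_max=100.0, scale=100.0):
    """An illustrative bounded utility function: grows roughly linearly
    for small u, but saturates at u_max as raw utility u grows."""
    return u_max * (1 - math.exp(-u / scale))

p = 1e-10           # the mugger's scenario is judged extremely improbable
claimed = 3 ** 100  # but the claimed payoff can be made arbitrarily large

# With unbounded utility, the mugger wins: the claimed payoff grows
# faster than any fixed probability estimate shrinks.
unbounded_ev = expected_gain(p, claimed)

# With bounded utility, the expected gain can never exceed p * u_max,
# no matter how large a payoff the mugger claims.
bounded_ev = expected_gain(p, bounded(claimed))

assert unbounded_ev > 1       # the mugging "works" on an unbounded maximizer
assert bounded_ev <= p * 100  # but not on a bounded one
```

The design point is that diminishing returns alone are not enough: the mugger can always inflate the claimed payoff to outrun any unbounded utility function, so only an actual bound caps the product.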
Me too. Would vote you up twice if I could.
I don't think he mentioned "unbounded" in the post you're citing. He talked about risk aversion, and that can be encoded by changing the utility function.
The SIAI seems to be progressing slowly. It is difficult to see how their "trust us" approach will get anywhere. The plan of writing code in secret in a basement looks pretty crazy to me. On the more positive side, they do have some money and some attention.
...but overall - why consider the possibility of the SIAI taking over the world? That does not look like a very likely outcome.