timtyler comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This problem is the source of most of the headache that LW causes me, and I appreciate any attention it receives.
Note that when GiveWell, a charity evaluation service, interviewed the SIAI, they hinted at the possibility that the SIAI could be considered a sort of Pascal's Mugging:
Could this be part of the reason why Eliezer Yudkowsky wrote that the SIAI is only a worthwhile charity if the odds of being wiped out by AI are larger than 1%?
Even mathematicians like John Baez are troubled by the unbounded maximization of expected utility.
Could it be that we do not have bounded utility but rather only accept a limited degree of uncertainty?
The SIAI seems to be progressing slowly. It is difficult to see how their "trust us" approach will get anywhere. The plan of writing code in secret in a basement looks pretty crazy to me. On the more positive side, they do have some money and some attention.
...but overall - why consider the possibility of the SIAI taking over the world? That does not look like a very likely outcome.