Manfred comments on A question on rationality. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Of course Bob becomes a monster superintelligence hell bent on using all the energy in the universe for his own selfish reasons. I mean, duh! It's just that "his own selfish reasons" involves things like cute puppies. If Bob cares about cute puppies, then Bob will use his monstrous intelligence to bend the energy of the universe towards cute puppies. And love and flowers and sunrises and babies and cake.
And killing the unbelievers if he's a certain sort - I don't want to make this sound too great. But power doesn't corrupt. Corruption corrupts. Power just lets you do what you want, and people don't want "to stay alive." People want friends and cookies and swimming with dolphins and ice skating and sometimes killing the unbelievers.
I follow you. It does resolve my question of whether rationality + power necessarily leads to a terrible outcome. I had asked whether a perfect rationalist, given enough time and resources, would become perfectly selfish. I believe I understand the answer to be no.
Matt_Simpson gave a similar answer:
If Bob's utility function is puppies, babies, and cakes, then he would not change his utility function for a universe without these things. Do I have the right idea now?
Indeed. The equation for terrible outcomes is "rationality + power + asshole" (where 'asshole' is defined as the vast majority of utility functions, which will value terrible things). The 'rationality' part is optional to the extent that you can substitute it with more power. :)
When the monster superintelligence Bob is talking about 'cute puppies', let's just say that 'of the universe' isn't the kind of dominance he has in mind!