D227 comments on A question on rationality. - Less Wrong
I follow you. It does resolve my question of whether rationality + power necessarily leads to terrible outcomes. I had asked whether a perfect rationalist, given enough time and resources, would become perfectly selfish. I believe I understand the answer to be no.
Matt_Simpson gave a similar answer:
If Bob's utility function is puppies, babies, and cakes, then he would not change his utility function for a universe without these things. Do I have the right idea now?
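The intuition here can be sketched in code. This is a toy model with made-up names (`utility_bob`, `outcome_if_optimizing`, the world dictionary), not anything from the discussion itself: an expected-utility maximizer evaluates the option of self-modifying with its *current* utility function, so a switch to a function that ignores puppies, babies, and cakes scores poorly by the very standard the agent is using to decide.

```python
# Toy sketch (hypothetical names): why an expected-utility maximizer
# keeps its current utility function rather than swapping it out.

def utility_bob(world):
    # Bob's current utility function: values puppies, babies, and cakes.
    return world["puppies"] + world["babies"] + world["cakes"]

def utility_other(world):
    # An alternative utility function that values none of those things.
    return world["paperclips"]

def outcome_if_optimizing(utility, base_world):
    # Whichever utility function the agent adopts, it then steers the
    # world toward whatever that function rewards.
    world = dict(base_world)
    if utility is utility_bob:
        world["puppies"] += 10
    else:
        world["paperclips"] += 10
    return world

base = {"puppies": 1, "babies": 1, "cakes": 1, "paperclips": 0}

# Bob scores both futures with his CURRENT utility function.
keep = utility_bob(outcome_if_optimizing(utility_bob, base))      # 13
switch = utility_bob(outcome_if_optimizing(utility_other, base))  # 3

print(keep > switch)  # True: keeping his utility function wins
```

The key line is that both branches are scored by `utility_bob`; the decision to self-modify is itself an action, evaluated like any other.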
Indeed. The equation for terrible outcomes is "rationality + power + asshole" (where 'asshole' is defined as the vast majority of utility functions, which value terrible things). The 'rationality' part is optional to the extent that you can substitute it with more power. :)