Solvent comments on Stupid Questions Open Thread - Less Wrong Discussion
Comments (265)
If the SIAI engineers figure out how to construct friendly super-AI, why would they care about making it respect the values of anyone but themselves? What incentive do they have to program an AI that is friendly to humanity, and not just to themselves? What's stopping LukeProg from appointing himself king of the universe?
The short answer is that they're nice people, and they understand that power corrupts, so they can't even rationalize wanting to be king of the universe for altruistic reasons.
Also, a post-Singularity future will probably (hopefully) be absolutely fantastic for everyone, so it shouldn't matter whether or not you selfishly get the AI to prefer you.