Lara_Foster2 comments on Optimization - Less Wrong

Post author: Eliezer_Yudkowsky | 13 September 2008 04:00PM

Comment author: Lara_Foster2 | 14 September 2008 07:42:40PM | -1 points

It's not about resisting the temptation to meddle, but about what will, in fact, maximize human utility. The AI will not care whether utility is maximized by us or by it, as long as it is maximized (unless you want to program in 'autonomy' as an axiom, but I'm sure there are other problems with that). I think there is a high probability that, given its power, the FAI will determine that it *can* best maximize human utility by taking away human autonomy. It might give humans the *illusion* of autonomy in some circumstances, and lo and behold, these people will be 'happier' than non-delusional people would be. Heck, what's to keep it from putting everyone in their own individual simulation?

I was assuming some axiom that stated 'no wire-heading', but it's very hard for me to even know what that means in a post-singularity context. I'm very skeptical of handing over control of my life to any dictatorial source of power, no matter how 'friendly' it's programmed to be. Now, if Eliezer is convinced it's a choice between his creation as dictator and someone else's destroying the universe, then it is understandable why he is working toward the best dictator he can devise... but I would rather not have either.