Nick_Tarleton comments on Optimization - Less Wrong

Post author: Eliezer_Yudkowsky 13 September 2008 04:00PM

Comment author: Nick_Tarleton 14 September 2008 07:23:15PM 0 points

Thus Being Secure is > Working to be secure > Not being secure > being secure.

These are judgments made at different times, under different circumstances (having less or more money, being more or less burned out). This doesn't sound like a "real" intransitive preference.
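The distinction can be made concrete: pooling judgments made under different circumstances can produce an apparent preference cycle, while each circumstance's judgments remain transitive on their own. A minimal sketch (the state names and cycle-detection helper are illustrative, not from the original discussion):

```python
# Illustration: an "intransitive" preference that dissolves once each
# judgment is indexed by the circumstances under which it was made.

def has_cycle(prefs):
    """Return True if the (better, worse) preference pairs contain a cycle."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)

    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in graph.get(node, ()))

    return any(visit(node, set()) for node in graph)

# Pooled across contexts, the judgments look like a genuine cycle:
pooled = [("being_secure", "working_to_be_secure"),
          ("working_to_be_secure", "not_being_secure"),
          ("not_being_secure", "being_secure")]
assert has_cycle(pooled)

# Indexed by context, each set of judgments is acyclic on its own:
burned_out = [("not_being_secure", "being_secure")]
well_rested = [("being_secure", "working_to_be_secure"),
               ("working_to_be_secure", "not_being_secure")]
assert not has_cycle(burned_out)
assert not has_cycle(well_rested)
```

The point is that the three comparisons were never made by one agent in one state, so they don't jointly constitute a single intransitive preference relation.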

whatever simulation the fAI decides on for post-singularity humanity, I think I'd rather be free of it to fuck up my own life. Me and many others.... Why should we trust an AI that maximizes human utility, even if it understands what that means?

But then, your freedom is a factor in deciding what's best for you. It sounds like you're thinking of an FAI as a well-intentioned but extremely arrogant human, who can't resist the temptation to meddle where it rationally shouldn't.