denisbider comments on Welcome to Heaven - Less Wrong

23 Post author: denisbider 25 January 2010 11:22PM


Comment author: denisbider 26 January 2010 01:35:15PM 2 points [-]

If we take for granted that an AI that is friendly to all potential creatures is out of the question - that the only type of FAI we really want is one that's just friendly to us - then the following is the next issue I see.

If we all think it's so great to be autonomous, to feel like we're doing all of our own work, all of our own thinking, all of our own exploration - then why does anyone want to build an AI in the first place?

Isn't the world, as it is, lacking an all-powerful AI, perfectly suited to our desires of control and autonomy?

Suppose an AI-friendly-to-you exists, and you know that you can always ask it to expand your mind, and download into you everything it knows about the issues you care for, short-circuiting thousands of years of work that it would otherwise take for you to make the same discoveries.

Doesn't it seem pointless to be doing all that work, if you know that FAI can already provide you with all the answers?

Furthermore, again supposing an AI-friendly-to-you exists - you know that you can always ask it to wire-head you. In any given year, there is a negligible but non-zero probability that you'll succumb to the temptation. Once you do succumb, it will feel so great that you will never again want to do the boring "thinking and doing stuff." You will be constantly blissful, and anything you want to know about the universe will be immediately available to you, through a direct interface with FAI.

It doesn't take much to see that, whether it takes a thousand years or a million years before you succumb, you will eventually choose to be wire-headed; you will choose this much sooner than the universe ends; and the vast majority of your total existence will be lived that way.

Comment author: AdeleneDawner 26 January 2010 01:44:32PM 1 point [-]

If the FAI values that we value independence, and values that we value autonomy - which I think it would have to, to be considered properly Friendly - and if wireheading is a threat to our ability to maintain those values, it doesn't make sense that the FAI would make wireheading available for the asking. In that case, it makes much more sense that the FAI would actively protect us from wireheading, as it would from any other existential threat.

(Also, just because it would protect us from existential threats, that doesn't imply that it would protect us from non-existential ones. Part of the idea is that it's very smart: it can figure out the balance of protecting and not-protecting that best preserves its values, and by extension ours.)