thomblake comments on Holden's Objection 1: Friendliness is dangerous - Less Wrong

11 Post author: PhilGoetz 18 May 2012 12:48AM




Comment author: thomblake 24 May 2012 02:58:19PM 1 point [-]

I feel like a cost-benefit analysis has gone on here, the internals of which I'm not privy to.

Shouldn't it be possible that becoming a singleton is expensive and/or would conflict with one's values?

Comment author: DanArmak 24 May 2012 03:11:14PM 0 points [-]

It's certainly possible. My analysis so far is only on an "all else being equal" footing.

I do feel that, absent other data, the safer assumption is that if an AI is capable of becoming a singleton at all, expense (in terms of energy/matter, space, or time) isn't going to be the thing that stops it. But that may just be a cached thought, because I'm used to thinking of an AI trying to become a singleton as a dangerous potential adversary. I would appreciate your insight.

As for values, conflicting values can certainly exist, from ones that mention the subject directly ("don't move everyone to a simulation in a way they don't notice" would close off one obvious route) to ones that impinge on it in unexpected ways ("no first strike against aliens" becomes "oops, an alien-built paperclipper just ate Jupiter from the inside out").