timtyler comments on Complexity of Value ≠ Complexity of Outcome - Less Wrong

Post author: Wei_Dai 30 January 2010 02:50AM


Comment author: timtyler 31 January 2010 12:43:57AM 0 points

Machine intelligences seem likely to vary in their desirability to humans.

Friendly/unFriendly seems rather binary; maybe a "desirability" scale would help.

Alas, this seems to be drifting away from the topic.

Comment author: gregconen 31 January 2010 01:34:10AM 6 points

Machine intelligences seem likely to vary in their desirability to humans.

Technically true. However, most naive superintelligence designs will simply kill all humans. You've already accomplished quite a lot if you even reach a failed utopia, much less the point of deciding whether you want Prime Intellect or Coherent Extrapolated Volition.

It's also unlikely you'll accidentally do something significantly worse than killing all humans, for the same reasons: a superintelligent sadist is just as hard to create as a utopia.