Artaxerxes comments on Superintelligence 8: Cognitive superpowers - Less Wrong Discussion

7 Post author: KatjaGrace 04 November 2014 02:01AM

Comment author: Artaxerxes 04 November 2014 09:03:42PM (0 points)

Suppose you or I suddenly woke up with superintelligence, but with our existing goal structure intact (and a desire to be cautious).

Can you show me why a decent person like (I presume) you or I with these new powers would suddenly choose to slaughter the human race as an instrumental goal to accomplishing some other ends?

If CEV (or whatever we're up to at the moment) turns out to be a dud and human values are inexorably inconsistent and mutually conflicting, one possible solution would be for me to kill everyone and try again: perhaps building roughly human-like beings with complex values I can actually satisfy, values that aren't messed up because they were made by an intelligent designer (me) rather than Azathoth.

But really, the problem is that a superintelligent AI has every chance of being nothing like a human, and although we may try to give it innocuous goals we have to remember that it will do what we tell it to do, and not necessarily what we want it to do.

See this Facing the Intelligence Explosion post, or this Sequence post, or Smarter Than Us chapter 6, or something else that says the same thing.

Comment author: SteveG 05 November 2014 02:21:33AM (0 points)

Did that. So let's get busy and start trying to fix the issues!

The ethical code/values that this new entity gets need not be extremely simple. Ethical codes typically come in MULTI-VOLUME SETS.

Comment author: Artaxerxes 05 November 2014 05:00:36AM (0 points)

Did that. So let's get busy and start trying to fix the issues!

Sounds good to me. What do you think of MIRI's approach so far?

I haven't read all of their papers on Value Loading yet.