Artaxerxes comments on Superintelligence 8: Cognitive superpowers - Less Wrong
Comments (95)
If CEV (or whatever we're up to at the moment) turns out to be a dud and human values are irreconcilably inconsistent and mutually conflicting, one possible solution would be for me to kill everyone and try again, perhaps building roughly humanish beings with complex values I can actually satisfy. Those values wouldn't be messed up, because they would come from an intelligent designer (me) rather than Azathoth.
But really, the problem is that a superintelligent AI has every chance of being nothing like a human, and although we may try to give it innocuous goals, we have to remember that it will do what we tell it to do, not necessarily what we want it to do.
See this Facing the Intelligence Explosion post, or this Sequence post, or Smarter Than Us chapter 6, or something else that says the same thing.
Did that. So let's get busy and start trying to fix the issues!
The ethical code and values that this new entity gets need not be extremely simple. Ethical codes typically come in MULTI-VOLUME SETS.
Sounds good to me. What do you think of MIRI's approach so far?
I haven't read all of their papers on Value Loading yet.