Kaj_Sotala comments on Contaminated by Optimism - Less Wrong

Post author: Eliezer_Yudkowsky 06 August 2008 12:26AM



Comment author: Kaj_Sotala 06 August 2008 06:05:08PM 0 points

It seems to me like the simplest way to solve friendliness is: "Ok AI, I'm friendly so do what I tell you to do and confirm with me before taking any action." It is much simpler to program a goal system that responds to direct commands than to somehow try to infuse 'friendliness' into the AI.

As was pointed out, this might not have the consequences one wants. However, even if that weren't true, I'd still be leery of this option - it would effectively be giving one human unlimited power. History has shown that people who are given unlimited power (or something close to it) tend to misuse it easily, even if they started out with good intentions.