Houshalter comments on Dreams of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 31 August 2008 01:20AM





Comment author: denis_bider 02 September 2008 12:39:02AM 2 points

Why build an AI at all?

That is, why build a self-optimizing process?

Why not build a process that accumulates data and helps us find relationships and answers that we would not have found ourselves? And if we want to use that same process to improve itself, why not perform that improvement ourselves?

Why be locked out of the optimization loop, and then inevitably become subjects of a God, when we can make ourselves a critical component in that loop, and thus 'be' gods?

I find it perplexing that anyone would ever want to build an automatic self-optimizing AI and switch it to "on". No matter how well you planned things out, no matter how sure you are of yourself, by turning the thing on, you are basically relinquishing control over your future to... whatever genie it is that pops out.

Why would anyone want to do that?

Comment author: Houshalter 04 May 2014 07:01:16AM 1 point

Well, if it were truly friendly, it could do things like stop other people from doing that, cure your diseases, stop war, etc. If it's not friendly, then of course we don't want to switch it on. But other people might do so anyway, because they don't understand the friendliness problem or the difficulty of AI boxing.

Comment author: TheAncientGeek 18 September 2015 12:02:35PM 0 points

Most people would not want to do that, because keeping humans in the loop is a common safety principle. Planes have human pilots as well as autopilots, etc.