TheOtherDave comments on AI risk, new executive summary - Less Wrong

Post author: Stuart_Armstrong, 18 April 2014 10:45AM




Comment author: shminux, 20 April 2014 07:55:23AM, -1 points

Re goals, I feel that comparing advanced AGI to humans is like comparing humans to chimps: regardless of how much we want to explain human ethics and goals to a chimp, and however much effort we put in, its mind just isn't equipped to comprehend them. Similarly, even the most benevolent and conscientious AGI would be unable to explain its goal system or its ethical system to even a very smart human. Like chimps, humans have their own limits of comprehension, even though we do not know, from the inside, where those limits lie.

Comment author: TheOtherDave, 20 April 2014 05:20:37PM, 1 point

Can you say more about what you're expecting a successful explanation to comprise, here?

E.g., suppose an AGI attempts to explain its ethics and goals to me, and at the end of that process it generates thousand-word descriptions of N future worlds and asks me to rank them in order of its preferences as I understand them. I expect to be significantly better at predicting the AGI's rankings than I was before the explanation.

I don't expect to be able to do anything equivalent with a chimp.

Do our expectations differ here?

Comment author: shminux, 20 April 2014 06:03:33PM, 1 point

E.g., suppose an AGI attempts to explain its ethics and goals to me

"Suppose an AGI attempts to explain its <untranslatable1> and <untranslatable2> to me" is what I expect it to sound like to humans, if we were to replace human abstractions with those an advanced AGI would use. It would not even call these abstractions "ethics" or "goals", any more than we call ethics "groom" and goals "sex" when talking to a chimp.

suppose an AGI attempts to explain its ethics and goals to me, and at the end of that process it generates thousand-word descriptions of N future worlds and asks me to rank them in order of its preferences as I understand them.

I do not expect it to be able to generate such descriptions at all, due to the limitations of the human mind and human language. So, yes, our expectations differ here. I do not think that human intelligence has reached some magical threshold past which everything can be explained to it, given enough effort, when this was not possible with "less advanced" animals. For all I know, I am not even using the right terms. Maybe an AGI's improvement on the term "explain" is incomprehensible to us, just as "explain", translated into chimp or cat, might come out as "show", or something.

Comment author: TheOtherDave, 20 April 2014 10:44:12PM, 0 points

(shrug) Translating the terms is rather beside my point here.

If the AGI is using these things to choose among possible future worlds, then I expect it to be able to teach me to choose among possible future worlds more like it does than I would without that explanation.

I'm happy to call those things goals, ethics, morality, etc., even if those words don't capture what the AGI means by them. (I don't know that they really capture what I mean by them either, come to that.) Perhaps I would do better to call them "groom" or "fleem" or "untranslatable1", or to refer to them by means of a specific shade of orange. I don't know; but as I say, I don't really care; terminology is largely independent of explanation.

But, sure, if you expect that it's incapable of doing that, then our expectations differ.

I'll note that my expectations don't depend on my having reached a magical threshold, or on everything being explainable to me given enough effort.