TheAncientGeek comments on AI risk, new executive summary - Less Wrong

12 Post author: Stuart_Armstrong 18 April 2014 10:45AM


Comment author: shminux 20 April 2014 07:55:23AM *  -1 points [-]

Re goals, I feel that comparing advanced AGI to humans is like comparing humans to chimps: regardless of how much we want to explain human ethics and goals to a chimp, and however much effort we put in, its mind just isn't equipped to comprehend them. Similarly, even the most benevolent and conscientious AGI would be unable to explain its goal system or its ethical system to even a very smart human. Like chimps, humans have their own limits of comprehension, even though we do not know what they are from the inside.

Comment author: TheAncientGeek 20 April 2014 08:55:12AM *  0 points [-]

Is the problem supposed to be that the human doesn't have enough intelligence, or that we have some kind of highly parochial rationality?

Comment author: shminux 20 April 2014 05:34:15PM 0 points [-]

Not enough intelligence, yes. And rationality is a part of intelligence. Also, see my reply to hen.

Comment author: TheAncientGeek 21 April 2014 10:57:47AM *  -1 points [-]

But that's not really analogous to the human–chimp gap, which is qualitative... chimps don't have language.