TheAncientGeek comments on AI risk, new executive summary - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Re goals, I feel that comparing advanced AGI to humans is like comparing humans to chimps: regardless how much we want to explain human ethics and goals to a chimp, and how much effort we put in, its mind just isn't equipped to comprehend them. Similarly, even the most benevolent and conscientious AGI would be unable to explain its goal system or its ethical system to even a very smart human. Like chimps, humans have their own limits of comprehension, even though we do not know what they are from the inside.
Is the problem supposed to be that the human doesn't have enough intelligence, or that we have some kind of highly parochial rationality?
Not enough intelligence, yes. And rationality is a part of intelligence. Also, see my reply to hen.
But that's not really analogous to the human–chimp gap, which is qualitative... chimps don't have language.