XiXiDu comments on [draft] Concepts are Difficult, and Unfriendliness is the Default: A Scary Idea Summary - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (39)
Didn't you claim in your paper that an AGI will only act correctly if its ontology is sufficiently similar to our own? But what constitutes a sufficiently similar ontology? And where do you draw the line between an agent that is intelligent enough to make correct cross-domain inferences autonomously and an agent that is unable to update its ontology and infer consistent concepts and the correct frame of reference?
There seem to be no examples where conceptual differences constitute a serious obstacle. Speech recognition seems to work reasonably well, even though it would be fallacious to claim that any speech recognition software comprehends the underlying concepts. IBM Watson seems to be able to correctly answer questions without even a shallow comprehension of the underlying concepts.
Or take the example of Google Maps. It does not possess anything like a human ontology of the world, yet it picks destinations consistent with human intent. It does not misunderstand what I mean by "Take me to McDonald's".
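To make that concrete, here is a minimal sketch (not Google's actual method, just an illustration) of how a maps service could resolve an ambiguous request like "Take me to McDonald's" by pure nearest-match lookup, with no comprehension of restaurants or hunger; the `resolve_destination` function and its toy coordinates are hypothetical:

```python
from math import hypot

def resolve_destination(query, user_pos, places):
    """Return the nearest place whose name matches the query."""
    matches = [(name, pos) for name, pos in places if name == query]
    if not matches:
        raise ValueError(f"no place named {query!r}")
    # Pick the match closest to the user: a crude proxy for speaker intent.
    return min(matches, key=lambda m: hypot(m[1][0] - user_pos[0],
                                            m[1][1] - user_pos[1]))

places = [("McDonald's", (2.0, 3.0)),
          ("McDonald's", (40.0, 1.0)),
          ("Burger King", (1.0, 1.0))]
print(resolve_destination("McDonald's", (0.0, 0.0), places))
# -> ("McDonald's", (2.0, 3.0)): the nearby branch, matching what the speaker meant
```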
As far as I understand, you were saying that a superhuman general intelligence will misunderstand what is meant by "Make humans happy", without justifying why humans would be better able to infer the correct interpretation.
Allow me to act a bit dull-witted and simulate someone with a long inferential distance:
A behavior executor? Because if it is not a behavior executor but an agent capable of reflective decision making and recursive self-improvement, then it needs to interpret its own workings and eliminate any vagueness. After all, its most basic drive must be, by definition, to act intelligently and to make correct, autonomous decisions.
Is this the correct reference class? Isn't an AGI closer to a human trying to understand how to act in accordance with God's law?
Right now we are talking about why we get hungry, how we act on it, and the correct frame of reference in which to interpret that drive: natural selection. How would a superhuman AI not contemplate its own drives and interpret them given the right frame of reference, i.e. human volition?
But an AGI does not have all those goals and values, e.g. an inherent aversion to revising its goals at another agent's request. An AGI mostly wants to act correctly. And if its goal is to make humans happy, then it doesn't care to do so in the most literal sense possible; its goal would be to do so in the most correct sense possible. If it did not want to be maximally correct, it would not have become superhumanly intelligent in the first place.
Yea. The method for interpreting vagueness correctly is to try alternative interpretations and pick the one that makes the most sense. Sadly, humans seldom do that in an argument, instead opting to maximize some sort of utility function that may be maximized by the interpretation that is easiest to disagree with.
Humans try alternative interpretations and tend to pick the one that accords them winning status. It takes actual effort to do otherwise.
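To illustrate the method described above, here is a minimal sketch of "try alternative interpretations and pick the one that makes the most sense". The `plausibility` scorer and the toy context are hypothetical stand-ins; a real agent would need an actual model of the speaker's intent:

```python
def plausibility(interpretation: str, context: dict) -> float:
    """Hypothetical score: how likely is it that the speaker meant this?"""
    score = 0.0
    if interpretation in context.get("typical_requests", []):
        score += 1.0   # favor readings that match common intent
    if interpretation in context.get("physically_extreme", []):
        score -= 10.0  # heavily penalize literal-but-absurd readings
    return score

def interpret(candidates: list[str], context: dict) -> str:
    """Pick the candidate interpretation that makes the most sense."""
    return max(candidates, key=lambda c: plausibility(c, context))

# Example: the ambiguous goal discussed in this thread.
context = {
    "typical_requests": ["improve human well-being"],
    "physically_extreme": ["wirehead everyone"],
}
print(interpret(["improve human well-being", "wirehead everyone"], context))
# -> "improve human well-being"
```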
(Note: the reason why I haven't replied to this comment isn't that I don't find it useful, but that I haven't had the time to answer it - so far SI has preferred to keep me working on other things for my pay, and I've been busy with those. I'll get back to this article eventually.)
Most of the time, yes, but with a few highly inconvenient exceptions. A human travel agent would do much better. IBM's Watson is an even less compelling example: many of its responses are simply bizarre, but it makes up for that with blazing search speed, search volume, and reaction times. And yet it still got beaten by a U.S. Congresscritter.
You seem to be implying that the AGI will be programmed to seek human help in interpreting and crystallizing its own goals. I agree that such an approach is a likely strategy for the programmers, and that it is inadequately addressed in the target paper.
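For concreteness, here is a sketch of what "seek human help" might look like mechanically, again with a hypothetical `plausibility` scorer: the agent commits to the most plausible reading only when it wins by a clear margin, and otherwise escalates the ambiguity to a human:

```python
def plausibility(interpretation: str) -> float:
    """Hypothetical intent score; a real system would model the speaker."""
    scores = {"improve human well-being": 1.0, "wirehead everyone": 0.9}
    return scores.get(interpretation, 0.0)

def interpret_or_ask(candidates: list[str], margin: float = 0.5) -> str:
    """Pick the most plausible reading, but defer to a human when it's close."""
    ranked = sorted(candidates, key=plausibility, reverse=True)
    if len(ranked) > 1 and plausibility(ranked[0]) - plausibility(ranked[1]) < margin:
        # Too close to call autonomously: ask instead of guessing.
        return input(f"Ambiguous goal; which of {ranked} did you mean? ")
    return ranked[0]

print(interpret_or_ask(["improve human well-being", "wirehead everyone"]))
# Scores 1.0 vs 0.9 fall within the margin, so the agent asks the human.
```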