Liron comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A particularly troubling quote from the post:
The obvious truth is that mind-design space contains every combination of intelligence and empathy.
I don't find that "truth" either obvious or true.
Would you say that "The obvious truth is that mind-design space contains every combination of intelligence and rationality"? How about "The obvious truth is that mind-design space contains every combination of intelligence and effectiveness"?
One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.
Two questions:
1) The consequences for whom?
2) How much empathy do you have for, oh, say, an E. coli bacterium?
Connecting these two questions is left as an exercise for the reader. ;-)
Human psychopaths are a counterexample to this claim, and they seem to be doing alright in spite of active efforts by the rest of humanity to detect and eliminate them.
'Detect and eliminate' or 'detect and affiliate with the most effective ones'. One or the other. ;)
There are no efforts by the rest of humanity to detect and eliminate the sort of psychopaths who understand it's in their own interests to cooperate with society.
The sort of psychopaths who fail to understand that, and act accordingly, typically end up doing very badly.
Why all the focus on psychopaths? It could be said that certain forms of autism are equally empathy-blinded, and yet people along that portion of the spectrum are often hugely helpful to the human race, and get along just fine with the more neurotypical.
No. There are two bad assumptions in your counterexample.
They are:
Human psychopaths are above the certain point of intelligence that I was talking about.
Human psychopaths are sufficiently long-lived for the consequences to be severe enough.
Hmmmm. #2 says that I probably didn't make clear enough the importance of the length of interaction.
You also appear to have the assumption that my argument is that the AGI fears detection of its unfriendly behavior and any consequences that humanity can apply. Humanity CANNOT apply sufficient negative consequences to a sufficiently powerful AGI. The severe consequences are all missed opportunity costs, which make the AGI sub-optimal and therefore less intelligent than it could be.
What sort of opportunity costs?
The AI can simulate humans if it needs them, for a lower energy cost than keeping the human race alive.
So, why should it keep the human race alive?
The underlying disorders of what is commonly referred to as psychopathy are indeed detectable. I also find it comforting that they are in fact disorders, and that being evil in this fashion is not an attribute of an otherwise high-functioning mind. Psychopaths can be high-functioning in some areas, but a short interaction with them almost always makes it clear that something is wrong.
Homosexuality was also a disorder once. Defining something as a sickness or disorder is a matter of politics as much as anything else.
Cat burning was also a form of entertainment once. Defining something as fun or entertainment is a matter of politics as much as anything else. The same goes for friendliness. I fear that once we pinpoint it, it'll be outdated.
What do you mean by psychopathy?
At least one sort of no-empathy person is unusually good at manipulating most people.
Everybody who is known to be a psychopath is a bad psychopath, by definition; a skilled psychopath is one who will not let people figure out that he's a psychopath.
Of course, this means that the existence of a sufficiently skilled psychopath is, in everyday practice, unprovable and unfalsifiable (at least to the degree that we cannot tell the difference between a good actor and someone genuinely feeling empathy; I suppose you might figure out something by measuring people's brain activity while they watch a torture scene).
Even then it is far from definitive. Experienced doctors, for example, lose much of the ability to feel certain kinds of physical empathy - their brains will look closer to a good actor's brain than to that of a naive individual exposed to the same stimulus. That's just practical adaptation, and good for patient and practitioner alike.
Considering the number of horror stories I've heard about doctors who just don't pay attention, I'm not sure you're right that doctors acting their empathy is good for patients.
Cite? I'm curious about where and when that study was done.
Don't know. Never saw it first hand - I heard it from a doctor.
Thanks for your reply, but I think I'm going to push for some community norms for sourcing information from studies, ranging from "read the whole thing carefully" to "heard about it from someone."
Only on lesswrong - we look down our noses at people who take the word of medical specialists.
I'll add that at particularly high levels of competence it makes very little difference whether you are a psychopath who has mastered the deception of others or a hypocrite (normal person) who has mastered deception of yourself.
That is probably because you don't share a definition of intelligence with most of those here.
Perhaps look through http://www.vetta.org/definitions-of-intelligence/ - and see if you can find your position.
Nope. I agree with the vast majority of the vetta definitions.
But let's go with Marcus Hutter - "There are strong arguments that AIXI is the most intelligent unbiased agent possible in the sense that AIXI behaves optimally in any computable environment."
Now, which is optimal -- opting to play a positive-sum game of potentially infinite length and utility with cooperating humans, OR passing up that game forever for a modest short-term gain?
Assume, for the purposes of argument, that the AGI does not have an immediate pressing need for the gain. (Otherwise we recurse into how pressing the need is -- and yes, if the need is pressing enough, the intelligent thing to do, unless the agent's goal is to preserve humanity, is to take the short-term gain and wipe out humanity -- but how would a super-intelligent AGI have gotten itself into that situation?) This should answer all of the questions of the form "Well, what if the AGI had a short-term preference and humans weren't it?"
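The trade-off above can be sketched as a toy discounted-utility comparison. All the numbers here are made up purely for illustration, and the discount factor `gamma` is an assumption about how much the agent values future rewards; nothing in the thread specifies these values.

```python
# Toy model (illustrative numbers only): an agent weighs a one-time
# "defect" payoff against an indefinitely repeated positive-sum game
# with cooperating humans, discounting future rewards by gamma.

def defect_value(one_time_gain: float) -> float:
    """Utility of taking the short-term gain and ending the game."""
    return one_time_gain

def cooperate_value(per_round_gain: float, gamma: float) -> float:
    """Discounted utility of an infinite stream of cooperative rounds:
    sum over t >= 0 of per_round_gain * gamma**t = per_round_gain / (1 - gamma)."""
    assert 0 <= gamma < 1, "gamma must lie in [0, 1) for the sum to converge"
    return per_round_gain / (1.0 - gamma)

print(defect_value(100.0))         # the modest one-time gain
print(cooperate_value(5.0, 0.99))  # the cooperative stream, roughly 500
```

The point of the geometric sum is just that for a patient agent (gamma close to 1), even a small per-round gain from cooperation can dominate a much larger one-time payoff -- which is the shape of the argument being made, though of course it only holds under the stated assumption that the agent has no pressing immediate need.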
I am jumping in here from Recent Comments, so perhaps I am missing context - but how is AIXI interacting with humanity an infinite positive-sum gain for it?
It doesn't seem like AIXI could even expect zero-sum gains from humanity: we are using up a lot of what could be computronium.
That definition doesn't explicitly mention goals. Many of the definitions do explicitly mention goals. What the definitions usually don't mention is what those goals are - and that permits super-villains, along the lines of General Zod.
If (as it appears) you want to argue that evolution is likely to produce super-saints - rather than super-villains - then that's a bit of a different topic. If you wanted to argue that, "requirement" was probably the wrong way of putting it.
Now if you had suggested that intelligence cannot evolve beyond a certain point unless accompanied by empathy ... that would be another matter. I could easily be convinced that a social animal requires empathy almost as much as it requires eyesight, and that non-social animals cannot become very intelligent because they would never develop language.
But I see no reason to think that an evolved intelligence would have empathy for entities with whom it had no social interactions during its evolutionary history. And no a priori reason to expect any kind of empathy at all in an engineered intelligence.
Which brings up an interesting thought. Perhaps human-level AI already exists. But we don't realize it because we have no empathy for AIs.
The most likely location for an "unobserved" machine intelligence is probably the NSA's basement.
However, it seems challenging to believe that a machine intelligence would need to stay hidden for very long.
MIT's Leonardo? Engineered super-cuteness!
Well, it does contain all those points, but some weird points are weighted much less heavily.