Liron comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: Liron 30 October 2010 07:04:50PM 21 points [-]

A particularly troubling quote from the post:

I think the relation between breadth of intelligence and depth of empathy is a subtle issue which none of us fully understands (yet). It's possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences. But I'm not terribly certain of this, any more than I'm terribly certain of its opposite.

The obvious truth is that mind-design space contains every combination of intelligence and empathy.

Comment author: mwaser 30 October 2010 07:18:19PM -1 points [-]

I don't find that "truth" either obvious or true.

Would you say that "The obvious truth is that mind-design space contains every combination of intelligence and rationality"? How about "The obvious truth is that mind-design space contains every combination of intelligence and effectiveness"?

One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.

Comment author: pjeby 30 October 2010 08:04:35PM 22 points [-]

One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.

Two questions:

1) The consequences for whom?

2) How much empathy do you have for, oh, say, an E. coli bacterium?

Connecting these two questions is left as an exercise for the reader. ;-)

Comment author: jimrandomh 30 October 2010 07:19:41PM 21 points [-]

One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.

Human psychopaths are a counterexample to this claim, and they seem to be doing alright in spite of active efforts by the rest of humanity to detect and eliminate them.

Comment author: wedrifid 01 November 2010 01:12:36AM 5 points [-]

Human psychopaths are a counterexample to this claim, and they seem to be doing alright in spite of active efforts by the rest of humanity to detect and eliminate them.

'Detect and eliminate' or 'detect and affiliate with the most effective ones'. One or the other. ;)

Comment author: rwallace 01 November 2010 03:03:12PM 4 points [-]

There are no efforts by the rest of humanity to detect and eliminate the sort of psychopaths who understand it's in their own interests to cooperate with society.

The sort of psychopaths who fail to understand that, and act accordingly, typically end up doing very badly.

Comment author: Eneasz 01 November 2010 10:13:39PM 2 points [-]

Why all the focus on psychopaths? It could be said that certain forms of autism are equally empathy-blinded, and yet people along that portion of the spectrum are often hugely helpful to the human race, and get along just fine with the more neurotypical.

Comment author: mwaser 01 November 2010 01:55:40PM *  0 points [-]

No. There are two bad assumptions in your counterexample.

They are:

  1. Human psychopaths are above the certain point of intelligence that I was talking about.

  2. Human psychopaths are sufficiently long-lived for the consequences to be severe enough.

Hmmmm. #2 says that I probably didn't make clear enough the importance of the length of interaction.

You also appear to assume that my argument is that the AGI fears detection of its unfriendly behavior and any consequences that humanity can apply. Humanity CANNOT apply sufficient negative consequences to a sufficiently powerful AGI. The severe consequences are all missed opportunity costs, which make the AGI sub-optimal and thereby less intelligent than is possible.

Comment author: Kingreaper 02 November 2010 09:03:16AM 3 points [-]

What sort of opportunity costs?

The AI can simulate humans if it needs them, for a lower energy cost than keeping the human race alive.

So, why should it keep the human race alive?

Comment author: udo 31 October 2010 12:00:58PM -1 points [-]

The underlying disorders of what is commonly referred to as psychopathy are indeed detectable. I also find it comforting that they are in fact disorders and that being evil in this fashion is not an attribute of an otherwise high-functioning mind. Psychopaths can be high-functioning in some areas, but a short interaction with them almost always makes it clear that something is wrong.

Comment author: Kaj_Sotala 01 November 2010 09:28:13AM 6 points [-]

I also find it comforting that they are in fact disorders

Homosexuality was also a disorder once. Defining something as a sickness or disorder is a matter of politics as much as anything else.

Comment author: XiXiDu 01 November 2010 10:07:47AM 0 points [-]

Cat burning was also a form of entertainment once. Defining something as fun or entertainment is a matter of politics as much as anything else. The same goes for friendliness. I fear that once we pinpoint it, it'll be outdated.

Comment author: NancyLebovitz 31 October 2010 03:41:12PM 2 points [-]

What do you mean by psychopathy?

At least one sort of no-empathy person is unusually good at manipulating most people.

Comment author: NihilCredo 31 October 2010 03:51:45PM 1 point [-]

Everybody who is known to be a psychopath is a bad psychopath, by definition; a skilled psychopath is one who will not let people figure out that he's a psychopath.

Of course, this means that the existence of a sufficiently skilled psychopath is, in everyday practice, unprovable and unfalsifiable (at least to the degree that we cannot tell the difference between a good actor and someone genuinely feeling empathy; I suppose you might figure out something by measuring people's brain activity while they watch a torture scene).

Comment author: wedrifid 01 November 2010 01:25:11AM 4 points [-]

I suppose you might figure out something by measuring people's brain activity while they watch a torture scene

Even then it is far from definitive. Experienced doctors, for example, lose much of the ability to feel certain kinds of physical empathy - their brains will look closer to a good actor's brain than to that of a naive individual exposed to the same stimulus. That's just practical adaptation, and good for patient and practitioner alike.

Comment author: NancyLebovitz 01 November 2010 10:52:39AM *  1 point [-]

Considering the number of horror stories I've heard about doctors who just don't pay attention, I'm not sure you're right that doctors acting their empathy is good for patients.

Cite? I'm curious about where and when that study was done.

Comment author: wedrifid 01 November 2010 10:38:14PM 0 points [-]

Cite? I'm curious about where and when that study was done.

Don't know. Never saw it first hand - I heard it from a doctor.

Comment author: NancyLebovitz 02 November 2010 01:31:16AM 2 points [-]

Thanks for your reply, but I think I'm going to push for some community norms for sourcing information from studies, ranging from "read the whole thing carefully" to "heard about it from someone".

Comment author: wedrifid 02 November 2010 07:04:49AM 4 points [-]

Only on lesswrong - we look down our noses at people who take the word of medical specialists.

Comment author: wedrifid 01 November 2010 01:18:17AM 3 points [-]

I'll add that at particularly high levels of competence it makes very little difference whether you are a psychopath who has mastered the deception of others or a hypocrite (normal person) who has mastered deception of yourself.

Comment author: timtyler 30 October 2010 07:23:13PM *  7 points [-]

One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.

That is probably because you don't share a definition of intelligence with most of those here.

Perhaps look through http://www.vetta.org/definitions-of-intelligence/ - and see if you can find your position.

Comment author: mwaser 01 November 2010 01:43:47PM -2 points [-]

Nope. I agree with the vast majority of the vetta definitions.

But let's go with Marcus Hutter - "There are strong arguments that AIXI is the most intelligent unbiased agent possible in the sense that AIXI behaves optimally in any computable environment."

Now, which is more optimal -- opting to play a positive-sum game of potentially infinite length and utility with cooperating humans OR passing up the game forever for a modest short-term gain?

Assume, for the purposes of argument, that the AGI does not have an immediate pressing need for the gain (since we could then go into a recursion of how pressing is the need -- and yes, if the need is pressing enough, the intelligent thing to do unless the agent's goal is to preserve humanity is to take the short-term gain and wipe out humanity -- but how would a super-intelligent AGI have gotten itself into that situation?). This should answer all of the questions about "Well, what if the AGI had a short-term preference and humans weren't it".
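The opportunity-cost argument above can be made concrete with a toy discounted-payoff comparison. All of the numbers below (per-round payoff, one-shot defection payoff, discount factor) are hypothetical illustrations, not values taken from the thread:

```python
# Toy comparison: the discounted value of an indefinitely repeated
# positive-sum game versus a one-shot defection payoff.
# Every number here is invented for illustration.

def discounted_value(per_round_payoff, discount, rounds):
    """Sum of per_round_payoff * discount**t over t = 0 .. rounds-1."""
    return sum(per_round_payoff * discount**t for t in range(rounds))

cooperate_per_round = 1.0   # modest gain each round from cooperating with humans
defect_once = 50.0          # larger immediate gain from defecting once
discount = 0.99             # how heavily the agent values future rounds

# As rounds grow, the geometric sum approaches
# cooperate_per_round / (1 - discount) = 100 for these numbers,
# which exceeds the one-shot defection payoff of 50.
long_run = discounted_value(cooperate_per_round, discount, 10_000)
print(long_run > defect_once)  # True for these particular numbers
```

Whether cooperation dominates depends entirely on the payoffs and the discount factor chosen; that sensitivity is exactly why the conclusion is contested in the surrounding replies.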

Comment author: gwern 01 November 2010 05:13:54PM 3 points [-]

I am jumping in here from Recent Comments, so perhaps I am missing context - but how is AIXI interacting with humanity an infinite positive-sum gain for it?

It doesn't seem like AIXI could even expect zero-sum gains from humanity: we are using up a lot of what could be computronium.

Comment author: timtyler 01 November 2010 09:08:15PM *  2 points [-]

That definition doesn't explicitly mention goals. Many of the definitions do explicitly mention goals. What the definitions usually don't mention is what those goals are - and that permits super-villains, along the lines of General Zod.

If (as it appears) you want to argue that evolution is likely to produce super-saints - rather than super-villains - then that's a bit of a different topic. If you wanted to argue that, "requirement" was probably the wrong way of putting it.

Comment author: Perplexed 31 October 2010 02:39:57AM 4 points [-]

One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.

Now if you had suggested that intelligence cannot evolve beyond a certain point unless accompanied by empathy ... that would be another matter. I could easily be convinced that a social animal requires empathy almost as much as it requires eyesight, and that non-social animals cannot become very intelligent because they would never develop language.

But I see no reason to think that an evolved intelligence would have empathy for entities with whom it had no social interactions during its evolutionary history. And no a priori reason to expect any kind of empathy at all in an engineered intelligence.

Which brings up an interesting thought. Perhaps human-level AI already exists. But we don't realize it because we have no empathy for AIs.

Comment author: timtyler 31 October 2010 09:21:53AM 1 point [-]

The most likely location for an "unobserved" machine intelligence is probably the NSA's basement.

However, it seems challenging to believe that a machine intelligence would need to stay hidden for very long.

Comment author: timtyler 01 November 2010 10:00:42PM *  0 points [-]

But I see no reason to think that an evolved intelligence would have empathy for entities with whom it had no social interactions during its evolutionary history.

MIT's Leonardo? Engineered super-cuteness!


Comment author: Mass_Driver 01 November 2010 01:52:22PM 2 points [-]

Well, it does contain all those points, but some weird points are weighted much less heavily.