Eliezer_Yudkowsky comments on Nonperson Predicates - Less Wrong

29 points · Post author: Eliezer_Yudkowsky · 27 December 2008 01:47AM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comment author: Eliezer_Yudkowsky 27 December 2008 02:24:16AM 18 points

Because for the AI to figure out this problem without creating new people within itself, it has to understand consciousness without ever simulating anything conscious.
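To make the shape of that requirement concrete, here is a minimal illustrative sketch (every name below, and the toy "state count" test, is hypothetical and invented for illustration, not anything from the post): a nonperson predicate is only ever allowed to answer "definitely not a person" or "don't know", and the AI refuses to run any model it cannot clear.

```python
from enum import Enum

class Verdict(Enum):
    DEFINITELY_NOT_A_PERSON = 1
    DONT_KNOW = 2

class RefusedSimulation(Exception):
    pass

def is_simple_enough_to_rule_out(model) -> bool:
    # Placeholder criterion: stands in for whatever test could certify that
    # a computation is too simple to contain a mind. Nobody knows how to
    # write the real test; that is the open problem under discussion.
    return getattr(model, "state_count", float("inf")) < 1000

def nonperson_predicate(model) -> Verdict:
    # Allowed exactly two answers: "definitely not a person" only when that
    # is certain, and "don't know" in every other case. It never has to
    # positively identify a person.
    if is_simple_enough_to_rule_out(model):
        return Verdict.DEFINITELY_NOT_A_PERSON
    return Verdict.DONT_KNOW

def run_only_if_cleared(model, simulate):
    # The AI runs a model only if some nonperson predicate clears it;
    # anything it cannot clear is refused, even at a cost in accuracy.
    if nonperson_predicate(model) is Verdict.DEFINITELY_NOT_A_PERSON:
        return simulate(model)
    raise RefusedSimulation("model might contain a person; not running it")
```

The conservative asymmetry is the point: false "don't know"s cost predictive power, but a false "not a person" is the catastrophe the predicate exists to rule out.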

Comment author: diegocaleiro 23 November 2010 01:50:57AM 3 points

An obvious yet brilliant point, which should be in the main post (and in your book), not in the replies (inferential distance to Robin Hanson is supposed to be minimal, yet...)

It is interesting that people working in AI don't want to tackle this problem. When I was Diego-2004, at the equivalent age of Eliezer-1998, I decided that The Most Important Problem was how to avoid catastrophes arising either because part of a program was conscious and suffering, or because everyone had uploaded into an unconscious machine. So I have dedicated the last six years to this impossible problem.

But unlike the other problems that interested me ("What should I do?", "What is the universe all about, anyway?", "How does the mind work?", "How can a brain be intelligent?"), this one has not become less and less impossible over time.

In fact, Chalmers' formulations of the hard problem can keep you trapped for a long time. It is very hard to see where he has made his mistakes (which almost seem deliberate).

So you can stick with Dennett and some form of monism, but that will not dissolve the problem of how to detect an unconscious AI and distinguish it from a conscious one.

Comment author: TheOtherDave 01 December 2010 04:37:27AM 9 points

I am struggling to understand how something can be a friendly AI in the first place without being able to distinguish people from non-people.

Comment author: Eliezer_Yudkowsky 20 March 2013 06:00:21AM 7 points

The boundaries between present-day people and non-people can be sharper, by a fiat of many intervening class members being nonexistent, than the ideal categories. In other words, except for chimpanzees, cryonics patients, Terry Schiavo, and babies who are exactly 1 year and 2 months and 5 days old, there isn't much that's ambiguous between person and non-person.

More to the point, a CEV-based AI has a potentially different definition of 'sentient being' and 'the class I am to extrapolate'. Theoretically you could be given the latter definition by pointing and not worry too much about boundary cases, and let it work out the former class by itself - if you were sure that the FAI would arrive at the correct answer without creating any sentients along the way!

Comment author: TheOtherDave 20 March 2013 03:02:43PM 1 point

The boundaries between present-day people and non-people can be sharper, by a fiat of many intervening class members being nonexistent, than the ideal categories.

Fair point.

More to the point, a CEV-based AI has a potentially different definition of 'sentient being' and 'the class I am to extrapolate'. Theoretically you could be given the latter definition by pointing

Mm. Theoretically, yes, I suppose someone could point to every person, and I could be constructed so as to not generalize the extrapolated class beyond the particular targets I've been given.

I'm not sure I would endorse that, but I think that gets us into questions of what the extrapolated class ought to comprise in the first place, which is a much larger and mostly tangential discussion.

So, fair enough... point taken.

Comment author: MugaSofer 24 March 2013 10:47:22PM 0 points

In other words, except for chimpanzees, cryonics patients, Terry Schiavo, and babies who are exactly 1 year and 2 months and 5 days old, there isn't much that's ambiguous between person and non-person.

Slightly off-topic, but doesn't that assume personhood is binary? I've always assumed it was a sliding scale (I care far less about a dog than about a human, but I care even less about a fly getting its wings pulled off. And even then, I care more than I do about a miniature clockwork fly.)