Bakkot comments on Welcome to Less Wrong! (2012) - Less Wrong

Post author: orthonormal 26 December 2011 10:57PM

You are viewing a single comment's thread.

Comment author: Bakkot 01 January 2012 10:45:02PM 2 points [-]

If we do make an FAI and encode it with some idealised version of our own morality, do we want a rule that says 'Kill everything that looks unlike yourself'?

No. But we do want a rule that says something like "the closer things are to being people, the more importance should be given to them". As a consequence of this rule, I think it should be legal to kill your newborn children.

How sure are you this isn't going to come back and bite you in the arse?

I'm observably a person. Any AI which concluded otherwise is probably already so dangerous that worrying about how my opinions stated here would affect it is probably completely pointless. So... pretty sure.

Oh, and I'm not encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).

Comment author: Estarlio 02 January 2012 12:42:58AM -1 points [-]

You did not answer me on the human question - how we'd like powerful humans to think.

No. But we do want a rule that says something like "the closer things are to being people, the more importance should be given to them". As a consequence of this rule, I think it should be legal to kill your newborn children.

This sounds fine as long as you and everything you care about are, and always will be, included in the group of 'people'. However, by your own admission (earlier in the discussion, to wedrifid), you've defined people in terms of how closely they realise your ideology:

Extremely young children are lacking basically all of the traits I'd want a "person" to have.

You've made it something fluid: a matter of mood and convenience. If I make an AI and tell it to save only 'people', it can go horribly wrong for you - maybe you're not part of what I mean by 'people'. Maybe by people I mean those who believe in some religion or other. Maybe I mean those who are close to a certain processing capacity - and then what happens to those who exceed that capacity? And surely the AI itself would do so...

There are a lot of ways it can go wrong.

I'm observably a person.

You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.

Any AI which concluded otherwise is probably already so dangerous that worrying about how my opinions stated here would affect it is probably completely pointless. So... pretty sure.

The opinion you state may influence what sort of AI you end up with. And at the very least it seems liable to influence the sort of people you end up with.

Oh, and I'm not encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).

-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.

Comment author: Bakkot 02 January 2012 01:53:28AM 1 point [-]

You did not answer me on the human question - how we'd like powerful humans to think.

I want powerful humans to have a rule like "the closer things are to being people, the more importance should be given to them".

they realise your ideology

I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn't at all mean that, just because I would like people to be nice to each other and so on, I would consider people who aren't nice not to be people. I'd intended to convey this distinction by the quotation marks.

There are a lot of ways it can go wrong.

Obviously. There's a lot of ways any AI can go wrong. But you have to do something. Is your rule "don't kill humans"? For what definition of human, and isn't that going to be awfully unfair to aliens? I think "don't kill people" is probably about as good as you're going to do.

You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.

I don't want the rule to be "don't kill people" for whatever values of "kill" and "people" you have in your book. For all I know you're going to interpret this as something I'd understand more like "don't eat pineapples". I want the rule to be "don't kill people" with your definitions in accordance with mine.

-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.

If you don't understand the distinction between "legal" and "encouraged", we're going to have a very difficult time communicating.

Comment author: wedrifid 02 January 2012 01:56:55AM *  2 points [-]

I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies.

How did I misinterpret? I read that you don't include babies and I said that I do include babies. That's (preference) disagreement, not a problem with interpretation.

Comment author: Bakkot 02 January 2012 02:21:27AM *  1 point [-]

Most adults don't have traits I'd want a "person" to have. At least with babies there is a chance they'll turn out as worthwhile people.

This line gave me the impression that you thought I was saying I want my definition of "person", for the moral calculus, to include things like "worthwhile". Which was not what I was saying -

I wasn't saying anything about the desirability of traits for people in general. I was talking about the desirability of traits in the definition of the word "person", so that it would be an accurate and useful definition.

I'd want my definition of the word "person" to be such that it included virtually all adults (eta: but also thinking aliens, and certain strong AIs), but not, say, pigs. This makes it difficult to also include babies.

Comment author: wedrifid 02 January 2012 02:34:28AM 0 points [-]

This line gave me the impression that you thought I was saying I want my definition of "person", for the moral calculus, to include things like "worthwhile". Which was not what I was saying -

Intended as a tangential observation about my perceptions of people. (Some of them really are easier for me to model as objects running a machiavellian routine.)

Comment author: Bakkot 02 January 2012 02:46:57AM 0 points [-]

Ah, my mistake. (That's what I'd originally figured, but then Estarlio seemed to be saying the same thing, so I thought perhaps I'd been unclear.)

Comment author: Multiheaded 02 January 2012 08:58:50AM 1 point [-]

If you don't understand the distinction between "legal" and "encouraged", we're going to have a very difficult time communicating.

"Encouraged" is very clearly not absolute but relative here, "somewhat less discouraged than now" can just be written as "encouraged" for brevity's sake.

Comment author: Estarlio 02 January 2012 09:21:23PM -1 points [-]

I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn't at all mean that, just because I would like people to be nice to each other and so on, I would consider people who aren't nice not to be people. I'd intended to convey this distinction by the quotation marks.

How are you deciding whether your definition is reasonable?

Obviously. There's a lot of ways any AI can go wrong. But you have to do something. Is your rule "don't kill humans"? For what definition of human, and isn't that going to be awfully unfair to aliens? I think "don't kill people" is probably about as good as you're going to do.

'Don't kill anything that can learn' springs to mind as a safer alternative - were I inclined to program this stuff in directly, which I'm not.

I don’t expect us to be explicitly declaring these rules, I expect the moral themes prevalent in our society - or at least an idealised model of part of it - will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.

In either case, I don't expect us to be in charge. Which makes me kinda concerned when people talk about how we should be fine with going around offing the lesser life-forms.

I don't want the rule to be "don't kill people" for whatever values of "kill" and "people" you have in your book. For all I know you're going to interpret this as something I'd understand more like "don't eat pineapples". I want the rule to be "don't kill people" with your definitions in accordance with mine.

Yet my definitions are not in accordance with yours. And, if I apply the rule that I can kill everything that’s not a person, you’re not going to get the results you desire.

It’d be great if I could just say ‘I want you to do good - with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions - AIs may well grow up with different definitions - and if you've got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.

Comment author: Bakkot 04 January 2012 07:19:42PM *  3 points [-]

How are you deciding whether your definition is reasonable?

In the standard way. Or if you'd prefer it written out, there's a bunch of things in my mind for which the label "person" seems appropriate - including, say, humans, strong AIs, and thinking aliens. There's also a bunch of things for which said label seems inappropriate - say, pigs, chess-playing computer programs, and rocks. On consideration, babies seem not to share the important common characteristics of the first set nearly as much as they do the second; as such the label "person" seems inappropriate for babies.

‘Don’t kill anything that can learn,’ springs to mind as a safer alternative - were I inclined to program this stuff in directly, which I'm not.

Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?
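(The "learning" such a filter does is, concretely, just counting word frequencies and comparing likelihoods. A minimal sketch, purely illustrative - no real filter is this simple, and every name here is invented:)

```python
from collections import Counter
import math

class NaiveBayesFilter:
    """Toy Bayesian spam filter: 'learns' word frequencies from labeled mail."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Learning here is nothing more than tallying words per label.
        for word in text.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def score(self, text):
        # Log-likelihood ratio with add-one smoothing; positive means "spam".
        ratio = 0.0
        for word in text.lower().split():
            p_spam = (self.counts["spam"][word] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][word] + 1) / (self.totals["ham"] + 2)
            ratio += math.log(p_spam / p_ham)
        return ratio
```

(After training on a handful of labeled messages, "cheap viagra" scores positive and "meeting at noon" scores negative; whether that tallying deserves the moral weight of "learning" is exactly the question being pressed.)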

I don’t expect us to be explicitly declaring these rules, I expect the moral themes prevalent in our society - or at least an idealised model of part of it - will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.

Agreed, but I also think said idealized themes should be kept as simple as practical, so we're not constantly inserting odd corner-cases. This is partially because simple things are easier to understand and partially because odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren't conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.

It’d be great if I could just say ‘I want you to do good - with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions - AIs may well grow up with different definitions - and if you've got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.

You think I'm going to try to program an AI in English?

Happily this isn't really a problem for the current debate, because I'm communicating exclusively with people who seem to share nearly all of my priors, definitions, and moral axioms already.

So let me break this down a bit.

If you don't think "don't kill people" is a good broad moral rule (setting aside small distinctions in our definitions, because we do probably agree on almost all counts), my task is to try to understand how you arrived at a different conclusion than I did.

If you do think babies are people, my task is to try to understand whether you've organized your space of all possible things which could be described in some different way than I have or if, as I suspect is the common case (this is not to be taken to apply to LW readers), you've just drawn your boundaries wrong.

If you do think the rule should be "don't kill people" and that babies aren't people, then my task is either to understand why you don't feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.

(I think almost everyone falls into either the "wrong boundaries" case or the "would agree that infanticide should be legal if they thought about it" case.)

edit: clarity

Comment author: dlthomas 04 January 2012 07:37:20PM 4 points [-]

If you do think both of the above things, then my task is either to understand why you don't feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.

I'm not certain whether or not it's germane to the broader discussion, but "think X is immoral" and "think X should be illegal" are not identical beliefs.

Comment author: Bakkot 04 January 2012 07:41:09PM *  1 point [-]

Oh, agreed, and I didn't mean to imply otherwise. A lot of people in this thread seem to have concluded that while infanticide might not be immoral in itself it should be illegal for reasons to do with Schelling points or increased risk of developing sadistic tendencies. These are perfectly good reasons to not feel that infanticide should be legal despite agreeing with both propositions I listed.

Comment author: TheOtherDave 04 January 2012 08:19:33PM 2 points [-]

I was with you, until your summary.

Suppose hypothetically that I think "don't kill people" is a good broad moral rule, and I think babies are people.
It seems to follow from what you said that I therefore ought to agree that infanticide should be legal.

If that is what you meant to say, then I am deeply confused. If (hypothetically) I think babies are people, and if (hypothetically) I think "don't kill people" is a good law, then all else being equal I should think "don't kill babies" is a good law. That is, I should believe that infanticide ought not be any more legal than murder in general.

It seems like one of us dropped a negative sign somewhere along the line. Perhaps it was me, but if so, I seem incapable of finding it again.

Comment author: Bakkot 04 January 2012 08:34:17PM 1 point [-]

Quite right, I had an extra negative in there - the people I need to talk to aren't those who think babies are not people, because we already agree. Fixed, thanks.

Comment author: TheOtherDave 04 January 2012 08:36:50PM 1 point [-]

Oh good! I don't usually nitpick about such things, but you had me genuinely puzzled.

Comment author: Strange7 05 June 2012 04:52:40AM 0 points [-]

Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?

If I were programming an AI to be a perfect world-guiding moral paragon, I'd rather have it keep the spam filter in storage (the equivalent of a retirement home, or cryostasis) than delete it for the crime of obsolescence. Digital storage space is cheap, and getting cheaper all the time.

Comment author: Estarlio 03 June 2012 05:52:38PM *  -1 points [-]

Somewhat late, I must have missed this reply agessss ago when it went up.

there's a bunch of things in my mind for which the label "person" seems appropriate [...] There's also a bunch of things for which said label seems inappropriate

That's not a reasoned way to form definitions that have any more validity as referents than lists of what you approve of. What you're doing is referencing your feelings and seeing what the objects of those feelings have in common. It so happens that I feel that infants are people. But we're not doing anything particularly logical or reasonable here - we're not drawing our boundaries using different tools. One of us just thinks they belong on the list and the other thinks they don't.

If we try to agree on a common list: well, you're agreeing that aliens and powerful AIs go on the list - so biology isn't the primary concern. If we try to draw a line through the commonalities, what are we going to get? All of them seem able to gather, store, process and apply information to some ends. Even infants can - they're just not particularly good at it yet.

Conversely, what do all your other examples have in common that infants don't?

Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?

Arguably that would be a good heuristic to keep around. I don't know that I'd call it a moral wrong - there's not much reason to talk about morals when we can just say 'discouraged in society' and have everyone on the same page. But you would probably do well to have a reluctance to destroy it. One day someone vastly more complex than you may well look on you in the same light you look on your spam filter.

[...] odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren't conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.

I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn't have lasted very long. Infants are significant investments of time and resources. Offing your infants is a sign that there's something emotionally maladjusted in you – by the standards of the needs of society. If we'd not had the precept, and magically appeared out of nowhere, I think we'd have invented it pretty quick.

You think I'm going to try to program an AI in English?

Not really about you specifically. But, in general - yeah, more or less. Maybe not write the source code, but instruct it. English, or uploads, or some other incredibly high-level language with a lot of horrible dependencies built into its libraries (or concepts, or what have you) that the person using it barely understands themselves. Why? Because it will be quicker. The person who just tells the AI to guess what they mean by 'good' skips the step of having to calculate it themselves.

Comment author: Bakkot 04 June 2012 10:47:58PM *  1 point [-]

Five months later...

What you're doing is referencing your feelings and seeing what the objects of those feelings have in common. [...] One of us just thinks they belong on the list and the other thinks they don't. [...] If we try to draw a line through the commonalities what are we going to get? [...] Conversely, what do all your other examples have in common that infants don't?

These all seem to indicate a bit of confusion about what I'm trying to do.

It seems to me that this thread of the debate has come down to "Should we consider babies to be people?" There are, broadly, two ways of settling this question: moving up the ladder of abstraction, or moving down. That is, we can answer this by attempting to define 'people' in terms of other, broader terms (this being the former case) or by defining 'people' via the listing of examples of things which we all agree are or are not people and then trying to decide by inspection in which category 'babies' belong.

Especially in light of the last line I've quoted above, you appear to be attempting the first method (and to be assuming I'm doing the same). In my experience, this method almost always ensures more confusion, not less, so I'm staying as far away from it as possible.

Instead I am attempting the second method. Here are several things I think are people: [adult] humans, strong AIs, thinking aliens. Here are several things I think are not people: pigs, chess-playing computer programs, rocks, dead humans. I am not going to attempt to suss the defining characteristics and commonalities of either category, because that would be an example of applying the first method and tends, as I say, to result only in more confusion.

Now, using only these categories (and in particular not using our feelings about whether or not babies are people), it seems to me that babies are less similar to the members of the first set than they are to the second. As such it seems that we ought to conclude babies are not people.
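(This second method - deciding a borderline case by which agreed example set it sits nearer to - is essentially nearest-neighbour classification. A minimal sketch under loud assumptions: every feature and score below is invented for illustration, and assigning explicit features at all is precisely the step I decline to take above, so this shows only the mechanism, not the actual judgment:)

```python
import math

# Hypothetical feature scores: (language use, planning, self-awareness).
# All numbers are invented purely for illustration.
EXAMPLES = {
    "adult human":    ((0.9, 0.9, 0.9), "person"),
    "strong AI":      ((1.0, 1.0, 1.0), "person"),
    "thinking alien": ((0.9, 0.8, 0.9), "person"),
    "pig":            ((0.1, 0.3, 0.2), "not a person"),
    "chess program":  ((0.0, 0.6, 0.0), "not a person"),
    "rock":           ((0.0, 0.0, 0.0), "not a person"),
}

def classify(features):
    """Label a borderline case by its nearest agreed-upon example."""
    _, label = min(EXAMPLES.values(),
                   key=lambda ex: math.dist(features, ex[0]))
    return label
```

(Under these made-up scores a "baby" vector like (0.1, 0.1, 0.3) lands nearest the non-person examples; anyone who disputes the scores, or the choice of features, disputes the conclusion.)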


Worth repeating, I think: I am not going to attempt to list defining characteristics of the "people" and "not people" categories. Unless you can come up with what you think is a complete definition, that's not going to get us anywhere.


There are, at this point, several particulars on which you might disagree.

  • You might think that this method of defining words is in some way invalid, especially in light of how difficult it would be to use this method with an AI's programming. [If this is the case, I challenge you to write a piece of software that, without appealing to you, identifies objects as people or non-people the same way you do it. Can't do this? Then we'd best stick with what tools we have available to us.]
  • You might think that some of {humans, strong AIs, thinking aliens} are not people. [If this is the case, let's keep looking for a set we can agree on.]
  • You might think that some of {pigs, chess-playing computer programs, rocks, dead humans} are people. [If this is the case, let's keep looking for a set we can agree on. Or you could announce that you're now, for moral reasons, vegetarian, in which case our moralities are likely irreconcilable without vastly more work.]
  • You might think that babies are more similar to members of the first category than members of the second. [If this is the case, we're probably at an impasse. You or I might attempt to convince the other by expanding the sets I've outlined above with members we both agree belong until the similarities and differences are sufficiently stark - but frankly, at this point, it seems quite obvious that babies belong in the second category.]

A misc. point (the important part is above):

I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn't have lasted very long.

This is entirely orthogonal to the point I was trying to make. Keep in mind, most societies invented misogyny pretty quick too. Rather, I doubt that you personally, raised in a society much like this one except without the taboo on killing infants, would have come to the conclusion that killing infants is a moral wrong.

There's a specific implication in your response about "the needs of society" to which I could respond but which I'm not going to unless prompted; I hope the above has dealt with that.

Comment author: wmorgan 05 June 2012 12:09:41AM *  0 points [-]

Consider this set:

A sleeping man. A cryonics patient. A nonverbal 3-year-old. A drunk, passed out.

I think these are all people, they're pretty close to babies, and we shouldn't kill any of them.

The reason they all feel like babies to me, from the perspective of "are they people?", is that they're in a condition where we can see a reasonable path for turning them into something that is unquestionably a person.

EDIT: That doesn't mean we have to pay any cost to follow that path -- the value we assign to a person's life can be high but must be finite, and sometimes the correct, moral decision is to not pay that price. But just because we don't pay that cost doesn't mean it's not a person.

I don't think the time frame matters, either. If I found Fry from Futurama in the cryostasis tube today, and I killed him because I hated him, that would be murder even though he isn't going to talk, learn, or have self-awareness until the year 3000.

Gametes are not people, even though we know how to make people from them. I don't know why they don't count.

EDIT: oh shit, better explain myself about that last one. What I mean is that it is not possible to murder a gamete -- they don't have the moral weight of personhood. You can, potentially, in some situations, murder a baby (and even a fetus): that is possible to do, because they count as people.

Comment author: Bakkot 06 June 2012 03:20:51AM *  1 point [-]

Posted this above as well.

The reason they all feel like babies to me, from the perspective of "are they people?", is that they're in a condition where we can see a reasonable path for turning them into something that is unquestionably a person.

Here's another case to consider:

I assume you've granted that sufficiently advanced AIs ought to be counted as people. Say that I have running on my computer a script which is compiling an AI's source, and which will launch the resultant executable as soon as compilation finishes with no intervention on my part.

Am I killing a person if I terminate this script before compilation completes? That is, does "software which will compile and run an AI" belong to the "people" or the "not people" group?

I think babies are much closer to this than to any of the examples you've listed above.


In the interests of settling confusion, here's another example:

Suppose we let the above script finish and the AI go about its merry way for a few centuries. We shut down the computer it's running on - writing its current state to non-volatile memory - to transport it somewhere else. To me it seems that destroying that memory would constitute killing a person.

From these examples, I think "will become a person" is only significant for objects which were people in the past. This handles all of the examples you list (leaving aside 3-year-olds, which are too close to the issue at hand), as well as explaining why I don't think interrupting compilation as above is killing a person but destroying the state of a running-but-paused AI does.


Questions for you:

  • Does interrupting compilation as above seem to you like killing someone?
  • If not, do you still think babies are closer to the examples you list than to this example?
  • If not, do you still think babies are people?
  • If so, can you think of some other example which we can both readily agree is a person (or not a person) which can help settle this?

Comment author: wmorgan 07 June 2012 12:49:05AM 0 points [-]

I've never seen a compiling AI, let alone an interrupted one, even in fiction, so your example isn't very available to me. I can imagine conditions that would make it OK or not OK to cancel the compilation process.

This is most interesting to me:

From these examples, I think "will become a person" is only significant for objects which were people in the past

I know we're talking about intuitions, but this is one description that can't jump from the map into the territory. We know that the past is completely screened off by the present, so our decisions, including moral decisions, can't ultimately depend on it. Ultimately, there has to be something about the present or future states of these humans that makes it OK to kill the baby but not the guy in the coma. Could you take another shot at the distinction between them?

Comment author: Nornagest 05 June 2012 01:09:30AM 1 point [-]

This question is fraught with politics and other highly sensitive topics, so I'll try to avoid getting too specific, but it seems to me that thinking of this sort of thing purely in terms of a potentiality relation rather misses the point. A self-extracting binary, a .torrent file, a million lines of uncompiled source code, and a design document are all, in different ways, potential programs, but they differ from each other both in degree and in type of potentiality. Whether you'd call one a program in any given context depends on what you're planning to do with it.

Comment author: Jayson_Virissimo 05 June 2012 05:16:00AM *  0 points [-]

Gametes are not people, even though we know how to make people from them.

I'm not at all sure a randomly selected human gamete is less likely to become a person than a randomly selected cryonics patient (at least, with currently-existing technology).

Comment author: Strange7 05 June 2012 02:42:21AM 0 points [-]

Might be better to talk about this in terms of conversion cost rather than probability. To turn a gamete into a person you need another gamete, $X worth of miscellaneous raw materials (including, but certainly not limited to, food), and a healthy female of childbearing age. She's effectively removed from the workforce for a predictable period of time, reducing her probable lifetime earning potential by $Y, and has some chance of various medical complications, which can be mitigated by modern treatments costing $Z but even then works out to some number of QALYs in reduced life expectancy. Finally, there's some chance of the process failing and producing an undersized corpse, or a living creature which does not adequately fulfill the definition of "person."

In short, a gamete isn't a person for the same reason a work order and a handful of plastic pellets aren't a street-legal automobile.

Comment author: Alicorn 05 June 2012 12:31:23AM 0 points [-]

Gametes are not people, even though we know how to make people from them, because the chance that any given sex cell ever becomes a person is so slim.

What's the cutoff probability?

Comment author: wmorgan 05 June 2012 04:50:28AM 0 points [-]

You are right; retracted.

Comment author: Estarlio 05 June 2012 03:23:51AM *  -1 points [-]

Five months later...

Yeah, a lack of reply notification's a real pain in the rear.


It seems to me that this thread of the debate has come down to "Should we consider babies to be people?" There are, broadly, two ways of settling this question: moving up the ladder of abstraction, or moving down. That is, we can answer this by attempting to define 'people' in terms of other, broader terms (this being the former case) or by defining 'people' via the listing of examples of things which we all agree are or are not people and then trying to decide by inspection in which category 'babies' belong.

Edit: You can skip to the next break line if you're not interested in reading about the methodological component so much as you are continuing the infants argument.

What we're doing here, ideally, is pattern matching. I present you with a pattern and part of that pattern is what I'm talking about. I present you with another pattern where some things have changed and the parts of the pattern I want to talk about are the same in that one. And I suppose to be strict we'd have to present you with patterns that are fairly similar and express disapproval for those.

Because we have a large set of existing patterns that we both know about - properties - it's a lot quicker to make reference to some of those patterns than it is to continue to flesh out our lists to play guess the commonality. We can still do it both ways, as long as we can still head back down the abstraction pile fairly quickly. Compressing the search space by abstract reference to elements of patterns that members of the set share, is not the same thing as starting off with a word alone and then trying to decide on the pattern and then fit the members to that set.

If you cannot do that exercise - if you cannot explicitly declare at least some of the commonalities you're talking about - then it leads me to believe that your definition is incoherent. The odds that, with our vast set of shared patterns, with a language that allows us to do this compression, you can't come up with at least a fairly rough definition fairly quickly seem remote.

If I wanted to define humans, for instance: "Most numerous group of bipedal tool users on Earth." That was a lot quicker than having to define humans by providing examples of different creatures. We can only think the way we do because we have these little compression tricks that let us leap around the search space; abstraction doesn't have to lead to more confusion, as long as your terms refer to things that people have experience with.

Whereas if I provided you with a selection of human genetic structures - while my terms would refer exactly, while I'd even be able to stick you in front of a machine and point to them directly - would you even recognise them without going to a computer? I wouldn't. The reference falls beyond the level of my experience.

I don't see why you think my definition needs to be complete. We have very few exact definitions for anything; I couldn't exactly define what I mean by human. Even by reference to genetic structure I've no idea where it would make sense to set the deviation from any specific example that makes you human or not human.


But let's go with your approach:

It seems to me that mentally disabled people belong on the people list. And babies seem more similar to mentally disabled people than they do to pigs and stones.


This is entirely orthogonal to the point I was trying to make. Keep in mind, most societies invented misogyny pretty quick too. Rather, I doubt that you personally, raised in a society much like this one except without the taboo on killing infants, would have come to the conclusion that killing infants is a moral wrong.

Well, no, but you could make that argument about anything. I, raised in a society just like this one but without taboo X, would never create taboo X on my own; taboos are created by their effects on society. It's the fact that society would not have been like this one without taboo X that makes it a taboo in the first place.

Comment author: Bakkot 06 June 2012 03:10:59AM 0 points [-]

Compressing the search space by abstract reference to elements of patterns that members of the set share, is not the same thing as starting off with a word alone and then trying to decide on the pattern and then fit the members to that set.

Sure, there are obvious common threads: for example, the ability to learn, the ability to make decisions, the capacity for language, and the capacity for introspection. Though these are neither collectively necessary nor sufficient, of course.

I don't see why you think my definition needs to be complete.

The problem is that, lacking a complete definition, this exercise doesn't actually help much in any particular case where there might be doubt. If our list of properties handled all cases, it would be a complete definition; it is precisely because no complete definition is in hand that we need to move down the ladder of abstraction to gain clarity.

I can come up with a rough definition, but rough definitions fail in exactly those cases where there is potential disagreement.


It seems to me that mentally disabled people belong on the people list.

I'm going to assume you meant "humans" rather than "people", because otherwise that's not very illustrative.

But there are certainly levels of mental disability beyond which a human ought not to be considered a person, no? If we removed the entire brain, say, and kept the body alive through pacemakers and so forth. That doesn't seem like a person at all (at least to me - do you disagree?). Shall we say instead that we include mentally disabled humans above a certain level of functioning? The problem then is that babies almost certainly fall well below that threshold, wherever you might set it.


Another hypothetical, which I developed in response to wmorgan's comment below (where I'm also posting this bit in a moment):

I assume you've granted that sufficiently advanced AIs ought to be counted as people. Say that I have running on my computer a script which is compiling an AI's source, and which will launch the resultant executable as soon as compilation finishes with no intervention on my part.

Am I killing a person if I terminate this script before compilation completes? That is, does "software which will compile and run an AI" belong to the "people" or the "not people" group?

(If it's not clear, I think the answer is "not people".)


I raised in a society just like this one but without taboo X would never create taboo X on my own, taboos are created by their effects on society.

Really? It seems to me that someone did invent the taboo[1] on, say, slavery.

The point I'm trying to make here is that if you started with your current set of rules minus the rule about "don't rape people" (not to say your hypothetical morals view it as acceptable, merely undecided), I think you could quite naturally come to conclude that rape was wrong. But it seems to me that this would not be the case if instead you left out the rule about "don't kill babies".

[1] (It's possible some confusion is arising here from my use of "taboo" when what I really mean to say is "widely shared personal moral conviction against".)

Comment author: Estarlio 09 June 2012 03:43:57AM -1 points [-]

I can come up with a rough definition, but rough definitions fail in exactly those cases where there is potential disagreement.

Eh, functioning is a very rough definition and we've got to that pretty quickly.


So will we rather say that we include mentally disabled humans above a certain level of functioning? The problem then is that babies almost certainly fall well below that threshold, wherever you might set it.

Well, the question is whether food animals fall beneath the level of babies. If they do, then I can keep eating them happily enough; if they don't, I've got the dilemma as to whether to stop eating animals or start eating babies.

And it's not clear to me, without knowing what you mean by functioning, that pigs or cows are more intelligent than babies - I've not seen one do anything that suggests it. Predatory animals - wolves and the like - on the other tentacle, are obviously more intelligent than a baby.

As to how I'd resolve the dilemma if it did occur, I'm leaning more towards stopping eating food animals than starting to eat babies. Despite the fact that food animals are really tasty, I don't want to put a precedent in place that might get me eaten at some point.


I assume you've granted that sufficiently advanced AIs ought to be counted as people.

By fiat - sufficiently advanced for what? But I suppose I'll grant any AI that can pass the Turing test qualifies, yes.

Am I killing a person if I terminate this script before compilation completes? That is, does "software which will compile and run an AI" belong to the "people" or the "not people" group?

That depends on the nature of the script. If it's just performing some relatively simple task over and over, then I'm inclined to agree that it belongs in the not people group. If it is itself as smart as, say, a wolf, then I'm inclined to think it belongs in the people group.


Really? It seems to me that someone did invent the taboo[1] on, say, slavery.

I suppose what I really mean to say is that they're taboos because the taboo has some desirable effect on society.

The point I'm trying to make here is that if you started with your current set of rules minus the rule about "don't rape people" (not to say your hypothetical morals view it as acceptable, merely undecided), I think you could quite naturally come to conclude that rape was wrong. But it seems to me that this would not be the case if instead you left out the rule about "don't kill babies".

It seems to me that babies are quite valuable, and became so as their survival probability went up. In the olden days infanticide was relatively common - as was death in childbirth. People had a far more casual attitude towards the whole thing.

But as the survival probability went up, the investment people made, and were expected to make, in individual children went up - and when that happened, infanticide became a sign of maladaptive behaviour.

Though I doubt they'd have put it in these terms: people recognised a poor gambling strategy and wondered what was wrong with the person.

And I think it would be the same in any advanced society.

Comment author: NancyLebovitz 13 June 2012 02:59:49AM 0 points [-]

Figuring out how to define human (as in "don't kill humans") so as to include babies is relatively easy, since babies are extremely likely to grow up into humans.

The hard question is deciding which transhumans - including types not yet invented, possibly types not yet thought of, and certainly types only imagined in a sketchy, abstract way - can reasonably be considered entities which shouldn't be killed.