Estarlio comments on Welcome to Less Wrong! (2012) - Less Wrong

25 Post author: orthonormal 26 December 2011 10:57PM


Comments (1430)


Comment author: Estarlio 01 January 2012 01:16:39PM 1 point [-]

Babies aren't people by any measure I can see

Do you really think it's wise to have a precedent that allows agents of Type X to go around killing off all of the !X group? Doesn't bode well if people end up with a really sharp intelligence gradient.

Comment author: Bakkot 01 January 2012 07:01:39PM *  8 points [-]

We already have a bunch of those precedents, depending on how you look at it. You're more than free to go around killing ants. No one is going to care. You can even, depending on zoning laws, raise pigs and then slaughter them for their meat. The reason that this is just not a problem in the eyes of the law is that pigs aren't people.

If you look at it another way, we have exactly one precedent: It's generally morally OK to kill members of the !X group if and only if that group consists of agents which are not people.

ETA: I hate that I have to say this, but can people respond instead of just downvoting? I'm honestly curious as to why this particular post is controversial - or have I missed something?

Comment author: TheOtherDave 02 January 2012 05:02:02AM 6 points [-]

I haven't seen anyone respond to your request for feedback about votes, so let me do so, despite not being one of the downvoters.

By my lights, at least, your posts have been fine. Obviously, I can't speak for the site as a whole... then again, neither can anyone else.

Basically, it's complicated, because the site isn't homogenous. Expressing conventionally "bad" moral views will usually earn some downvotes from people who don't want such views expressed; expressing them clearly and coherently and engaging thoughtfully with the responses will usually net you upvotes.

Comment author: wedrifid 02 January 2012 07:34:24AM *  4 points [-]

ETA: I hate that I have to say this, but can people respond instead of just downvoting? I'm honestly curious as to why this particular post is controversial - or have I missed something?

I haven't downvoted, for what it is worth. Sure, you may be an evil baby killing advocate but it's not like I care!

Comment author: Solvent 02 January 2012 07:44:33AM 4 points [-]

but it's not I care!

I think you accidentally a word.

Comment author: [deleted] 02 January 2012 06:56:10PM 1 point [-]

ETA: I hate that I have to say this, but can people respond instead of just downvoting? I'm honestly curious as to why this particular post is controversial - or have I missed something?

I often "claim" my downvotes (aka I will post "downvoted" and then give reason.) However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.

For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant -1 on downvoting comments), and people who disagree with your reason for downvoting will also downvote you.

Also, many people on this site are just a-holes. Sorry.

Comment author: Nornagest 02 January 2012 10:13:55PM 6 points [-]

If I downvote with comment, it's usually for a fairly specific problem, and usually one that I expect can be addressed if it's pointed out; some very clear logical problem that I can throw a link at, for example, or an isolated offensive statement. I may also comment if the post is problematic for a complicated reason that the poster can't reasonably be expected to figure out, or if its problems are clearly due to ignorance.

Otherwise it's fairly rare for me to do so; I see downvotes as signaling that I don't want to read similar posts, and replying to such a post is likely to generate more posts I don't want to read. This goes double if I think the poster is actually trolling rather than just exhibiting some bias or patch of ignorance. Basically it's a cost-benefit analysis regarding further conversation; if continuing to reply would generate more heat than light, better to just downvote silently and drive on.

It's uncommon for me to receive retaliatory downvotes when I do comment, though.

Comment author: wedrifid 02 January 2012 09:08:12PM 6 points [-]

I often "claim" my downvotes (aka I will post "downvoted" and then give reason.) However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.

On the other hand if people agree with your reasons they often upvote it (especially back up towards zero if it dropped negative).

For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant -1 on downvoting comments)

I certainly hope so. I would expect that they disagree with your reasons for downvoting or else they would have not made their comment. It would take a particularly insightful explanation for your vote for them to believe that you influencing others toward thinking their contribution is negative is itself a valuable contribution.

Also, many people on this site are just a-holes. Sorry.

*arch*

Comment author: [deleted] 02 January 2012 09:17:54PM 5 points [-]

For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant -1 on downvoting comments)

I certainly hope so. I would expect that they disagree with your reasons for downvoting or else they would have not made their comment. It would take a particularly insightful explanation for your vote for them to believe that you influencing others toward thinking their contribution is negative is itself a valuable contribution.

Do you think that's a good thing, or just a likely outcome?

Downvoting explanations of downvotes seems like a really bad idea, regardless of how you feel about the downvote. It strongly incentivizes people to not explain themselves, not open themselves up for debates, but just vote and then remove themselves from the discussion.

I don't see how downvoting explanations and more explicit behavior is helpful for rational discourse in any way.

Comment author: MixedNuts 02 January 2012 09:53:48PM 3 points [-]

It strongly incentivizes people to not explain themselves, not open themselves up for debates, but just vote and then remove themselves from the discussion.

This is exactly the reaction I want to trolls, basic questions outside of dedicated posts, and stupid mistakes. Are downvotes of explanations in those cases also read as an incentive not to post explanations in general?

Comment author: [deleted] 02 January 2012 10:02:39PM 2 points [-]

Speaking for myself, yes. I read it as "don't engage this topic on this site, period".

I agree with downvoting (and ignoring) the types of comments you mentioned, but not explanations of such downvotes. The explanations don't add any noise, so they shouldn't be punished. (Maybe if they got really excessive, but currently I have the impression that too few downvotes are explained, rather than too many.)

Comment author: wedrifid 02 January 2012 09:56:48PM *  1 point [-]

Do you think that's a good thing, or just a likely outcome?

Comments can serve as calls to action encouraging others to downvote, or priming people with a negative or unintended interpretation of a comment - be it yours or that of someone else - and that influence is something to be discouraged. This is not the case with all explanations of downvotes, but it certainly describes the effect, and often intent, of the vast majority of "Downvoted because" declarations. Exceptions include explanations that are requested and occasionally reasons that are legitimately surprising or useful. Obviously also an exception is any time when you actually agree they have a point.

Comment author: TheOtherDave 02 January 2012 09:19:04PM 1 point [-]

I might well consider an explanation of a downvote on a comment of mine to be a valuable contribution, even if I continue to disagree with the thinking behind it. Actually, that's not uncommon.

Comment author: MixedNuts 02 January 2012 09:49:10PM 10 points [-]

Common reasons I downvote with no comment: I think the mistake is obvious to most readers (or already mentioned) and there's little to be gained from teaching the author. I think there's little insight and much noise - length, unpleasant style, politically disagreeable implications that would be tedious to pick apart (especially in tone rather than content). I judge that jerkishness is impairing comprehension; cutting out the courtesies and using strong words may be defensible, but using insults where explanations would do isn't.

On the "just a-holes" note (yes, I thought "Is this about me?"): It might be that your threshold for acceptable niceness is unusually high. We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness. People who want LW to be nicer usually do it by being especially nice, not by especially punishing meanness. I notice you're on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people, which is a bad thing if you love Postel's law. (Which, by Postel's law, nobody but me has to.) The only LessWronger I think is an asshole is wedrifid, and I think this is one of his good traits.

Comment author: Prismattic 02 January 2012 10:25:10PM 2 points [-]

We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness.

I think there is a difference between choosing bluntness where niceness would tend to obscure the truth, and choosing between two forms of expression which are equally illuminating but not equally nice. I don't know about anyone else, but I'm using "a-hole" here to mean "One who routinely chooses the less nice variant in the latter situation."

(This is not a specific reference to you; your comment just happened to provide a good anchor for it.)

Comment author: TheOtherDave 02 January 2012 11:25:36PM 1 point [-]

Of course, if that's the meaning, then before I judge someone to be an "a-hole" I need to know what they intended to illumine.

Comment author: [deleted] 02 January 2012 10:10:43PM *  3 points [-]

The only LessWronger I think is an asshole is wedrifid, and I think this is one of his good traits.

If he's an asshole, then "asshole" needs a new subdefinition. I love that guy.

Comment author: [deleted] 02 January 2012 10:07:04PM *  1 point [-]

I notice you're on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people,

Would you mind discussing this with me, because I find it disturbing that I come off as having double-standards, and am interested to know more about where that impression comes from. I personally feel that I do not expect better behaviour from others than I practice, but would like to know (and update my behaviour) if I am wrong about this.

I admit to lowering my level of "niceness" on LW, because I can't seem to function when I am nice and no one else is. However, MY level of being "not nice" means that I don't spend a lot of time finding ways to word things in the most inoffensive manner. I don't feel like I am exceptionally rude, and am concerned if I give off that impression.

I also feel like I keep my "punishing meanness" levels to a pretty high standard too: I only "punish" (by downvoting or calling out) what I consider to be extremely rude behavior (ie "I wish you were dead" or "X is crap.") that is nowhere near the level of "meanness" that I feel like my posts ever get near.

Comment author: MixedNuts 02 January 2012 10:45:37PM 4 points [-]

I come off as having double-standards

You come off as having single-standards. That is, I think the minimal level of niceness you accept from others is also the minimal level of niceness you practice - you don't allow wiggle room for others having different standards. I sincerely don't resent that! My model of nice people in general suggests y'all practice Postel's law ("Be liberal in what you accept and conservative in what you send"), but I don't think it's even consistent to demand that someone follow it.

extremely rude behavior (ie "I wish you were dead" or "X is crap.")

...I'm never going to live that one down, am I? Let's just say that there's an enormous amount of behaviours that I'd describe as "slightly blunter than politeness would allow, for the sake of clarity" and you'd describe as "extremely rude".

Also, while I've accepted the verdict that "<thing> is crap" is extremely rude and I shouldn't ever say it, I was taken aback at your assertion that it doesn't contribute anything. Surely "Don't use this thing for this purpose" is non-empty. By the same token, I'd actually be pretty okay with being told "I wish you were dead" in many contexts. For example, in a discussion of eugenics, I'd be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.

Maybe the lesson for you is that many people suck really bad at phrasing things, so you should apply the principle of charity harder and be tolerant if they can't be both as nice and as clear as you'd have been and choose to sacrifice niceness? The lesson I've learned is that I should be more polite in general, more polite to you in particular, look harder for nice phrasings, and spell out implications rather than try to bake them in connotations.

Comment author: Alicorn 02 January 2012 11:07:07PM 3 points [-]

For example, in a discussion of eugenics, I'd be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.

I'm fine with positions that imply I should never have been born (although I have yet to hear one that includes me), but I'd feel very differently about one implying that I should be dead!

Comment author: lessdazed 02 January 2012 11:25:42PM 2 points [-]

Many people don't endorse anything similar to the principle that "any argument for no more of something should explain why there is a perfect amount of that thing or be counted as an argument for less of that thing."

E.g. thinking that arguments that "life extension is bad" generally have no implications regarding killing people, were it to become available. So those who say I shouldn't live to be 200 are not only basically arguing I should (eventually, sooner than I want) be dead; the implication I take is often that I should be killed (in the future).

Comment author: TheOtherDave 02 January 2012 11:22:18PM 2 points [-]

Personally, I'd be far more insulted by the suggestion that I should never have been born, than by the suggestion that I should die now.

Comment author: Alicorn 02 January 2012 11:32:06PM 3 points [-]

Why?

Comment author: TheOtherDave 03 January 2012 01:38:47AM 2 points [-]

If someone tells me I should die now, I understand that to mean that my life from this point forward is of negative value to them. If they tell me I should never have been born, I understand that to mean not only that my life from this point forward is of negative value, but also that my life up to this point has been of negative value.

Comment author: [deleted] 02 January 2012 11:04:28PM 2 points [-]

Upvoted, and thank you for the explanation.

I'm never going to live that one down, am I?

If it helps, I didn't even remember that one of the times I've called someone out on "X is crap" was you. So consider it "lived down".

taken aback at your assertion that it doesn't contribute anything.

You're right. How about an assertion that it doesn't contribute anything that couldn't be easily rephrased in a much better way? Your example of "Don't use this thing for this purpose", especially if followed by a brief explanation, is an order of magnitude better than "X is crap", and I doubt it took you more than 5 seconds to write.

Comment author: [deleted] 04 January 2012 07:30:25PM *  0 points [-]

Are you more, less, or equally likely to say "<thing> is crap" in person as opposed to online?

Comment author: MixedNuts 05 January 2012 02:31:18PM 0 points [-]

Correcting for my differing speech patterns across languages and need to speak to stuck-up authorities... probably roughly as much.

Comment author: Prismattic 02 January 2012 08:32:54PM 4 points [-]

Also, many people on this site are just a-holes. Sorry.

I think it's more that there are a few a-holes, but they are very prolific (well, that and the same bias that causes us to notice how many red lights we get stopped at but not how many green lights we speed through also focuses our attention on the worst posting behavior).

Comment author: TheOtherDave 02 January 2012 09:22:00PM 3 points [-]

Interesting. Who are the prolific "a-holes"?

Comment author: Prismattic 02 January 2012 09:31:01PM 4 points [-]

Explicitly naming names accomplishes nothing except inducing hostility, as it will be taken as a status challenge. Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.

Comment author: wedrifid 02 January 2012 11:30:49PM 1 point [-]

Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.

It left me evaluating whether it was me personally that was being called an asshole or others in the community, and whether those others are people that deserve the insult or not. Basically I needed to determine whether it was a defection against me, an ally, or my tribe in general. Then I had to decide what, if any, response would be appropriate, desirable and socially acceptable as tit-for-tat. I decided to mostly ignore him because engaging didn't seem like it would do much more than give him a platform from which to gripe more.

Comment author: magfrump 02 January 2012 11:44:01PM 1 point [-]

If it makes you feel better, when I read his post I thought lovingly of you. (I also believe your response was appropriate.)

Comment author: dlthomas 02 January 2012 11:42:32PM 0 points [-]

Why do you feel it's correct to interpret it as defection in the first place?

Comment author: wedrifid 03 January 2012 12:43:26AM *  1 point [-]

Why do you feel it's correct to interpret it as defection in the first place?

In case you were wondering the translation of this from social-speak to Vulcan is:

Calling people assholes isn't a defection, therefore you saying - and in particular feeling - that labeling people as assholes is a defection says something personal about you. I am clever and smooth for communicating this rhetorically.

So this too is a defection. Not that I mind - because it is a rather mild defection that is well within the bounds of normal interaction. I mean... it's not like you called me an asshole or anything. ;)

Comment author: dlthomas 03 January 2012 06:35:47AM *  2 points [-]

That is not a correct translation. Calling someone an asshole may or may not be defection. In this case, I'm not sure whether it was. Examining why you feel that it was may be enlightening to me or to you or hopefully both. Defecting by accident is a common flaw, for sure, but interpreting a cooperation as a defection is no less damaging and no less common.

Comment author: TheOtherDave 02 January 2012 09:43:01PM 1 point [-]

I agree with you that naming names can be taken as a status challenge.
Of course, this whole topic positions you as an adjudicator of appropriate calibration, which can be taken as a status grab, for the excellent reason that it is one. Not that there's anything wrong with going for status.
All of that notwithstanding, if you prefer to diffuse your assertions of individual inappropriate behavior over an entire community, that's your privilege.

Comment author: Prismattic 02 January 2012 10:16:26PM 1 point [-]

I care about my status on this site only to the extent that it remains above some minimum required for people not to discount my posts simply because they were written by me.

My interest in this thread is that, like Daenerys, I think the current norm for discourse is suboptimal, but I give greater weight to the possibility that some of the suboptimal behavior is people defecting by accident; hence the subtle push for occasional recalibration of tone.

Comment author: wedrifid 02 January 2012 10:33:22PM *  4 points [-]

hence the subtle push for occasional recalibration of tone.

There was a subtle push? I must have missed that while I was distracted by the blatant one!

Comment author: Prismattic 02 January 2012 10:38:00PM 1 point [-]

See, it's working!

Comment author: TheOtherDave 02 January 2012 11:32:25PM 3 points [-]

Just to be clear: I'm fine with you pushing for a norm that's optimal for you. Blatantly, if you want to; subtly if you'd rather.

But I don't agree that the norm you're pushing is optimal for me, and I consider either of us pushing for the establishment of norms that we're most comfortable with to be a status-linked social maneuver.

Comment author: Prismattic 03 January 2012 12:02:19AM 0 points [-]

But I don't agree that the norm you're pushing is optimal for me,

Why? (A sincere question, not a rhetorical one)

and I consider either of us pushing for the establishment of norms that we're most comfortable with to be a status-linked social maneuver.

I'm not sure how every post doesn't do this; many posts push to maintain a status quo, but all posts implicitly favor some set of norms.

Comment author: MixedNuts 02 January 2012 09:58:56PM 0 points [-]

Am I an asshole?

I'm already working on not being an asshole in general, and on not being an asshole to specific people on LW. If someone answers "yes" to that I'll work harder at being a non-asshole on LW. Or post less. Or try to do one of those for two days then forget about the whole thing.

Comment author: wedrifid 02 January 2012 11:32:43PM 0 points [-]

Am I an asshole?

You haven't stood out as someone who has been an asshole to me or anyone I didn't think deserved it in the context, those being the only cases salient enough that I could expect myself to remember.

Comment author: Prismattic 02 January 2012 10:11:19PM 0 points [-]

If you're already working on it, you're probably in the clear. Not being an a-hole is a high-effort activity for many of us; in this case I will depart from primitive consequentialism and say that effort counts for something.

Comment author: wedrifid 02 January 2012 11:33:18PM 1 point [-]

effort counts for something.

And, equivalently, signalling effectively that you are expending effort counts for something.

Comment author: Solvent 02 January 2012 07:45:18AM 0 points [-]

Well, it sure looks like babies have a lot of things in common with people, and will become people one day, and lots of people care about them.

Comment author: Bakkot 02 January 2012 06:46:47PM 6 points [-]

babies have a lot of things in common with people

If your definition of "people" is going to include AIs but exclude pigs, then babies don't really have much in common with people at all.

and will become people one day

The "will become people" discussion is being had elsewhere in this thread, but recapping briefly: if the reason for not killing babies is that they're going to become people, then (it seems to me) one must conclude that the morally correct thing to do is to create as many people as possible, since the argument is (as far as I can tell) that increasing the number of people in the world is a net positive.

I don't agree with this conclusion, and I doubt you do either. For me, I reject the premise; this nicely explains my rejection of the conclusion. Do you reject the premise, or that the conclusion follows from the premise? Why?

and lots of people care about them

If this is all we're left with, it's a weak argument indeed. What if society started caring a lot about moths? Does this lend significant weight to the proposition that it should be illegal to kill moths?

Comment author: Solvent 03 January 2012 04:06:37AM *  0 points [-]

babies have a lot of things in common with people

I meant humans, not people. Sorry.

And I agree that we should treat animals better. I'm vegetarian.

and will become people one day

I agree that this discussion is slightly complex. Gwern's abortion dialogue contains a lot of relevant material.

However, I don't feel that saying that "we should protect babies because one day they will be human" requires aggregate utilitarianism as opposed to average utilitarianism, which I in general prefer. Babies are already alive, and already experience things.

and lots of people care about them

This argument has two functions. One is the literal meaning of "we should respect people's preferences". See discussion on the Everybody Draw Mohammed day. The other is that other people's strong moral preferences are some evidence towards the correct moral path.

Comment author: Bakkot 04 January 2012 07:27:02PM 0 points [-]

And I agree that we should treat animals better. I'm vegetarian. ...

However, I don't feel that saying that "we should protect babies because one day they will be human" requires aggregate utilitarianism as opposed to average utilitarianism, which I in general prefer. Babies are already alive, and already experience things.

Ah, the fact that you're vegetarian is somewhat illuminating. The next questions, then: Do you think pigs should be weighted as strongly as babies in the moral calculus? If not, is it because babies are going to become people? If it is because babies are going to become people, why does that matter at all?

This argument has two functions. One is the literal meaning of "we should respect people's preferences". See discussion on the Everybody Draw Mohammed day. The other is that other people's strong moral preferences are some evidence towards the correct moral path.

Agreed, but again, it's very weak evidence.

Comment author: Estarlio 01 January 2012 10:35:39PM 0 points [-]

I think you may have taken me to be talking about whether it was acceptable or moral in the sense that society will allow it; that was not my intent. Society allows many unwise, inefficient things, and no doubt will do so for some time.

My question was simply whether you thought it wise. If we do make an FAI, and encode it with some idealised version of our own morality, then do we want a rule that says 'Kill everything that looks unlike yourself'? If we end up on the downside of a vast power gradient with other humans, do we want them thinking that everything that has little or no value to them belongs on the chopping block?

In a somewhat more pithy form, I guess what I’m asking you is: Given that you cannot be sure you will always be strong enough to have things entirely your way, how sure are you this isn’t going to come back and bite you in the arse?

If it is unwise, then it would make sense to weaken that strand of thought in society - to destroy less out of hand, rather than more. That the strand is already quite strong in society would not alter that.

Comment author: Bakkot 01 January 2012 10:45:02PM 2 points [-]

If we do make an FAI, and encode it with some idealised version of our own morality, then do we want a rule that says 'Kill everything that looks unlike yourself'?

No. But we do want a rule that says something like "the closer things are to being people, the more importance should be given to them". As a consequence of this rule, I think it should be legal to kill your newborn children.

how sure are you this isn’t going to come back and bite you in the arse?

I'm observably a person. Any AI which concluded otherwise is probably already so dangerous that worrying about how my opinions stated here would affect it is completely pointless. So... pretty sure.

Oh, and I'm never encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).

Comment author: Estarlio 02 January 2012 12:42:58AM -1 points [-]

You did not answer me on the human question - how we’d like powerful humans to think.

No. But we do want a rule that says something like "the closer things are to being people, the more importance should be given to them". As a consequence of this rule, I think it should be legal to kill your newborn children.

This sounds fine as long as you and everything you care about are, and always will be, included in the group of ‘people.’ However, by your own admission (earlier in the discussion with wedrifid), you've defined people in terms of how closely they realise your ideology:

Extremely young children are lacking basically all of the traits I'd want a "person" to have.

You’ve made it something fluid: a matter of mood and convenience. If I make an AI and tell it to save only ‘people,’ it can go horribly wrong for you - maybe you’re not part of what I mean by ‘people.’ Maybe by people I mean those who believe in some religion or other. Maybe I mean those who are close to a certain processing capacity - and then what happens to those who exceed that capacity? And surely the AI itself would do so....

There are a lot of ways it can go wrong.

I'm observably a person.

You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.

Any AI which concluded otherwise is probably already so dangerous that worrying about how my opinions stated here would affect it is probably completely pointless. So... pretty sure.

The opinion you state may influence what sort of AI you end up with. And at the very least it seems liable to influence the sort of people you end up with.

Oh, and I'm never encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).

-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.

Comment author: Bakkot 02 January 2012 01:53:28AM 1 point [-]

You did not answer me on the human question - how we’d like powerful humans to think.

I want powerful humans to have a rule like "the closer things are to being people, the more importance should be given to them".

they realise your ideology

I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn't at all mean that, just because I would like people to be nice to each other and so on, I would consider people who aren't nice not to be people. I'd intended to convey this distinction by the quotation marks.

There are a lot of ways it can go wrong.

Obviously. There's a lot of ways any AI can go wrong. But you have to do something. Is your rule "don't kill humans"? For what definition of human, and isn't that going to be awfully unfair to aliens? I think "don't kill people" is probably about as good as you're going to do.

You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.

I don't want the rule to be "don't kill people" for whatever values of "kill" and "people" you have in your book. For all I know you're going to interpret this as something I'd understand more like "don't eat pineapples". I want the rule to be "don't kill people" with your definitions in accordance with mine.

-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.

If you don't understand the distinction between "legal" and "encouraged", we're going to have a very difficult time communicating.

Comment author: wedrifid 02 January 2012 01:56:55AM *  2 points [-]

I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies.

How did I misinterpret? I read that you don't include babies and I said that I do include babies. That's (preference) disagreement, not a problem with interpretation.

Comment author: Bakkot 02 January 2012 02:21:27AM *  1 point [-]

Most adults don't have traits I'd want a "person" to have. At least with babies there is a chance they'll turn out as worthwhile people.

This line gave me the impression that you thought I was saying I want my definition of "person", for the moral calculus, to include things like "worthwhile". Which was not what I was saying -

I wasn't saying anything about the desirability of traits for people in general. I was talking about the desirability of traits in the definition of the word "person", so that it would be an accurate and useful definition.

I'd want my definition of the word "person" to be such that it included virtually all adults (eta: but also thinking aliens, and certain strong AIs), but not, say, pigs. This makes it difficult to also include babies.

Comment author: wedrifid 02 January 2012 02:34:28AM 0 points [-]

This line gave me the impression that you thought I was saying I want my definition of "person", for the moral calculus, to include things like "worthwhile". Which was not what I was saying -

Intended as a tangential observation about my perceptions of people. (Some of them really are easier for me to model as objects running a machiavellian routine.)

Comment author: Bakkot 02 January 2012 02:46:57AM 0 points [-]

Ah, my mistake. (That's what I'd originally figured, but then Estarlio seemed to be saying the same thing, so I thought perhaps I'd been unclear.)

Comment author: Multiheaded 02 January 2012 08:58:50AM 1 point [-]

If you don't understand the distinction between "legal" and "encouraged", we're going to have a very difficult time communicating.

"Encouraged" is very clearly not absolute but relative here; "somewhat less discouraged than now" can just be written as "encouraged" for brevity's sake.

Comment author: Estarlio 02 January 2012 09:21:23PM -1 points [-]

I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn't at all mean that, just because I would like people to be nice to each other, and so on, I would consider people who aren't nice not to be people. I'd intended to convey this distinction by the quotation marks.

How are you deciding whether your definition is reasonable?

Obviously. There's a lot of ways any AI can go wrong. But you have to do something. Is your rule "don't kill humans"? For what definition of human, and isn't that going to be awfully unfair to aliens? I think "don't kill people" is probably about as good as you're going to do.

‘Don’t kill anything that can learn,’ springs to mind as a safer alternative - were I inclined to program this stuff in directly, which I'm not.

I don’t expect us to be explicitly declaring these rules, I expect the moral themes prevalent in our society - or at least an idealised model of part of it - will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.

In either case, I don’t expect us to be in charge. Which makes me kinda concerned when people talk about how we should be fine with going around offing the lesser life-forms.

I don't want the rule to be "don't kill people" for whatever values of "kill" and "people" you have in your book. For all I know you're going to interpet this as something I'd understand more like "don't eat pineapples". I want the rule to be "don't kill people" with your definitions in accordance with mine.

Yet my definitions are not in accordance with yours. And, if I apply the rule that I can kill everything that’s not a person, you’re not going to get the results you desire.

It’d be great if I could just say ‘I want you to do good - with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions - AIs may well grow up with different definitions - and if you've got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.

Comment author: Bakkot 04 January 2012 07:19:42PM *  3 points [-]

How are you deciding whether your definition is reasonable?

In the standard way. Or if you'd prefer it written out, there's a bunch of things in my mind for which the label "person" seems appropriate - including, say, humans, strong AIs, and thinking aliens. There's also a bunch of things for which said label seems inappropriate - say, pigs, chess-playing computer programs, and rocks. On consideration, babies seem not to share the important common characteristics of the first set nearly as much as they do the second; as such the label "person" seems inappropriate for babies.

‘Don’t kill anything that can learn,’ springs to mind as a safer alternative - were I inclined to program this stuff in directly, which I'm not.

Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?
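For concreteness: the "learning" a Bayesian spam filter does can be sketched as a toy naive Bayes classifier. This is a minimal illustrative sketch, not any real filter's implementation; the class name and training messages are invented for the example.

```python
from collections import Counter
import math


class NaiveBayesFilter:
    """A toy naive Bayes spam filter: its 'learning' is just counting
    word frequencies in labelled example messages."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, message, label):
        # Learning step: update the counts for this label.
        self.msg_counts[label] += 1
        self.word_counts[label].update(message.lower().split())

    def classify(self, message):
        # Score each label in log-space (avoids underflow), with
        # Laplace smoothing so unseen words don't zero out a label.
        total_msgs = sum(self.msg_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            vocab = len(self.word_counts[label]) + 1
            total_words = sum(self.word_counts[label].values())
            score = math.log(self.msg_counts[label] / total_msgs)
            for word in message.lower().split():
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

After training on a handful of labelled messages, `classify` picks whichever label makes the message's words most probable; that conditioning of behaviour on past examples is the sense in which the filter "learns".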

I don’t expect us to be explicitly declaring these rules, I expect the moral themes prevalent in our society - or at least an idealised model of part of it - will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.

Agreed, but I also think said idealized themes should be kept as simple as practical, so we're not constantly inserting odd corner-cases. This is partially because simple things are easier to understand and partially because odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren't conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.

It’d be great if I could just say ‘I want you to do good - with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions - AIs may well grow up with different definitions - and if you've got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.

You think I'm going to try to program an AI in English?

Happily this isn't really a problem for the current debate, because I'm communicating exclusively with people who seem to share nearly all of my priors, definitions, and moral axioms already.

So let me break this down a bit.

If you don't think "don't kill people" is a good broad moral rule (setting aside small distinctions in our definitions, because we do probably agree on almost all counts), my task is to try to understand how you arrived at a different conclusion than I did.

If you do think babies are people, my task is to try to understand whether you've organized your space of all describable things in some different way than I have or if, as I suspect is the common case (this is not to be taken to apply to LW readers), you've just drawn your boundaries wrong.

If you do think the rule should be "don't kill people" and that babies aren't people, then my task is either to understand why you don't feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.

(I think almost everyone falls into either the "wrong boundaries" case or the "would agree that infanticide should be legal if they thought about it" case.)

edit: clarity

Comment author: dlthomas 04 January 2012 07:37:20PM 4 points [-]

If you do think both of the above things, then my task is either to understand why you don't feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.

I'm not certain whether or not it's germane to the broader discussion, but "think X is immoral" and "think X should be illegal" are not identical beliefs.

Comment author: Bakkot 04 January 2012 07:41:09PM *  1 point [-]

Oh, agreed, and I didn't mean to imply otherwise. A lot of people in this thread seem to have concluded that while infanticide might not be immoral in itself it should be illegal for reasons to do with Schelling points or increased risk of developing sadistic tendencies. These are perfectly good reasons to not feel that infanticide should be legal despite agreeing with both propositions I listed.

Comment author: TheOtherDave 04 January 2012 08:19:33PM 2 points [-]

I was with you, until your summary.

Suppose hypothetically that I think "don't kill people" is a good broad moral rule, and I think babies are people.
It seems to follow from what you said that I therefore ought to agree that infanticide should be legal.

If that is what you meant to say, then I am deeply confused. If (hypothetically) I think babies are people, and if (hypothetically) I think "don't kill people" is a good law, then all else being equal I should think "don't kill babies" is a good law. That is, I should believe that infanticide ought not be any more legal than murder in general.

It seems like one of us dropped a negative sign somewhere along the line. Perhaps it was me, but if so, I seem incapable of finding it again.

Comment author: Bakkot 04 January 2012 08:34:17PM 1 point [-]

Quite right, I had an extra negative in there - the people I need to talk to aren't those who think babies are not people, because we already agree. Fixed, thanks.

Comment author: Strange7 05 June 2012 04:52:40AM 0 points [-]

Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?

If I were programming an AI to be a perfect world-guiding moral paragon, I'd rather have it keep the spam filter in storage (the equivalent of a retirement home, or cryostasis) than delete it for the crime of obsolescence. Digital storage space is cheap, and getting cheaper all the time.

Comment author: Estarlio 03 June 2012 05:52:38PM *  -1 points [-]

Somewhat late, I must have missed this reply agessss ago when it went up.

there's a bunch of things in my mind for which the label "person" seems appropriate [...] There's also a bunch of things for which said label seems inappropriate

That's not a reasoned way to form definitions that have any more validity as referents than lists of what you approve of. What you're doing is referencing your feelings and seeing what the objects of those feelings have in common. It so happens that I feel that infants are people. But we're not doing anything particularly logical or reasonable here - we're not drawing our boundaries using different tools. One of us just thinks they belong on the list and the other thinks they don't.

If we try to agree on a common list, well, you're agreeing that aliens and powerful AIs go on the list - so biology isn't the primary concern. If we try to draw a line through the commonalities what are we going to get? All of them seem able to gather, store, process and apply information to some ends. Even infants can - they're just not particularly good at it yet.

Conversely, what do all your other examples have in common that infants don't?

Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?

Arguably that would be a good heuristic to keep around. I don't know that I'd call it a moral wrong – there's not much reason to talk about morals when we can just say "discouraged in society" and have everyone on the same page. But you would probably do well to have a reluctance to destroy it. One day someone vastly more complex than you may well look on you in the same light you look on your spam filter.

[...] odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren't conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.

I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn't have lasted very long. Infants are significant investments of time and resources. Offing your infants is a sign that there's something emotionally maladjusted in you – by the standards of the needs of society. If we'd not had the precept, and magically appeared out of nowhere, I think we'd have invented it pretty quick.

You think I'm going to try to program an AI in English?

Not really about you specifically. But, in general – yeah, more or less. Maybe not write the source code, but instruct it. English, or uploads or some other incredibly high-level language with a lot of horrible dependencies built into its libraries (or concepts or what have you) that the person using it barely understands themselves. Why? Because it will be quicker. The guy who just tells the AI to guess what he means by good skips the step of having to calculate it himself.

Comment author: Bakkot 04 June 2012 10:47:58PM *  1 point [-]

Five months later...

What you're doing is referencing your feelings and seeing what the objects of those feelings have in common. [...] One of us just thinks they belong on the list and the other thinks they don't. [...] If we try to draw a line through the commonalities what are we going to get? [...] Conversely, what do all your other examples have in common that infants don't?

These all seem to indicate a bit of confusion about what I'm trying to do.

It seems to me that this thread of the debate has come down to "Should we consider babies to be people?" There are, broadly, two ways of settling this question: moving up the ladder of abstraction, or moving down. That is, we can answer this by attempting to define 'people' in terms of other, broader terms (this being the former case) or by defining 'people' via the listing of examples of things which we all agree are or are not people and then trying to decide by inspection in which category 'babies' belong.

Especially in light of the last line I've quoted above, you appear to be attempting the first method (and to be assuming I'm doing the same). In my experience, this method almost always ensures more confusion, not less, so I'm staying as far away from it as possible.

Instead I am attempting the second method. Here are several things I think are people: [adult] humans, strong AIs, thinking aliens. Here are several things I think are not people: pigs, chess-playing computer programs, rocks, dead humans. I am not going to attempt to suss the defining characteristics and commonalities of either category, because that would be an example of applying the first method and tends, as I say, to result only in more confusion.

Now, using only these categories (and in particular not using our feelings about whether or not babies are people), it seems to me that babies are less similar to the members of the first set than they are to the second. As such it seems that we ought to conclude babies are not people.


Worth repeating, I think: I am not going to attempt to list defining characteristics of the "people" and "not people" categories. Unless you can come up with what you think is a complete definition, that's not going to get us anywhere.


There are, at this point, several particulars on which you might disagree.

  • You might think that this method of defining words is in some way invalid, especially in light of how difficult it would be to use this method with an AI's programming. [If this is the case, I challenge you to write a piece of software that, without appealing to you, identifies objects as people or non-people the same way you do it. Can't do this? Then we'd best stick with what tools we have available to us.]
  • You might think that some of {humans, strong AIs, thinking aliens} are not people. [If this is the case, let's keep looking for a set we can agree on.]
  • You might think that some of {pigs, chess-playing computer programs, rocks, dead humans} are people. [If this is the case, let's keep looking for a set we can agree on. Or you could announce that you're now, for moral reasons, vegetarian, in which case our moralities are likely irreconcilable without vastly more work.]
  • You might think that babies are more similar to members of the first category than members of the second. [If this is the case, we're probably at an impasse. You or I might attempt to convince the other by expanding the sets I've outlined above with members we both agree belong until the similarities and differences are sufficiently stark - but frankly, at this point, it seems quite obvious that babies belong in the second category.]
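The example-set method described above is, in effect, nearest-neighbour classification: label a new case by which agreed-upon exemplar it most resembles. A toy sketch follows; the feature dimensions and their values are invented purely for illustration, not a claim about what the right features are.

```python
# Exemplars both parties agree on, encoded as crude binary features.
# Feature order: (uses_language, plans_ahead, self_aware, can_learn)
EXEMPLARS = {
    ("adult human",   (1, 1, 1, 1)): "person",
    ("strong AI",     (1, 1, 1, 1)): "person",
    ("thinking alien",(1, 1, 1, 1)): "person",
    ("pig",           (0, 0, 0, 1)): "not a person",
    ("chess program", (0, 1, 0, 0)): "not a person",
    ("rock",          (0, 0, 0, 0)): "not a person",
}


def classify(features):
    """Return the label of the nearest exemplar (Hamming distance)."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))

    # min over (distance, label) pairs picks the closest exemplar.
    _, label = min(
        (distance(features, feats), label)
        for (_, feats), label in EXEMPLARS.items()
    )
    return label
```

Under this (deliberately crude) encoding, something with baby-like features such as `(0, 0, 0, 1)` lands nearest the pig exemplar, which is exactly the "inspection of similarity" move the comment describes; the whole dispute is then about whether the feature encoding and exemplar sets are the right ones.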

A misc. point (the important part is above):

I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn't have lasted very long.

This is entirely orthogonal to the point I was trying to make. Keep in mind, most societies invented misogyny pretty quick too. Rather, I doubt that you personally, raised in a society much like this one except without the taboo on killing infants, would have come to the conclusion that killing infants is a moral wrong.

There's a specific implication in your response about "the needs of society" to which I could respond but which I'm not going to unless prompted; I hope the above has dealt with that.

Comment author: NancyLebovitz 13 June 2012 02:59:49AM 0 points [-]

Figuring out how to define human (as in "don't kill humans") so as to include babies is relatively easy, since babies are extremely likely to grow up into humans.

The hard question is deciding which transhumans-- including types not yet invented, possibly types not yet thought of, and certainly types which are only imagined in a sketchy abstract way-- can reasonably be considered as entities which shouldn't be killed.