wedrifid comments on Welcome to Less Wrong! (2012) - Less Wrong
Yes. The explanation given was significant.
It takes 110 years to make a 110-year-old. In most cases I'd prefer to keep a 30-year-old over either of them. More to the point, I don't intrinsically value creating more humans. The replacement cost of a dead human has nothing to do with the moral aversion I have to murder.
I read your explanation. I'm just somewhat incredulous that this could be your actual belief. Roosters are a lot better prepared to defend themselves than, say, pigs. Is this a good reason to prefer it to be legal to kill roosters rather than pigs? Not in light of the fact that pigs are vastly more intelligent, capable of abstract reasoning, possessed of personality, etc.
The moral aversion I have to murder is twofold, roughly: harm to a person, and harm to society. Babies aren't people by any measure I can see, so the first doesn't apply. The second is where replacement cost comes in.
Do you really think it's wise to have a precedent that allows agents of Type X to go around killing off all of the !X group? Doesn't bode well if people end up with a really sharp intelligence gradient.
We already have a bunch of those precedents, depending on how you look at it. You're more than free to go around killing ants. No one is going to care. You can even, depending on zoning laws, raise pigs and then slaughter them for their meat. The reason that this is just not a problem in the eyes of the law is that pigs aren't people.
If you look at it another way, we have exactly one precedent: It's generally morally OK to kill members of the !X group if and only if that group consists of agents which are not people.
ETA: I hate that I have to say this, but can people respond instead of just downvoting? I'm honestly curious as to why this particular post is controversial - or have I missed something?
I haven't seen anyone respond to your request for feedback about votes, so let me do so, despite not being one of the downvoters.
By my lights, at least, your posts have been fine. Obviously, I can't speak for the site as a whole... then again, neither can anyone else.
Basically, it's complicated, because the site isn't homogenous. Expressing conventionally "bad" moral views will usually earn some downvotes from people who don't want such views expressed; expressing them clearly and coherently and engaging thoughtfully with the responses will usually net you upvotes.
I haven't downvoted, for what it is worth. Sure, you may be an evil baby killing advocate but it's not like I care!
I think you accidentally a word.
I often "claim" my downvotes (aka I will post "downvoted" and then give reason.) However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.
For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant -1 on downvoting comments), and people who disagree with your reason for downvoting will also downvote you.
Also, many people on this site are just a-holes. Sorry.
If I downvote with comment, it's usually for a fairly specific problem, and usually one that I expect can be addressed if it's pointed out; some very clear logical problem that I can throw a link at, for example, or an isolated offensive statement. I may also comment if the post is problematic for a complicated reason that the poster can't reasonably be expected to figure out, or if its problems are clearly due to ignorance.
Otherwise it's fairly rare for me to do so; I see downvotes as signaling that I don't want to read similar posts, and replying to such a post is likely to generate more posts I don't want to read. This goes double if I think the poster is actually trolling rather than just exhibiting some bias or patch of ignorance. Basically it's a cost-benefit analysis regarding further conversation; if continuing to reply would generate more heat than light, better to just downvote silently and drive on.
It's uncommon for me to receive retaliatory downvotes when I do comment, though.
On the other hand if people agree with your reasons they often upvote it (especially back up towards zero if it dropped negative).
I certainly hope so. I would expect them to disagree with your reasons for downvoting; otherwise they would not have made their comment. It would take a particularly insightful explanation of your vote for them to believe that your influencing others toward thinking their contribution is negative is itself a valuable contribution.
*arch*
Do you think that's a good thing, or just a likely outcome?
Downvoting explanations of downvotes seems like a really bad idea, regardless of how you feel about the downvote. It strongly incentivizes people not to explain themselves or open themselves up to debate, but just to vote and then remove themselves from the discussion.
I don't see how downvoting explanations and other explicit engagement is helpful for rational discourse in any way.
This is exactly the reaction I want to trolls, basic questions outside of dedicated posts, and stupid mistakes. Are downvotes of explanations in those cases also read as an incentive not to post explanations in general?
Speaking for myself, yes. I read it as "don't engage this topic on this site, period".
I agree with downvoting (and ignoring) the types of comments you mentioned, but not explanations of such downvotes. The explanations don't add any noise, so they shouldn't be punished. (Maybe if they got really excessive, but currently I have the impression that too few downvotes are explained, rather than too many.)
Comments can serve as calls to action, encouraging others to downvote or priming people with a negative or unintended interpretation of a comment, be it yours or someone else's; that influence is something to be discouraged. This is not the case with all explanations of downvotes, but it certainly describes the effect, and often the intent, of the vast majority of "Downvoted because" declarations. Exceptions include explanations that are requested, and occasionally reasons that are legitimately surprising or useful. Another obvious exception is any time you actually agree they have a point.
I might well consider an explanation of a downvote on a comment of mine to be a valuable contribution, even if I continue to disagree with the thinking behind it. Actually, that's not uncommon.
Common reasons I downvote with no comment: I think the mistake is obvious to most readers (or already mentioned) and there's little to be gained from teaching the author. I think there's little insight and much noise - length, unpleasant style, politically disagreeable implications that would be tedious to pick apart (especially in tone rather than content). I judge that jerkishness is impairing comprehension; cutting out the courtesies and using strong words may be defensible, but using insults where explanations would do isn't.
On the "just a-holes" note (yes, I thought "Is this about me?"): It might be that your threshold for acceptable niceness is unusually high. We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness. People who want LW to be nicer usually do it by being especially nice, not by especially punishing meanness. I notice you're on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people, which is a bad thing if you love Postel's law. (Which, by Postel's law, nobody but me has to.) The only LessWronger I think is an asshole is wedrifid, and I think this is one of his good traits.
I think there is a difference between choosing bluntness where niceness would tend to obscure the truth, and choosing between two forms of expression which are equally illuminating but not equally nice. I don't know about anyone else, but I'm using "a-hole" here to mean "One who routinely chooses the less nice variant in the latter situation."
(This is not a specific reference to you; your comment just happened to provide a good anchor for it.)
Of course, if that's the meaning, then before I judge someone to be an "a-hole" I need to know what they intended to illumine.
If he's an asshole, then "asshole" needs a new subdefinition. I love that guy.
Would you mind discussing this with me? I find it disturbing that I come off as having double standards, and I am interested to know more about where that impression comes from. I personally feel that I do not expect better behaviour from others than I practice, but I would like to know (and update my behaviour) if I am wrong about this.
I admit to lowering my level of "niceness" on LW, because I can't seem to function when I am nice and no one else is. However MY level of being "not nice" means that I don't spend a lot of time finding ways to word things in the most inoffensive manner. I don't feel like I am exceptionally rude, and am concerned if I give off that impression.
I also feel like I keep my "punishing meanness" levels to a pretty high standard too: I only "punish" (by downvoting or calling out) what I consider to be extremely rude behavior (e.g. "I wish you were dead" or "X is crap."), which is nowhere near the level of "meanness" that I feel my posts ever approach.
You come off as having single-standards. That is, I think the minimal level of niceness you accept from others is also the minimal level of niceness you practice - you don't allow wiggle room for others having different standards. I sincerely don't resent that! My model of nice people in general suggests y'all practice Postel's law ("Be liberal in what you accept and conservative in what you send"), but I don't think it's even consistent to demand that someone follow it.
...I'm never going to live that one down, am I? Let's just say that there are an enormous number of behaviours that I'd describe as "slightly blunter than politeness would allow, for the sake of clarity" and you'd describe as "extremely rude".
Also, while I've accepted the verdict that "<thing> is crap" is extremely rude and I shouldn't ever say it, I was taken aback at your assertion that it doesn't contribute anything. Surely "Don't use this thing for this purpose" is non-empty. By the same token, I'd actually be pretty okay with being told "I wish you were dead" in many contexts. For example, in a discussion of eugenics, I'd be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.
Maybe the lesson for you is that many people suck really bad at phrasing things, so you should apply the principle of charity harder and be tolerant if they can't be both as nice and as clear as you'd have been and choose to sacrifice niceness? The lesson I've learned is that I should be more polite in general, more polite to you in particular, look harder for nice phrasings, and spell out implications rather than try to bake them in connotations.
I'm fine with positions that imply I should never have been born (although I have yet to hear one that includes me), but I'd feel very differently about one implying that I should be dead!
Many people don't endorse anything similar to the principle that "any argument for no more of something should explain why there is a perfect amount of that thing, or be counted as an argument for less of that thing."
E.g., they think arguments that "life extension is bad" generally have no implications regarding killing people were it to become available. So those who say I shouldn't live to be 200 are not only basically arguing that I should (eventually, sooner than I want) be dead; the implication I take is often that I should be killed (in the future).
Personally, I'd be far more insulted by the suggestion that I should never have been born, than by the suggestion that I should die now.
Upvoted, and thank you for the explanation.
If it helps, I didn't even remember that one of the times I've called someone out on "X is crap" was you. So consider it "lived down".
You're right. How about an assertion that it doesn't contribute anything that couldn't be easily rephrased in a much better way? Your example of "Don't use this thing for this purpose", especially if followed by a brief explanation, is an order of magnitude better than "X is crap", and I doubt it took you more than 5 seconds to write.
Are you more, less, or equally likely to say "<thing> is crap" in person as opposed to online?
Correcting for my differing speech patterns across languages and need to speak to stuck-up authorities... probably roughly as much.
I think it's more that there are a few a-holes, but they are very prolific (well, that, and the same bias that causes us to notice how many red lights we get stopped at, but not how many green lights we speed through, also focuses our attention on the worst posting behavior).
Interesting. Who are the prolific "a-holes"?
Explicitly naming names accomplishes nothing except inducing hostility, as it will be taken as a status challenge. Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.
It left me evaluating whether it was me personally that was being called an asshole or others in the community and whether those others are people that deserve the insult or not. Basically I needed to determine whether it was a defection against me, an ally or my tribe in general. Then I had to decide what, if any, was an appropriate, desirable and socially acceptable tit-for-tat response. I decided to mostly ignore him because engaging didn't seem like it would do much more than giving him a platform from which to gripe more.
If it makes you feel better, when I read his post I thought lovingly of you. (I also believe your response was appropriate.)
Why do you feel it's correct to interpret it as defection in the first place?
I agree with you that naming names can be taken as a status challenge.
Of course, this whole topic positions you as an adjudicator of appropriate calibration, which can be taken as a status grab, for the excellent reason that it is one. Not that there's anything wrong with going for status.
All of that notwithstanding, if you prefer to diffuse your assertions of individual inappropriate behavior over an entire community, that's your privilege.
I care about my status on this site only to the extent that it remains above some minimum required for people not to discount my posts simply because they were written by me.
My interest in this thread is that, like Daenerys, I think the current norm for discourse is suboptimal, but I think I give greater weight to the possibility that some of the suboptimal behavior is people defecting by accident; hence the subtle push for occasional recalibration of tone.
Am I an asshole?
I'm already working on not being an asshole in general, and on not being an asshole to specific people on LW. If someone answers "yes" to that I'll work harder at being a non-asshole on LW. Or post less. Or try to do one of those for two days then forget about the whole thing.
You haven't stood out as someone who has been an asshole to me or anyone I didn't think deserved it in the context, those being the only cases salient enough that I could expect myself to remember.
If you're already working on it, you're probably in the clear. Not being an a-hole is a high-effort activity for many of us; in this case I will depart from primitive consequentialism and say that effort counts for something.
Well, it sure looks like babies have a lot of things in common with people, and will become people one day, and lots of people care about them.
If your definition of "people" is going to include AIs but exclude pigs, then babies don't really have much in common with people at all.
The "will become people" discussion is being had elsewhere in this thread, but recapping briefly: if the reason for not killing babies is that they're going to become people, then (it seems to me) one must conclude that the morally correct thing to do is to create as many people as possible, since the argument is (as far as I can tell) that increasing the number of people in the world is a net positive.
I don't agree with this conclusion, and I doubt you do either. For me, I reject the premise; this nicely explains my rejection of the conclusion. Do you reject the premise, or that the conclusion follows from the premise? Why?
If this is all we're left with, it's a weak argument indeed. What if society started caring a lot about moths? Does this lend significant weight to the proposition that it should be illegal to kill moths?
I meant humans, not people. Sorry.
And I agree that we should treat animals better. I'm vegetarian.
I agree that this discussion is slightly complex. Gwern's abortion dialogue contains a lot of relevant material.
However, I don't feel that saying that "we should protect babies because one day they will be human" requires aggregate utilitarianism as opposed to average utilitarianism, which I in general prefer. Babies are already alive, and already experience things.
This argument has two functions. One is the literal meaning of "we should respect people's preferences". See discussion on the Everybody Draw Mohammed day. The other is that other people's strong moral preferences are some evidence towards the correct moral path.
Ah, the fact that you're vegetarian is somewhat illuminating. The next questions, then: Do you think pigs should be weighted more strongly than babies in the moral calculus? If not, is it because babies are going to become people? If it is because babies are going to become people, why does that matter at all?
Agreed, but again, it's very weak evidence.
I think you may have taken me to be talking about whether it was acceptable or moral in the sense that society will allow it, that was not my intent. Society allows many unwise, inefficient things and no doubt will do so for some time.
My question was simply whether you thought it wise. If we do make an FAI, and encoded it with some idealised version of our own morality then do we want a rule that says 'Kill everything that looks unlike yourself'? If we end up on the downside of a vast power gradient with other humans do we want them thinking that everything that has little or no value to them should be for the chopping block?
In a somewhat more pithy form, I guess what I’m asking you is: Given that you cannot be sure you will always be strong enough to have things entirely your way, how sure are you this isn’t going to come back and bite you in the arse?
If it is unwise, then it would make sense to weaken that strand of thought in society - to destroy less out of hand, rather than more. That the strand is already quite strong in society would not alter that.
No. But we do want a rule that says something like "the closer things are to being people, the more importance should be given to them". As a consequence of this rule, I think it should be legal to kill your newborn children.
Oh, and I'm never encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).
You did not answer me on the human question - how we'd like powerful humans to think.
This sounds fine as long as you and everything you care about are and always will be included in the group of, ‘people.’ However, by your own admission, (earlier in the discussion to wedrifid,) you've defined people in terms of how closely they realise your ideology:
You’ve made it something fluid; a matter of mood and convenience. If I make an AI and tell it to save only ‘people,’ it can go horribly wrong for you - maybe you’re not part of what I mean by ‘people.’ Maybe by people I mean those who believe in some religion or other. Maybe I mean those who are close to a certain processing capacity - and then what happens to those who exceed that capacity? And surely the AI itself would do so....
There are a lot of ways it can go wrong.
You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.
The opinion you state may influence what sort of AI you end up with. And at the very least it seems liable to influence the sort of people you end up with.
-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.
I want powerful humans to have a rule like "the closer things are to being people, the more importance should be given to them".
I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn't at all mean that just because I would like people to be nice to each other, and so on, I wouldn't consider people who aren't nice not to be people. I'd intended to convey this distinction by the quotation marks.
Obviously. There are a lot of ways any AI can go wrong. But you have to do something. Is your rule "don't kill humans"? For what definition of human, and isn't that going to be awfully unfair to aliens? I think "don't kill people" is probably about as good as you're going to do.
I don't want the rule to be "don't kill people" for whatever values of "kill" and "people" you have in your book. For all I know you're going to interpret this as something I'd understand more like "don't eat pineapples". I want the rule to be "don't kill people" with your definitions in accordance with mine.
If you don't understand the distinction between "legal" and "encouraged", we're going to have a very difficult time communicating.
How did I misinterpret? I read that you don't include babies and I said that I do include babies. That's (preference) disagreement, not a problem with interpretation.
This line gave me the impression that you thought I was saying I want my definition of "person", for the moral calculus, to include things like "worthwhile". Which was not what I was saying:
I wasn't saying anything about the desirability of traits for people in general. I was talking about the desirability of traits in the definition of the word "person", so that it would be an accurate and useful definition.
I'd want my definition of the word "person" to be such that it included virtually all adults (ETA: but also thinking aliens and certain strong AIs), but not, say, pigs. This makes it difficult to also include babies.
"Encouraged" is very clearly not absolute but relative here, "somewhat less discouraged than now" can just be written as "encouraged" for brevity's sake.
How are you deciding whether your definition is reasonable?
‘Don’t kill anything that can learn,’ springs to mind as a safer alternative - were I inclined to program this stuff in directly, which I'm not.
I don’t expect us to be explicitly declaring these rules, I expect the moral themes prevalent in our society - or at least an idealised model of part of it - will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.
In either case, I don’t expect us to be in charge. Which makes me kinda concerned when people talk about how we should be fine with going around offing the lesser life-forms.
Yet my definitions are not in accordance with yours. And, if I apply the rule that I can kill everything that’s not a person, you’re not going to get the results you desire.
It’d be great if I could just say ‘I want you to do good - with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions - AIs may well grow up with different definitions - and if you've got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.
In the standard way. Or if you'd prefer it written out, there's a bunch of things in my mind for which the label "person" seems appropriate - including, say, humans, strong AIs, and thinking aliens. There's also a bunch of things for which said label seems inappropriate - say, pigs, chess-playing computer programs, and rocks. On consideration, babies seem not to share the important common characteristics of the first set nearly as much as they do the second; as such the label "person" seems inappropriate for babies.
Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?
Agreed, but I also think said idealized themes should be kept as simple as practical, so we're not constantly inserting odd corner-cases. This is partially because simple things are easier to understand and partially because odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren't conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.
You think I'm going to try to program an AI in English?
Happily this isn't really a problem for the current debate, because I'm communicating exclusively with people who seem to share nearly all of my priors, definitions, and moral axioms already.
So let me break this down a bit.
If you don't think "don't kill people" is a good broad moral rule (setting aside small distinctions in our definitions, because we do probably agree on almost all counts), my task is to try to understand how you arrived at a different conclusion than I did.
If you do think babies are people, my task is to try to understand whether you've organized your space of all possible things which could be described in some different way than I have or if, as I suspect is the common case (this is not to be taken to apply to LW readers), you've just drawn your boundaries wrong.
If you do think the rule should be "don't kill people" and that babies aren't people, then my task is either to understand why you don't feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.
(I think almost everyone falls into either the "wrong boundaries" case or the "would agree that infanticide should be legal if they thought about it" case.)
edit: clarity
Figuring out how to define human (as in "don't kill humans") so as to include babies is relatively easy, since babies are extremely likely to grow up into humans.
The hard question is deciding which transhumans-- including types not yet invented, possibly types not yet thought of, and certainly types which are only imagined in a sketchy abstract way-- can reasonably be considered as entities which shouldn't be killed.