Vladimir_Nesov comments on Is every life really worth preserving? - Less Wrong

Post author: RationallyOptimistic 23 December 2011 05:04PM


Comment author: Vladimir_Nesov 23 December 2011 06:08:20PM *  18 points [-]

From a consequentialist perspective, the value of not saving a life is the same as the value of killing someone. In that light, the title of your post becomes, "Is every person really worth not killing?" Try re-reading the argument with this framing in mind.

(Avoiding measures that save lives with certain probability is then equivalent to introducing the corresponding risk of death.)

Comment author: roystgnr 24 December 2011 01:58:18AM 6 points [-]

the value of not saving a life is the same as the value of killing someone

If you found someone in the process of killing another, what actions would you be willing to undertake to stop them? Would you be willing to undertake those same actions every time you found someone whose non-subsistence expenditures exceeded $X, the minimum expenditure necessary to [buy enough malaria nets, etc... to] have an expected outcome of one life saved?

Even consequentialism is supposed to acknowledge that ethical rules need to be evaluated in terms of their long-term consequences rather than just their immediate outcomes.

Comment author: smijer 24 December 2011 02:13:00PM 14 points [-]

If the value of not saving a life is the same as the value of killing someone, that's fine. We can do that exercise and re-frame in terms of killing, and do the consequentialist calculation from there. The math is the same. If the goal is to bring ourselves to calculate from the heightened *emotional* perspective associated with killing, though, it is time to drop that frame and just get back to the math.

In terms of the opening post, the math is going to be similar even for the creation of all possible minds. If we have a good reason to restore every mind that has lived, it seems very probable that we have the exact same reason to create every mind that has not lived.

I'm not sure I see what that value is, though. Even if I want to live forever - and continue to want to live forever right up to the point that I am dead - one second after that point, I no longer care. At that point, only other living minds can find value in having me alive. It's up to them whether they want to invest their resources in preserving and reanimating me, or would rather invest those resources in keeping themselves alive and creating novel new minds through reproduction.

Comment author: wedrifid 24 December 2011 02:22:19PM 8 points [-]

If the goal is to bring ourselves to calculate from the heightened *emotional* perspective associated with killing, though, it is time to drop that frame and just get back to the math.

Well spotted. I was wondering if anyone was going to notice that Vladimir's (absurdly highly upvoted) comment was basically just a dark arts exploit trying to harness (largely deontological) moral judgements outside their intended context.

Comment author: ArisKatsaris 24 December 2011 04:10:00PM *  4 points [-]

I was wondering if anyone was going to notice that Vladimir's (absurdly highly upvoted) comment was basically just a dark arts exploit trying to harness (largely deontological) moral judgements outside their intended context.

If that was an observation you had already thought of, and you believed it worth mentioning, why didn't you mention it yourself -- instead of waiting to see if anyone else said it? I can conceive of some comments that are best made only by specific individuals, in specific contexts -- but I don't see this being one of them.

I find the attitude of "waiting to see if anyone else does this" and afterwards condemning/praising people collectively for failing/succeeding to do what one didn't do oneself extremely distasteful.

Comment author: wedrifid 24 December 2011 04:30:55PM 7 points [-]

If that was an observation you had already thought of, and you believed it worth mentioning, why didn't you mention it yourself -- instead of waiting to see if anyone else said it?

I did write a reply when Vladimir first wrote the comment, but I deleted it: I decided I couldn't be bothered getting into a potential flamewar about a subject that I know from experience is easy to spin for cheap moral-high-ground points ("you're a murderer!", etc.). I long ago realized that it is not (always) my responsibility to fix people who are wrong on the internet.

Since smijer is (as of the time of this comment) a user with 9 votes, while Vladimir is in the top 20 contributors and the specific comment being corrected is at +19, it does not seem at all inappropriate to lend support to his observation.

Comment author: fortyeridania 25 December 2011 12:32:21PM 3 points [-]

Not wanting to get into a flamewar is, of course, reasonable. But daring to be the first to dissent is a valuable service, too.

Comment author: smijer 24 December 2011 04:48:41PM 2 points [-]

I appreciate the support.

Comment author: ArisKatsaris 24 December 2011 04:44:07PM 3 points [-]

Since smijer is (as of the time of this comment) a user with 9 votes, while Vladimir is in the top 20 contributors and the specific comment being corrected is at +19, it does not seem at all inappropriate to lend support to his observation.

Okay, I think I find this a good reason. Thank you for explaining.

Comment author: fortyeridania 25 December 2011 12:29:01PM *  1 point [-]

You find this a good reason for what?

(1) For supporting smijer's comment

(2) For not chiming in when he first had the idea

If you mean the first...why? That wasn't the issue. The issue was why wedrifid hadn't chimed in. As for the second, wouldn't this imply that wedrifid was holding out because he expected someone with low karma to speak up first?

Comment author: ArisKatsaris 25 December 2011 03:39:02PM 0 points [-]

You find this a good reason for what?

For the seeming inconsistency I had noticed between (1) and (2).

Comment author: XiXiDu 24 December 2011 03:13:25PM *  0 points [-]

Off topic:

If I remember correctly, you have taken a quite derogatory stance toward people who complained about the voting behavior on this site. In any case, here are some snippets from comments you made in the past 30 days:

Note: I am at least as shocked by the current downvote of this comment...

I express disgust with specific instances of voting.

Ok, me getting downvoted I can understand - someone has been mass downvoting me across the board.

I'm actually getting concerned here. [...] he has not only been taken seriously but received upvotes while ridicule of the assumptions gets downvotes.

I was wondering if anyone was going to notice that Vladimir's (absurdly highly upvoted) comment was basically just a dark arts exploit...

I predict that within 5 years you will become frequently appalled by the voting behavior on this site and in another 10 years you'll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong, because it neither refines what you deem rational nor provides valuable feedback, but instead lends credence to the arguments of trolls (as you would call them).

Comment author: wedrifid 24 December 2011 04:14:37PM 3 points [-]

If I remember correctly, you have taken a quite derogatory stance toward people who complained about the voting behavior on this site.

I doubt I ever took such a broad stance. You seem to have generalized to a large category so that you can fit me into it. In fact, one of those artfully trimmed quotes, if parsed for meaning rather than scanned for quotable keywords, should have given a far more reasonable impression of where my preferences lie on that subject.

I predict that within 5 years you will become frequently appalled by the voting behavior on this site

Quite possible. A few years after that I may well start telling kids to get off my lawn and telling stories that begin "When I was your age".

and in another 10 years you'll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong

Money. Make the prediction with money. Because I want to take it.

Counter-prediction: In ten years' time you will not have changed your mind (on this subject) at all.

Comment author: gwern 24 December 2011 10:49:28PM 4 points [-]

At least for myself, I'm happy to give that a low probability. Even with the lowered quality since Eliezer stopped writing, LW is still much better - thanks to karma - than OB or SL4 were.

Comment author: XiXiDu 27 December 2011 06:32:14PM *  1 point [-]

LW is still much better - thanks to karma - than OB or SL4 were.

How do you know this? Would a reputation system cause the Tea Party movement to become less wrong?

The n-Category Café and Timothy Gowers's blog do not employ a reputation system like Less Wrong's. It's the people who make some places better than others.

It is trivially true that the lesswrong reputation system would fail if there were more irrational than rational people here, where 'rational' is defined according to your criteria (not implying that your criteria are wrong).

I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because many people don't like being voted down according to unknown criteria rather than being engaged in argumentative discourse.

And as I wrote before, the current reputation system favors non-technical posts. More technical posts often don't receive the same number of upvotes as non-technical posts, and technical posts that turn out to be wrong are downvoted more extensively. This does discourage rigor and gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.

Comment author: shminux 27 December 2011 07:22:07PM 3 points [-]

A reputation system necessarily favors the status quo.

This community consists mostly of aspiring rationalists, not professionals in philosophy/decision theory/psychology, though there are a number of experts around. The accuracy of technical posts is hard to judge, so people probably go by post quality, gut feeling, and how well a post conforms to what has previously been agreed upon as correct. Plus the usual points for humor, minus a penalty for poor spelling/grammar/style.

An example of a reputation system that works for a technical forum is MathOverflow, though partly because the mods there are quite ruthless about off-topic posts.

I am quite sure that a lot of valuable opinions are lost due to the current reputation system

...which likely means that this forum is not the right one for them. LW is open enough to resist "evaporative cooling", and rapid downvoting inhibits all but expert trolling.

gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.

I think that is the idea. Educating people "about basic rationality" is a much more viable goal than doing basic research collaboratively. LW is often used as a sounding board for research write-ups, but that is probably as far as it can go. Anything more would require excluding amateurs from the discussion to reduce the noise level. I have yet to see a public forum where "important problems" are solved "collaboratively". Feel free to provide counterexamples.

Comment author: gwern 27 December 2011 07:55:38PM 1 point [-]

Would a reputation system cause the Tea Party movement to become less wrong?

Yes. They would still have their major shibboleths like Obama being a Muslim born in Kenya, but reputation systems would at least cut down on the most mouth-breathing comments.

The n-Category Café or Timothy Gowers blog do not employ a reputation system like less wrong. It's the people who make places better off than others.

People are a factor, but they are not the only factor, nor the determinative one. Code is Law.

I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because many people don't like being voted down according to unknown criteria rather than being engaged in argumentative discourse.

And that is why LW has orders of magnitude fewer comments and posts than OB or SL4 did. Wait, never mind, I meant 'more'.

This does discourage rigor and gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.

Or it discourages attempts to bamboozle with rigor. I don't remember terribly many rigorous proofs on LW, but then, I don't remember terribly many on OB or SL4 either.

Comment author: XiXiDu 27 December 2011 06:16:35PM 1 point [-]

I retracted the comment. Not sure why I made it and why I haven't used my brain more, sorry.

Counter-prediction: In ten years' time you will not have changed your mind (on this subject) at all.

Likely, because I hate reputation systems. Peer pressure is already bad enough as it is. But if a reliable study shows that reputation systems cause groups to become more rational, I will of course change my mind.

Money. Make the prediction with money. Because I want to take it.

Betting money seems to be a pretty bad idea if the bet depends on the decision of someone participating in the bet.

Comment author: buybuydandavis 23 December 2011 06:54:22PM 7 points [-]

That's just very poor consequentialism in my eyes. Instead of me pointing out the most abominable scenarios that I believe immediately follow from such a consequentialism, why don't you supply one that you think would be objectionable to others, but which you'd be willing to defend?

As for your spin on the question, while I think it is a different question than the original, I see no need to shy away from it. Some people are worth killing. That's not to say there isn't something of value in them, but choice is about tradeoffs, and I don't expect that to change with greater technology. The particular tradeoffs will change, but that there are tradeoffs will not.

And in the same way, a great many more people are not worth saving either.

Comment author: Vladimir_Nesov 24 December 2011 12:25:22PM 0 points [-]

As for your spin on the question, while I think it is a different question than the original, I see no need to shy away from it.

Sure, assuming we're clear on what the question means.

Comment author: shminux 23 December 2011 06:35:29PM 0 points [-]

Taking this argument ad absurdum: Roe vs Wade is a crime against humanity, since a fetus is potentially a person.

Comment author: Vladimir_Nesov 23 December 2011 06:40:36PM 11 points [-]

The alternatives I'm comparing are a living person dying vs. not dying. Living vs. never having lived is different and harder to evaluate.

Comment author: shminux 23 December 2011 07:22:23PM 4 points [-]

No, the alternatives you are comparing are reviving a frozen brain vs doing something potentially more useful, once the revival technology is available.

For example, if creating a new mind has positive utility some day, it's a matter of calculating what to spend (potentially still limited) resources on: creating a new happy mind (trivially easy even now, except for the "happy" part) or reviving/rejuvenating/curing/uploading/rehabilitating a grandpa stiff in a cryo tank (impossible now, and probably still much harder than the alternative even in the future).

Comment author: Vladimir_Nesov 23 December 2011 07:33:02PM *  6 points [-]

No, the alternatives you are comparing are reviving a frozen brain vs doing something potentially more useful

My comment is unrelated to cryonics; I posted it to point out the framing effects of saying "not saving lives" as compared to "killing". (Part of my motivation for posting it is that I find the mention of Eliezer's dead brother, in the context of an argument for killing people, distasteful.)

creating a new happy mind (trivially easy even now, except for the "happy" part) or reviving/rejuvenating/curing/uploading/rehabilitating a grandpa

As I said, harder to evaluate. I'm uncertain which of these particular alternatives is better (considering a hypothetical tradeoff), particularly where a new mind can be made better in some respects in a more resource-efficient way.

Comment author: shminux 23 December 2011 07:38:23PM 0 points [-]

My comment is unrelated to cryonics; I posted it to point out the framing effects of saying "not saving lives" as compared to "killing"

Ah, OK. I thought you were commenting on the merits of cryopreservation.

Comment author: Viliam_Bur 25 December 2011 04:40:00PM 2 points [-]

Taking this argument ad absurdum: Roe vs Wade is a crime against humanity, since a fetus is potentially a person.

What exactly makes it absurd?

I am not sure what units are best for measuring the value of a human life, so let's just say that the life of an average adult person has value 1. What would be your estimate of the value of a 3-month fetus, a 6-month fetus, a 9-month fetus, a newborn child, a 1/2-year-old child, a 1-year-old child, etc.?

If you say that a fetus has less value than an adult person, but still a nonzero value, for example 0.01, then killing 100 fetuses is like killing 1 adult person, and killing 100,000 fetuses is like killing 1,000 adult people. Calling the killing of 1,000 adult people a "crime against humanity" would perhaps be an exaggeration, but not exactly absurd.

If you have strong opinions on this topic, I would like to see your best attempt at estimating the shape of the "human life value" curve for fetuses and small children. At what age does killing a human organism become worse than having a proverbial dust speck in a rationalist's eye?

Comment author: TheOtherDave 25 December 2011 07:45:50PM 4 points [-]

Thousands of adults are in fact killed in auto accidents every year, and yet it seems to me very strange indeed to call auto accidents a crime against humanity.

Thousands of adults are killed in street crimes, and it seems very strange to me to call street crime a crime against humanity.

Etc., etc., etc.

I conclude that my intuitions about whether something counts as a "crime against humanity" aren't especially well calibrated, and therefore that I should be reluctant to use those intuitions as evidence when thinking about scales way outside my normal experience.

And of course, the value-to-me of an individual can vary by many orders of magnitude, depending on the individual. I would likely have chosen to allow my nephew's fetal development to continue rather than preserve the life of a randomly chosen adult, for example, but I don't generally value the development of a fetus more than an adult.

But leaving the "crimes against humanity" labeling business aside, and assuming some typical value for a fetus and an adult, then sure, if I value a developing fetus 1/N as much as I value a living adult, then I prefer to allow 1 adult to die rather than allow the development of N fetuses to be terminated.

Comment author: Normal_Anomaly 23 December 2011 06:38:56PM 0 points [-]

I think you mean "uncertain probability"?

Comment author: Vladimir_Nesov 23 December 2011 06:44:18PM *  4 points [-]

"Certain" as a figure of speech, like "ice cream of a certain flavor", not an indication of precision. (Although probabilities can well be precise...)