Hedonic_Treader comments on On Caring - Less Wrong

99 Post author: So8res 15 October 2014 01:59AM


Comment author: shminux 07 October 2014 06:50:49PM *  23 points [-]

I agree with others that the post is very nice and clear, as most of your posts are. Upvoted for that. I just want to provide a perspective not often voiced here. My mind does not work the way yours does and I do not think I am a worse person than you because of that. I am not sure how common my thought process is on this forum.

Going section by section:

  1. I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know. I cannot bring myself to care (and I don't really want to) about a random person half-way around the world, except in the non-scalable general sense that "it is sad that bad stuff happens, be it to 1 person or to 1 billion people". I care about the humanity surviving and thriving, in the abstract, but I do not feel the connection between the current suffering and future thriving. (Actually, it's worse than that. I am not sure whether humanity existing, in Yvain's words, in a 10m x 10m x 10m box of computronium with billions of sims is much different from actually colonizing the observable universe (or the multiverse, as the case might be). But that's a different story, unrelated to the main point.)

  2. No disagreement there, the stakes are high, though I would not say that a thriving community of 1000 is necessarily worse than a thriving community of 1 googolplex, as long as their probability of long-term survival and thriving is the same.

  3. I occasionally donate modest amounts to this cause or that, if I feel like it. I don't think I do what Alice, Bob or Christine did, and donate out of pressure or guilt.

  4. I spend (or used to spend) a lot of time helping out strangers online with their math and physics questions. I find it more satisfying than caring for oiled birds or stray dogs. Like Daniel, I see the mountain ridges of bad education all around, of which the students asking for help on IRC are just tiny pebbles. Unlike Daniel, I do not feel that I "can't possibly do enough". I help people when I feel like it and I don't pretend that I am a better person because of it, even if they thank me profusely after finally understanding how free-body diagrams work. I do wish someone more capable worked on improving the education system so it runs at better than 1% efficiency, and I have seen isolated cases of it, but I do not feel that it is my problem to deal with. Wrong skillset.

  5. I have read a fair amount of EA propaganda, and I still do not feel that I "should care about people suffering far away", sorry. (Not really sorry, no.) It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless. I am happy that other people care, in case I am in the situation where I need their help. I am also happy that some people give money to those who care, for the same reason. I might even chip in, if it hits close to home.

  6. I do not feel that I would be a better person if I donated more money or dedicated my life to solving one of the "biggest problems", as opposed to doing what I am good at, though I am happy that some people feel that way; humanity's strength is in its diversity.

  7. Again, one of the main strengths of humankind is its diversity, and the Bell-curve outliers like "Gandhi, Mother Teresa, Nelson Mandela" tend to have more effect than those of us within 1 standard deviation. Some people address "global poverty", others write poems, prove theorems, shoot the targets they are told to, or convince other people to do what they feel is right. No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.

  8. I don't feel the weight of the world. Because it does not weigh on me.

Note: having reread what I wrote, I suspect that some people might find it kind of Objectivist. I actually tried reading Atlas Shrugged and quit after 100 pages or so, getting extremely annoyed by the author belaboring an obvious and trivial point over and over. So I only have a vague idea what the movement is all about. And I have no interest in finding out more, given that people who find this kind of writing insightful are not ones I want to associate with.

Comment author: [deleted] 08 October 2014 07:45:32PM *  5 points [-]

Thank you for stating your perspective and opinion so clearly and honestly. It is valuable. Now allow me to do the same, and follow with a question (driven by sincere curiosity):

I do not think I am a worse person than you because of that.

I think you are.

It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless.

You are heartless.

I care about the humanity surviving and thriving, in the abstract

Here's my question, and I hope you take the time to answer as honestly as you wrote your comment:

Why?

After all the things you've declined to care about, why in the world would you care about something as abstract as "humanity surviving and thriving"? It's just an ape species, and there have already been billions of them. In addition, you clearly don't care about numbers of individuals or quality of life. And you know the heat death of the universe will kill them all off anyway, if they survive the next few centuries.

I don't mean to convince you otherwise, but it seems arbitrary - and surprisingly common - that someone who doesn't care about the suffering or lives of strangers would care about that one thing out of the blue.

Comment author: TheOtherDave 08 October 2014 09:01:02PM 11 points [-]

I can't speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn't seem at all arbitrary or puzzling to me.

I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?

It doesn't seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?

Comment author: [deleted] 09 October 2014 02:08:36AM 1 point [-]

I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?

Yes, rereading it, I think I misinterpreted response 2 as saying it doesn't matter whether a population of 1,000 people has a long future or a population of one googolplex [has an equally long future]. That is, that population size doesn't matter, just durability and survival. I thought this defeated the usual Big Future argument.

But even so, his point 5 turns it around: practically all people in the Big Future will be strangers, and if it is only "nicer" if they don't suffer (translation: their wellbeing doesn't really matter), then in what way would the Big Future matter?

I care a lot about humanity's future, but primarily because of its impact on the total amount of positive and negative conscious experiences that it will cause.

Comment author: shminux 08 October 2014 09:57:08PM 6 points [-]

...Slow deep breath... Ignore inflammatory and judgmental comments... Exhale slowly... Resist the urge to downvote... OK, I'm good.

First, as usual, TheOtherDave has already put it better than I could.

Maybe to elaborate just a bit.

First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous 'après nous, le déluge' ("after us, the flood") attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and that is predicated on humanity in general having one.

Second, a good life and a bright future for the people I care about does not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to them. Other people, including you, care a lot. Good for them.

Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people's lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it's up to them. Live and let live and such.

Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.

Comment author: Weedlayer 09 October 2014 08:32:50AM 9 points [-]

I'm actually having difficulty understanding the sentiment "I get annoyed at those who think their morals are better than mine". I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn't everyone think their morals are better than other people's?

That's the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don't really care and certainly don't think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don't, then of course I think my morality is better. Morals differ from tastes in that people believe that it's not just different, but WRONG to not follow them. If you remove that element from morality, what's left? The sentiment "I have these morals, but other people's morals are equally valid" sounds good, all egalitarian and such, but it doesn't make any sense to me. People judge the value of things through their moral system, and saying "System B is as good as System A, based on System A" is borderline nonsensical.

Also, as an aside, I think you should avoid rhetorical statements like "call me heartless if you like" if you're going to get this upset when someone actually does.

Comment author: Lumifer 09 October 2014 02:51:42PM 2 points [-]

but doesn't everyone think their morals are better than other people's?

I don't.

Comment author: hyporational 09 October 2014 05:55:52PM 1 point [-]

Would you make that a normative statement?

Comment author: Lumifer 09 October 2014 06:06:16PM *  2 points [-]

Well, kinda-sorta. I don't think the subject is amenable to black-and-white thinking.

I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However I don't feel that people who think their morals are bad are to be admired and emulated either.

There is some similarity to the question of how smart you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn't good either.

Comment author: hyporational 09 October 2014 06:17:53PM 5 points [-]

So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems? What happens if you knowingly profess the former kind of moral system and agree with the whole statement? :)

Comment author: Lumifer 09 October 2014 06:22:27PM 0 points [-]

So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems?

In one particular aspect, yes. There are many aspects.

The barber shaves everyone who doesn't shave himself..? X-)

Comment author: Weedlayer 09 October 2014 03:44:22PM 0 points [-]

So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is?

I know it's possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.

Comment author: hyporational 09 October 2014 06:06:36PM 2 points [-]

It's not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.

So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is?

So while I wouldn't murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn't seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn't seem to provide any new information.

Comment author: Weedlayer 09 October 2014 09:53:24PM 0 points [-]

There's no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"? I think this is basically the point EY makes about the "psychological unity of humankind".

Of course, this dream goes out the window with UFAI and aliens. Let's hope we don't have to deal with those.

Comment author: Decius 15 October 2014 07:43:27AM 0 points [-]

Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"?

Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality "Has empathy and values survival and survival is impaired by murder".

We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and "Has a moral system that prohibits murder" is a quality that successfully creates offspring that typically have the quality "Has a moral system that prohibits murder".

The different quality "Commits wanton murder" is less successful at creating offspring in modern society, because convicted murderers don't get to teach children that committing wanton murder is something to do.

Comment author: [deleted] 11 October 2014 09:32:09AM 0 points [-]

I think those similarities are much weaker than EY appears to suggest; see e.g. “Typical Mind and Politics”.

Comment author: Lumifer 09 October 2014 03:55:39PM 2 points [-]

You are confused between two very different statements:

(1) I don't think that my morals are (always, necessarily) better than other people's.

(2) I have no basis whatsoever for judging morality and/or behavior of other people.

Comment author: Weedlayer 09 October 2014 05:07:11PM 1 point [-]

What basis do you have for judging others' morality other than your own morality? And if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own?

I mean, it's the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I'm evaluating others' beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.

Which of course is similar to the argument people sometimes bring up about "moral progress", claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).

My question though is that how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?

Comment author: Lumifer 09 October 2014 05:29:35PM *  2 points [-]

if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own?

No, I don't think so.

Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.

When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.

Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However there's no reason to consider your own value system to be the very best there is, especially given that it's your conscious mind that makes such comparisons, but part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals you will evaluate them as just fine, but not necessarily perfect.

Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.

Comment author: Weedlayer 09 October 2014 09:40:08PM *  2 points [-]

This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I'll make two points and see if they move the conversation forward:

1: "There's no reason to consider your own value system to be the very best there is"

This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren't the absolute best there is. The same logic holds true for morals. I know I'm making some mistakes, but I don't know where those mistakes are. On any individual issue, I think I'm right, and therefore logically if someone disagrees with me, I think they're wrong. This is what I mean by "thinking that one's own morals are the best". I know I might not be right about everything, but I think I'm right about every single issue, even the ones I might really be wrong about. After all, if I was wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary. I have many beliefs I consider to be only approximations, which I consider to be only the best of any explanation I have heard so far. Not perfect, but "least wrong").

Which brings me to point 2.

2: "Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were."

I'm absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I've been equivocating between the two, that's why). I know I can't alter my moral beliefs on a whim, but that's because I have no reason to want to. Consider self-modifying to want to murder innocents. I can't do this, primarily because I don't want to, and CAN'T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn't get a million dollars?) I suppose modifying instrumental values to terminal values (which morals are) to enhance motivation is a possible reason, but that's an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying "You should do X". So wishing I had a different morality is like saying "I wish I though I should do X". What does that mean?

Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.

In short, I'm with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there's a discontinuity either in my reading or your writing.

Comment author: pianoforte611 08 October 2014 11:55:01PM *  6 points [-]

It's interesting because people will often accuse a low-status out-group of "thinking they are better than everyone else".* But I had never actually seen anyone claim that their in-group is better than everyone else; the accusation was always made of straw... until I saw Hedonic_Treader's comment.

I do sort of understand the attitude of the utilitarian EAs. If you really believe that everyone must value everyone else's life equally, then you'd be horrified by people's brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism.

*I've seen people make this accusation against nerds, atheists, fedora wearers, feminists, left-leaning persons, Christians, etc.

Comment author: gjm 09 October 2014 12:41:27PM *  8 points [-]

the accusation was always made of straw

I expect that's correct, but I'm not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:

  • A thinks her group is better than others.
  • A's thinking this is obvious enough for B to be able to discern it with some confidence.
  • A never explicitly says that her group is better than others.

and I think people who say (e.g.) that atheists think they're smarter than everyone else would claim that that's what's happening.

I repeat, I agree that these accusations are usually pretty strawy, but it's a slightly more complicated variety of straw than simply claiming that people have said things they haven't. More specifically, I think the usual situation is something like this:

  • A really does think that, to some extent and in some respects, her group is better than others.
  • But so does everyone else.
  • B imagines that he's discerned unusual or unreasonable opinions of this sort in A.
  • But really he hasn't; at most he's picked up on something that he could find anywhere if he chose to look.

[EDITED to add, for clarity:] By "But so does everyone else" I meant that (almost!) everyone thinks that (many of) the groups they belong to are (to some extent and in some respects) better than others. Most of us mostly wouldn't say so; most of us would mostly agree that these differences are statistical only and that there are respects in which our groups are worse too; but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that's partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).

Comment author: CCC 09 October 2014 01:51:19PM 1 point [-]

I do imagine that the first situation is more common, in general, than the second.

This is entirely because of the point:

  • But so does everyone else.

A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.

Comment author: gjm 09 October 2014 01:54:27PM 2 points [-]

Sorry, I wasn't clear enough. By "so does everyone else" I meant "everyone else considers the groups they belong to to be, to some extent and in some respects, better than others".

Comment author: CCC 09 October 2014 06:17:58PM *  1 point [-]

Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I'm not sure that it's actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.

Comment author: [deleted] 11 October 2014 09:38:12AM 0 points [-]

but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that's partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).

Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream -- by my standards; it doesn't follow that I also believe it's better by your standards or by objective standards (whatever they might be) and feel smug about it.

Comment author: gjm 11 October 2014 12:33:28PM 2 points [-]

Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance.

(Even food preferences -- a thing so notoriously subjective that the very word "taste" is used in other contexts to indicate something subjective and personal -- can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)

Comment author: [deleted] 09 October 2014 01:58:47AM -2 points [-]

Perhaps to avoid confusion, my comment wasn't intended as an in-group out-group thing or even as a statement about my own relative status.

"Better than" and "worse than" are very simple relative judgments. If A rapes 5 victims a week and B rapes 6, A is a better person than B. If X donates 1% of his income potential to good charities and Y donates 2%, X is a worse person than Y (all else equal). It's a rather simple statement of relative moral status.

Here's the problem: If we pretend - like some in the rationalist community do - that all behavior is morally equivalent and all morals are equal, then there is no social incentive to behave prosocially when possible. Social feedback matters and moral judgments have their legitimate place in any on-topic discourse.

Finally, caring about not caring is self-defeating: one cannot logically judge judgmentalism without being judgmental oneself.

Comment author: Lumifer 09 October 2014 04:43:07AM 2 points [-]

If we pretend - like some in the rationalist community do - that all behavior is morally equivalent and all morals are equal

That's a strawman. I haven't seen anyone say anything like that. What some people do say is that there is no objective standard by which to judge various moralities (that doesn't make them equal, by the way).

there is no social incentive to behave prosocially when possible

Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.

moral judgments have their legitimate place in any on-topic discourse.

Why is that?

Comment author: [deleted] 11 October 2014 09:42:18AM 1 point [-]

Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.

What do you mean by “morality”? Were the incentives the Heartstone wearer was facing when deciding whether to kill the kitten about morality, or not?

Comment author: Lumifer 14 October 2014 05:40:56PM 1 point [-]

By morality I mean a particular part of somebody's system of values. Roughly speaking, morality is the socially relevant part of the value system (though that's not a hard definition, but rather a pointer to the area where you should search for it).

Comment author: hyporational 09 October 2014 05:38:51AM 0 points [-]

It seems self-termination was the most altruistic way of ending the discussion. A tad over the top, I think.

Comment author: Jiro 09 October 2014 02:04:05AM 1 point [-]

One can judge "judgmentalism on set A" without being "judgemental on set A" (while, of course, still being judgmental on set B).

Comment author: gjm 08 October 2014 11:33:34PM 10 points [-]

inflammatory and judgmental comments

It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to "call [you] heartless", you should not then complain of someone else's "inflammatory and judgmental comments" when they take you up on the offer.

And it doesn't seem to me that Hedonic_Treader's response was particularly thoughtless or disrespectful.

(For what it's worth, I don't think your comments indicate that you're heartless.)

Comment author: Bugmaster 08 October 2014 11:20:18PM 2 points [-]

You are saying that shminux is "a worse person than you" and also "heartless", but I am not sure what these words mean. How do you measure which person is better as compared to another person ? If the answer is, "whoever cares about more people is better", then all you're saying is, "shminux cares about fewer people because he cares about fewer people". This is true, but tautologically so.

Comment author: roryokane 16 October 2014 07:27:40PM 0 points [-]

All morals are axioms, not theorems, and thus all moral claims are tautological.

Whatever morals we choose, we are driven to choose them by the morals we already have – the ones we were born with and raised to have. We did not get our morals from an objective external source. So no matter what your morals, if you condemn someone else by them, your condemnation will be tautological.

Comment author: lackofcheese 17 October 2014 02:57:07PM 3 points [-]

I don't agree.

Yes, at some level there are basic moral claims that behave like axioms, but many moral claims are much more like theorems than axioms.

Derived moral claims also depend upon factual information about the real world, and thus they can be false if they are based on incorrect beliefs about reality.

Comment author: Jiro 08 October 2014 10:40:10PM 2 points [-]

It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless. You are heartless.

Then every human being in existence is heartless.

Comment author: CBHacking 29 November 2014 01:21:12PM 0 points [-]

I disagree. There are degrees of caring, and appropriate responses to each. Admittedly, "nice" is a term with no specific meaning, but most of us can probably place it on a relative ranking with other positive terms, such as "non-zero benefit" or "decent" (which I, and probably most people, would rank below "nice") and "excellent", "wonderful", "the best thing in the world" (in the hyperbolic "best thing I have in mind right now" sense), or "literally, after months of introspection, study, and multiplying, I find that this is the best thing which could possibly occur at this time"; I suspect most native English speakers would agree that those are stronger sentiments than "nice". I can certainly think of things that are more important than merely "nice" yet less important than a reduction in death and suffering.

For example, I would really like a Tesla car, with all the features. In the category of remotely-feasible things somebody could actually give me, I actually value that higher than there's any rational reason for. On the other hand, if somebody gave me the money for such a car, I wouldn't spend it on one... I don't actually need a car, in fact don't have a place for it, and there are much more valuable things I could do with that money. Donating it to some highly-effective charity, for example.

Leaving aside the fact that "every human being in existence" appears to require excluding a number of people who really are devoting their lives to bringing about reductions in suffering and death, there are lots of people who would respond to a cessation of some cause of suffering or death more positively than to simply think it "nice". Maybe not proportionately more positively - as the post says, our care-o-meters don't scale that far - but there would still be a major difference. I don't know how common, in actual numbers, that reaction is vs. the "It would be nice" reaction (not to mention other possible reactions), but it is absolutely a significant number of people even among those who aren't devoting their whole life towards that goal.

Comment author: Jiro 29 November 2014 06:37:20PM 0 points [-]

Pretty much every human being in existence who thinks that stopping death and suffering is a good thing still spends resources on themselves and their loved ones beyond the bare minimum needed for survival. They could spend some money to buy poor Africans malaria nets, but they have something, which is not death or suffering, that they consider more important than spending that money to alleviate death and suffering.

In that sense, it's nice that death and suffering are alleviated, but that's all.

it is absolutely a significant number of people even among those who aren't devoting their whole life towards that goal

"Not devoting their whole life towards stopping death and suffering" equates to "thinks something else is more important than stopping death and suffering".

Comment author: CBHacking 01 December 2014 08:43:25AM *  0 points [-]

False dichotomy. You can have (many!) things which are more than merely "nice" yet less than the thing you spend all available resources on. To take a well-known public philanthropist as an example, are you seriously claiming that because he does not spend every cent he has eliminating malaria as fast as possible, Bill Gates' view on malaria eradication is that "it's nice that death and suffering are alleviated, but that's all"?

We should probably taboo the word "nice" here, since we seem likely to be operating on different definitions of it. To rephrase my second sentence of this post, then: You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.

Also, your final sentence is not logically consistent. To show that a particular goal is the most important thing to you, you only need to devote more resources (including time) to it than to any other particular goal. If you allocate 49% of your resources to ending world poverty, 48% to being a billionaire playboy, and 3% to personal/private uses that are not strictly required for either of those goals, that is probably not the most efficient possible manner to allocate your resources, but there is nothing you value more than ending poverty (a major cause of suffering and death) even though it doesn't even consume a majority of your resources. Of course, this assumes that the value of your resources is fixed wherever you spend them; in the real world, the marginal value of your investments (especially in things like medicine) goes down the more resources you pump into them in a given time frame; a better use might be to invest a large chunk of your resources into things that generate more resources, while providing as much towards your anti-suffering goals as they can efficiently use at once.

Comment author: gjm 01 December 2014 12:39:49PM 3 points [-]

Let's be a bit more concrete here. If you devote approximately half your resources to ending poverty and half to being a billionaire playboy, that means something like this: you value saving 10000 Africans' lives less than you value having a second yacht. I'm sure that second yacht is fun to have, but I think it's reasonable to categorize something that you value less than 1/10000 of the increment from "one yacht" to "two yachts" as no more important than "nice".

This is of course not a problem unique to billionaire playboys, but it's maybe a more acute problem for them; a psychologically equivalent luxury for an ordinarily rich person might be a second house costing $1M, which corresponds to 1/100 as many African lives and likely brings a bigger gain in personal utility; one for an ordinarily not-so-rich person might be a second car costing $10k, another 100x fewer dead Africans and (at least for some -- e.g., two-income families living in the US where getting around without a car can be a biiiig pain) a considerable gain in personal utility. There's still something kinda indecent about valuing your second car more than a person's life, but at least to my mind it's substantially less indecent than valuing your second megayacht more than 10000 people's lives.

Suppose I have a net worth of $1M and you have a net worth of $10B. Each of us chooses to devote half our resources to ending poverty and half to having fun. That means that I think $500k of fun-having is worth the same as $500k of poverty-ending, and you think $5B of fun-having is worth the same as $5B of poverty-ending. But $5B of poverty-ending is about 10,000 times more poverty-ending than $500k of poverty-ending -- but $5B of fun-having is nowhere near 10,000 times more fun than $500k of fun-having. (I doubt it's even 10x more.) So in this situation it is reasonable to say that you value poverty-ending much less, relative to fun-having, than I do.

Pedantic notes: I'm supposing that your second yacht costs you $100M and that you can save one African's life for $10k; billionaires' yachts are often more expensive and the best estimates I've heard for saving poor people's lives are cheaper. Presumably if you focus on ending poverty rather than on e.g. preventing malaria then you think that's a more efficient way of helping the global poor, which makes your luxury trade off against more lives. I am using "saving lives" as a shorthand; presumably what you actually care about is something more like time-discounted aggregate QALYs. Your billionaire playboy's luxury purchase might be something other than a yacht. Offer void where prohibited by law. Slippery when wet.

And, for the avoidance of doubt, I strongly endorse devoting half your resources to ending poverty and half to being a billionaire playboy, if the alternative is putting it all into being a billionaire playboy. The good you can do that way is tremendous, and I'd take my hat off to you if I were wearing one. I just don't think it's right to describe that situation by saying that poverty is the most important thing to you.
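The arithmetic above can be sketched in a few lines (the $100M yacht price, $10k cost-per-life, and net-worth figures are the comment's own assumed numbers, not real cost-effectiveness data):

```python
# Rough sketch of the trade-off described above. All figures are the
# comment's own assumptions, not real cost-effectiveness estimates.

COST_PER_LIFE = 10_000       # assumed cost to save one life
YACHT_PRICE = 100_000_000    # assumed price of a second megayacht

lives_per_yacht = YACHT_PRICE // COST_PER_LIFE
print(f"second yacht ~ {lives_per_yacht:,} lives forgone")

# Two donors each split their net worth 50/50 between fun and charity.
net_worths = {"ordinary millionaire": 1_000_000,
              "billionaire playboy": 10_000_000_000}
for who, worth in net_worths.items():
    charity_budget = worth // 2
    lives = charity_budget // COST_PER_LIFE
    print(f"{who}: ${charity_budget:,} to charity ~ {lives:,} lives")
```

The billionaire's charity budget buys 10,000 times the poverty-ending of the millionaire's, while $5B of fun-having is nowhere near 10,000 times as fun as $500k of it, which is the asymmetry the comment points at.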

Comment author: Jiro 01 December 2014 03:49:16PM 1 point [-]

Thank you, that's what I would have said.

Comment author: RichardKennaway 01 December 2014 12:24:57PM 0 points [-]

You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.

What about the argument from marginal effectiveness? I.e. unless the best thing for you to work on is so small that your contribution reduces its marginal effectiveness below that of the second-best thing, you should devote all of your resources to the best thing.

I don't myself act on the conclusion, but I also don't see a flaw in the argument.
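The marginal-effectiveness argument can be made concrete with a toy allocation model (the utility curves and numbers below are invented purely for illustration, not taken from the thread):

```python
import math

# Toy version of the marginal-effectiveness argument. Each cause has
# sqrt-shaped (diminishing) returns; "best" starts out ten times as
# effective per dollar as "second_best". These curves are made up.
SCALE = {"best": 10.0, "second_best": 1.0}

def marginal_utility(cause, amount):
    # Derivative of SCALE * sqrt(x), evaluated just past `amount` dollars.
    return SCALE[cause] / (2 * math.sqrt(amount + 1))

budget = 50
allocation = {"best": 0, "second_best": 0}
for _ in range(budget):
    # Greedy: each dollar goes to the cause with the higher marginal return.
    pick = max(allocation, key=lambda c: marginal_utility(c, allocation[c]))
    allocation[pick] += 1

print(allocation)  # {'best': 50, 'second_best': 0}
```

With this budget, the best cause's marginal return never falls below the second-best cause's starting return, so the greedy allocator puts everything into it, which is exactly the argument's conclusion. Only a budget large enough to equalize the two margins would justify splitting.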