
Comment author: Lumifer 20 May 2015 07:30:17PM *  1 point [-]

There are two issues here.

One is tracking of individual contributions. When a charity says "A $5000 donation saves one life" they do not mean that your particular $5000 will save one specific life. Instead they divide their budget of $Z by their estimate of Y lives saved and produce a dollars/life number. This is an average and doesn't have much to do with you personally other than that you were one data point in the set from which this average was calculated.

"I contributed to the common effort which resulted in preventing Y deaths from malaria" is a more precise formulation which, of course, doesn't sound as good as "I saved X lives".

Two is the length of the causal chain. If you, with your own hands, pull a drowning kid out of the water, that's one life saved with a causal chain of length 1. If you give money to an organization which finances another organization which provides certain goods for a third organization to distribute with the help of a bunch of other organizations, the causal chain is long, and the longer it goes, the fuzzier it gets.

As always, look at incentives. Charity fundraising is effectively advertising, with greater social latitude to use emotional manipulation. One strand in that manipulation is to make the donor feel a direct emotional connection, with "direct" being the key word. That's why you have "Your donation saves lives!" copy next to a photo of an undernourished black or brown kid (preferably a girl) looking at the camera with puppy eyes.

Comment author: jkaufman 27 May 2015 02:41:53PM *  0 points [-]

When a charity says...

If someone is saying "I saved 10 lives" because they gave $500 to a charity that advertises a cost per life saved of $50, then yes, that's very different from actually saving lives. But the problem is that charities' reports of their cost effectiveness are ridiculously exaggerated, and you just shouldn't trust anything they say.

Instead they divide their budget of $Z by their estimate of Y lives saved and produce a dollars/life number.

What we want are marginal costs, not average costs, and these are what organizations like GiveWell try to estimate.

the causal chain is long and the longer it goes, the fuzzier it gets

Yes, this is real. But we're ok with assigning credit along longish causal chains in many domains; why exclude charity?
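
As a concrete illustration of the marginal-versus-average distinction above, here is a minimal sketch with entirely made-up numbers; the program costs and lives-saved figures below are hypothetical, not estimates from any real charity or from GiveWell.

```python
# Toy illustration (made-up numbers): average vs. marginal cost per life.
# If a charity funds its most cost-effective programs first, each
# additional dollar buys less than the historical average suggests.

# Hypothetical programs, ordered from most to least cost-effective:
# (budget in dollars, lives saved)
programs = [(100_000, 40), (100_000, 20), (100_000, 10)]

total_budget = sum(cost for cost, _ in programs)
total_lives = sum(lives for _, lives in programs)

average_cost = total_budget / total_lives          # the advertised dollars/life figure
marginal_cost = programs[-1][0] / programs[-1][1]  # what an additional donation buys

print(f"average cost per life:  ${average_cost:,.0f}")   # ~$4,286
print(f"marginal cost per life: ${marginal_cost:,.0f}")  # $10,000
```

The gap between the two numbers is why marginal estimates, rather than advertised averages, are the relevant figure for a donor deciding where an extra dollar goes.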

Comment author: Lumifer 20 May 2015 03:21:21PM 2 points [-]

what is the ideal length of tenure at a company?

A rather important question here is what's "ideal" and from whose point of view? From the point of view of the company, sure, you want some churn, but I don't know what the company would correspond to in the discussion of the aging of humanity. You're likely thinking about "society", but, as opposed to companies, societies do not and should not optimize for profit (or even GDP) at any cost. It's not that hard to get to the "put your old geezers on ice floes and push them off into the ocean" practices.

The main issue is that people tend to fixate some on what they learn when they're younger, so if people get much older on average then it would be harder to make progress.

That's true; as the well-known paraphrase of Max Planck puts it, "Science advances one funeral at a time".

However, it also depends on what "live forever" means. Being stabilized at the biological age of 70 would probably lead to very different consequences from being stabilized at the biological age of 25.

Comment author: jkaufman 20 May 2015 06:27:25PM -1 points [-]

Being stabilized at the biological age of 70 would probably lead to very different consequences from being stabilized at the biological age of 25.

This probably also depends a lot on the particulars of what "stabilized at the biological age of 25" means. Most 25-year-olds are relatively open to experience, but does that come from being biologically younger or just from having had less time to become set in their ways?

This also seems like something that may be fixable with better pharma technology if we can figure out how to temporarily put people into a more childlike exploratory open-to-experience state.

Comment author: Lumifer 19 May 2015 02:22:23PM 2 points [-]

I do not accept that a dollar is a unit of caring.

I do not think that contributing money to an organization which runs programs which statistically save lives can be legitimately called "I saved X lives". Compare: "I bought some war bonds so I can say I personally killed X enemy soldiers".

I think that strutting one's charitable activities is in very poor taste.

Comment author: jkaufman 20 May 2015 06:07:30PM 2 points [-]

What would you use "I saved X lives" to mean if not "compared to what I would have done otherwise, X more people are alive today"?

(I don't at all like the implied precision in giving a specific number, though.)

Comment author: Lumifer 18 May 2015 03:46:27PM 0 points [-]

there's a lot of room between "longer is probably better" and "effectively unlimited is ideal".

Yes, but are you saying there's going to be a maximum somewhere in that space -- some metric will flip over and start going down? What might that metric be?

Comment author: jkaufman 20 May 2015 02:40:18PM 0 points [-]

As I wrote in that post, there are some factors that lead to us thinking longer lives would be better, and others that shorter would be better.

Maybe this is easier to think about with a related question: what is the ideal length of tenure at a company? Do companies do best when all their employees are there for life, or is it helpful to have some churn? (Ignoring that people can come in with useful relevant knowledge they got working elsewhere.) Clearly too much churn is very bad for the company, but introducing new people to your practices and teaching them helps you adapt and modernize, while if everyone has been there forever it can be hard to adjust to changing situations.

The main issue is that people tend to fixate some on what they learn when they're younger, so if people get much older on average then it would be harder to make progress.

Comment author: Lumifer 16 May 2015 02:40:59AM 0 points [-]

I meant this as a response specifically to

But dramatically fewer children? Much less of the total human experience spent in early learning stages? Would we become less able to make progress in the world because people have trouble moving on from what they first learned?

Comment author: jkaufman 18 May 2015 12:00:59PM *  1 point [-]

More context:

A world in which we have ended death ... may be better than the world now, but I could also see it being worse. On one hand, not having to see your friends and family die, increased institutional memory, more time to get deeply into subjects and achieve mastery, and time to really build up old strong friendships sound good. But dramatically fewer children? Much less of the total human experience spent in early learning stages? Would we become less able to make progress in the world because people have trouble moving on from what they first learned?

I don't think our current lifespan is the perfect length, but there's a lot of room between "longer is probably better" and "effectively unlimited is ideal".

Comment author: gwern 15 May 2015 03:23:52PM *  0 points [-]

I don't see a specific, well-defined question that you're trying to answer.

su3su2u1 has accused EAers of hypocrisy in not donating despite a moral philosophy centered on donating; hypocrisy is about actions inconsistent with one's own claimed beliefs, and on EA's own aggregative utilitarian premises, total dollars donated are what matter, not anything about the distribution of dollars over people.

Hence, in investigating whether EAers are hypocrites, I must be interested in totals and not internal details of how many are zeros.

(The totals aren't going to change regardless of whether you model it using mixture or hierarchical or zero-inflated distributions; and as the distribution-free tests say, the EAers do report higher median donations.)
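
For concreteness, here is a minimal sketch of the kind of distribution-free comparison described above; the choice of a Mann-Whitney U test and the simulated zero-inflated donation numbers are illustrative assumptions, not gwern's actual analysis or the real survey data.

```python
# Hypothetical donation data (many zeros, heavy right tail) -- illustrative only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Simulate zero-inflated donations: a fraction donate nothing,
# the rest donate lognormally distributed amounts.
def simulate(n, p_zero, mean_log):
    donated = rng.random(n) >= p_zero
    amounts = rng.lognormal(mean=mean_log, sigma=1.0, size=n)
    return np.where(donated, amounts, 0.0)

ea = simulate(500, p_zero=0.3, mean_log=6.5)      # hypothetical EA respondents
non_ea = simulate(500, p_zero=0.5, mean_log=6.0)  # hypothetical comparison group

# Mann-Whitney U makes no assumptions about the shape of either distribution,
# so the zeros and the long tail don't need to be modeled explicitly.
stat, p_value = mannwhitneyu(ea, non_ea, alternative="greater")
print(f"median EA donation:     {np.median(ea):,.0f}")
print(f"median non-EA donation: {np.median(non_ea):,.0f}")
print(f"U = {stat:.0f}, one-sided p = {p_value:.3g}")
```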

Comment author: jkaufman 16 May 2015 02:28:11AM 3 points [-]

on EA's own aggregative utilitarian premises, total dollars donated are what matter, not anything about the distribution of dollars over people.

This is a pretty narrow conception of EA. You can be an EA without earning to give. For example, you could carefully choose a career where you directly do good, you could work in advocacy, or you could be a student gaining career capital for later use.

Comment author: Lumifer 11 May 2015 04:23:52PM *  5 points [-]

I have a feeling a lot of discussions of life extension suffer from being conditioned on the implicit set point of what's normal now.

Let's imagine that humans are actually replicants and their lifespan runs out in their 40s. That lifespan has a "control dial" and you can turn it to extend the human average life expectancy into the 80s. Would all your arguments apply and construct a case against meddling with that control dial?

Comment author: jkaufman 16 May 2015 02:23:27AM 0 points [-]

Huh? It feels like you're responding to a common thing people say, but not to anything I've said (or believe).

Comment author: CAE_Jones 11 May 2015 10:29:11AM 2 points [-]

In Praise of Life (Let’s Ditch the Cult of Longevity)

That article would be better titled "In Praise of Death", and is a string of the usual platitudes and circularities.

I'm now curious: where are the essays that make actual arguments in favor of death? The linked article doesn't make any; it just asserts that death is OK and we're being silly for fighting it, without actually providing a reason (they cite Borges's dystopias at the end, but that paragraph has practically nothing in common with the rest of the article, which seems to assume immortality is impossible anyway).

Preference goes to arguments against Elven-style immortality (resistant but not completely immune to murder or disaster, suicide is an option, age-related disabilities are not a thing).

Comment author: jkaufman 11 May 2015 03:42:52PM 1 point [-]

Here's my argument for why death isn't the supreme enemy: http://www.jefftk.com/p/not-very-anti-death

Comment author: Pablo_Stafforini 23 April 2015 06:08:52AM 2 points [-]

Any updates?

Comment author: jkaufman 27 April 2015 06:24:49PM *  2 points [-]

I eventually got annoyed at the interruptions and stopped, but only about a month ago, 11 months after the baby was born.

http://www.jefftk.com/happiness_graph is up to date with the final samples

I think the rise up to December 2013 was mostly me getting used to the scale I was using.

The baby was born 3/26.

There's no data from periods when I was asleep or trying to sleep, which misses out on the main source of unhappiness: night-time wakings.

The period with no data is data loss from a broken phone -- with TagTime I needed to do manual backups which I didn't get around to very often. This lost data was for a chunk of my paternity leave, sadly.

The low point in late January corresponds to my mother dying; the high point before that corresponds to lots of family being around for the holidays.

Comment author: V_V 04 April 2015 12:37:11PM 1 point [-]

I'm not a big fan of decision making by conditional prediction markets (btw, "futarchy" is an obscure, non-descriptive name. Better call it something like "prophetocracy"), but I think that proponents like Robin Hanson propose that the value system is not set once and for all but regularly updated by a democratically elected government. This should avoid the failure mode you are talking about.
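
For readers unfamiliar with the mechanism under discussion, here is a toy sketch of futarchy's basic decision rule ("vote on values, bet on beliefs"); the market prices and the welfare metric are hypothetical, and real proposals include details such as called-off conditional bets that are omitted here.

```python
# Toy sketch of futarchy's decision rule (hypothetical prices; simplified).
# Each price is the market's estimate of expected national welfare, as
# measured by a democratically chosen metric, conditional on that option.
conditional_welfare_estimates = {
    "adopt_policy": 0.62,
    "status_quo": 0.55,
}

# Enact whichever option the markets predict yields higher expected welfare.
decision = max(conditional_welfare_estimates, key=conditional_welfare_estimates.get)
print(f"decision: {decision}")  # -> adopt_policy
```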

Comment author: jkaufman 06 April 2015 12:49:22PM 2 points [-]

"Futarchy" is an obscure, non-descriptive name. Better call it something like "prophetocracy"

"Futarchy" is the standard term for this governmental system. Perhaps Hanson should have chosen a different name, but that's the name its been going under for about a decade and I don't think "prophetocracy" would be an improvement.
