JonahSinick comments on Earning to Give vs. Altruistic Career Choice Revisited - Less Wrong

Post author: JonahSinick 02 June 2013 02:55AM 34 points

Comment author: JonahSinick 28 May 2013 08:28:53PM 1 point [-]

I'll also highlight another point implicit in my post: even if one assumes that there's not enough funding in the nonprofit world for the projects of highest value, there may be such funding available in other contexts (for-profit, academic, and government). This makes the argument for earning to give weaker.

I recognize that I haven't addressed the specific subject of Friendly AI research, and will do so in future posts.

Comment author: Eliezer_Yudkowsky 28 May 2013 08:32:29PM 2 points [-]

I understand if your priorities aren't our priorities. My concrete example reflex was firing, that's all.

Comment author: JonahSinick 28 May 2013 08:36:01PM 4 points [-]

I think that there's substantial overlap between my values and MIRI staff's values, and that the difference regarding the relative value of "earning to give" is epistemic rather than normative. But obviously there's a great deal more that needs to be said about the epistemic side, with reference to the concrete example of Friendly AI.

Comment author: Eliezer_Yudkowsky 28 May 2013 08:43:20PM 9 points [-]

I can imagine someone thinking that FHI was a better use of money than MIRI, or CFAR, or CSER, or the Foresight Institute, or brain-scanning neuroscience, or rapid-response vaccines, or any number of startups, but considering AMF as being in the running at all seems to require either a value difference or really really different epistemics about what affects the fate of future galaxies.

Comment author: Benja 28 May 2013 10:43:43PM 5 points [-]

Realistic amounts of difference in epistemics + the "humans had best stick to the mainline probability" heuristic seem enough (where by "realistic" I mean "of the degree actually found in the world"). I.e., I honestly believe that there are many people out there who would care like hell about the fate of future galaxies if they alieved that they had any non-vanishing chance of significantly influencing that fate (and of choosing the intervention that influences it in the desired direction).

Comment author: Eliezer_Yudkowsky 28 May 2013 11:10:16PM 1 point [-]

If you're one of 10^11 sentients to be born on Ancient Earth with a golden opportunity to influence a roughly 10^80-sized future, what exactly is a 'vanishing chance'... eh, let's all save it until later.

Comment author: Benja 28 May 2013 11:56:57PM *  10 points [-]

I meant that the alieved probability is small in absolute terms, not that it is small compared to the payoff. That's why I mentioned the "stick to the mainline probability" heuristic. I really do believe that there are many people who, if they alieved that they (or a group effort they could join) could change the probability of a 10^80-sized future by 10%, would really care; but who do not alieve that the probability is large enough to even register, as a probability; and whose brains will not attempt to multiply a not-even-registering probability with a humongous payoff. (By "alieving a probability" I simply mean processing the scenario the way one's brain processes things it assigns that amount of credence, not a conscious statement about percentages.)

This is meant as a statement about people's actual reasoning processes, not about what would be reasonable (though I did think that you didn't feel that multiplying a very small success probability with a very large payoff was a good reason to donate to MIRI; in any case it seems to me that the more important unreasonableness is requesting mountains of evidence before alieving a non-vanishing probability for weird-sounding things).

[ETA: I find it hard to put a number on the not-even-registering probability the sort of person I have in mind might actually alieve, but I think a fair comparison is, say, the "LHC will create black holes" thing -- I think people will tend to process both in a similar way, and this does not mean that they would shrug it off if somebody counterfactually actually did drop a mountain of evidence about either possibility on their head.]

Comment author: Eliezer_Yudkowsky 29 May 2013 10:48:00PM 5 points [-]

though I did think that you didn't feel that multiplying a very small success probability with a very large payoff was a good reason to donate to MIRI

Because on a planet like this one, there ought to be some medium-probable way for you and a cohort of like-minded people to do something about x-risk, and if a particular path seems low probability, you should look for one that's at least medium-probability instead.

Comment author: Benja 29 May 2013 10:57:04PM 2 points [-]

Ok, fair enough. (I had misunderstood you on that particular point, sorry.)

Comment author: Mitchell_Porter 31 May 2013 01:45:32AM 4 points [-]

If there was ever a reliable indicator that you're wrong about something, it is the belief that you are special to the order of 1 in 10^70.

Comment author: Eliezer_Yudkowsky 31 May 2013 03:36:01AM 6 points [-]

So do you believe in the Simulation Hypothesis or the Doomsday Argument, then? All attempts to cash out that refusal-to-believe end in one or the other, inevitably.

Comment author: Mitchell_Porter 31 May 2013 02:28:59PM 6 points [-]

From where I stand, it's more like arcane meta-arguments about probability are motivating a refusal-to-doubt the assumptions of a prized scenario.

Yes, I am a priori skeptical of anything which says I am that special. I know there are weird counterarguments (SIA) and I never got to the bottom of that debate. But meta issues aside, why should the "10^80 scenario" be the rational default estimation of Earth's significance in the universe?

The 10^80 scenario assumes that it's physically possible to conquer the universe and that nothing would try to stop such a conquest. Both are enormous assumptions, astronomically naive and optimistic about the cosmic prospects that await an Earth which doesn't destroy itself.

Comment author: Eliezer_Yudkowsky 31 May 2013 05:40:52PM 3 points [-]

Okay, so that's the Doomsday Argument then: Since being able to conquer the universe implies we're 10^70 special, we must not be able to conquer the universe.

Calling the converse of this an arcane meta-argument about probability hardly seems fair. You can make a case for Doomsday but it's not non-arcane.
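
For concreteness, the arithmetic behind a figure like "1 in 10^70" can be sketched in a few lines, as below; the 10^11 and 10^80 figures are the ones used upthread, and everything else is illustrative rather than a claim about the actual numbers.

    # Self-sampling arithmetic behind the "special to the order of 1 in 10^70" figure.
    # Assumes roughly 10^11 humans born so far and a roughly 10^80-observer future
    # if the universe is successfully colonized; both figures are taken from the
    # discussion above and are order-of-magnitude placeholders.

    humans_so_far = 1e11      # rough count of humans ever born
    future_observers = 1e80   # size of a colonized-universe future, as used upthread

    # Under naive self-sampling, the chance of finding yourself among the first
    # 10^11 observers out of 10^80 total:
    p_this_early = humans_so_far / future_observers
    print(p_this_early)  # ~1e-69

    # The Doomsday-style move is to read that improbability as evidence against
    # the 10^80-observer future, rather than as evidence that we really are that
    # early; the replies mentioned upthread (SIA, the Simulation Hypothesis)
    # dispute how that arithmetic should be interpreted.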

Comment author: shminux 31 May 2013 03:12:14PM 1 point [-]

I largely agree with your skepticism. I would go even farther and say that even if the 10^80 scenario happens, what we do now can only influence it by random chance, because the uncertainty in the calculations of the consequences of our actions in the near term on the far future overwhelms the calculations themselves. That said, we should still do what we think is best in the near term (defined by our estimates of the uncertainty being reasonably small), just not invoke the 10^80 leverage argument. This can probably be formalized by assuming that the prediction error grows exponentially with some relevant parameter, like time or the number of choices investigated, and calculating the exponent from historical data.
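
A minimal sketch of the formalization being gestured at here, with invented forecast-accuracy data standing in for real historical data: assume relative forecast error grows exponentially with horizon, fit the exponent, and check whether the error at the relevant horizon swamps the estimate itself.

    # Sketch: fit an exponential error-growth model to (hypothetical) historical
    # forecast accuracy, then extrapolate to a long horizon. All data invented.
    import math

    # (forecast horizon in years, observed relative error of past forecasts)
    history = [(1, 0.1), (5, 0.3), (10, 0.9), (20, 3.0)]

    # Least-squares fit of log(error) = a + b * horizon.
    n = len(history)
    xs = [h for h, _ in history]
    ys = [math.log(e) for _, e in history]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar

    def relative_error(horizon_years):
        """Extrapolated relative error of a forecast at the given horizon."""
        return math.exp(a + b * horizon_years)

    # If this is much greater than 1 at the horizon that matters, the uncertainty
    # in the calculation overwhelms the calculation, which is the claim above.
    print(relative_error(100))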

Comment author: komponisto 31 May 2013 04:33:23AM 2 points [-]

Doomsday for me, I think. Especially when you consider that it doesn't mean doomsday is literally imminent, just "imminent" relative to the kind of timescale that would be expected to create populations on the order of 10^80.

In other words, it fits with the default human assumption that civilization will basically continue as it is for another few centuries or millennia before being wiped out by some great catastrophe.

Comment author: shminux 31 May 2013 04:30:14AM 1 point [-]

Do you mind elaborating on this inevitability? It seems like there ought to be other assumptions involved. For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one. Or that they will artificially limit the number of individuals. Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts. All of these result in the rather small total number of individuals existing at any point in time.

Comment author: Eliezer_Yudkowsky 31 May 2013 05:45:34PM 3 points [-]

For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one.

Counts as Doomsday; it also doesn't work because this solar system could support vast numbers of uploads for vast amounts of time (by comparison to previous population).

Or that they will artificially limit the number of individuals.

This is a potential reply to both Doomsday and SA but only if you think that 'random individual' has more force than a similar argument from 'random observer-moment', i.e. to the second you reply, "What do you mean, why am I near the beginning of a billion-year life rather than the middle? Anyone would think that near the beginning!" (And then you have to not translate that argument back into a beginning-civilization saying the same thing.)

Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts.

...whereupon we wonder something about total 'experience mass', and, if that argument doesn't go through, why the original Doomsday Argument / SH should either.

Comment author: shminux 29 May 2013 08:26:07AM *  0 points [-]

I wonder if this argument can be made precise enough to have its premises and all the intermediate assumptions examined. I remain skeptical of any forecast that far into the future. You presumably mean your confidence in the UFAI x-risk within the next 20-100 years as the minimum hurdle to overcome, with the eternal FAI paradise to follow.

Comment author: JonahSinick 28 May 2013 08:52:04PM 1 point [-]

My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example, rather than to compare it with efforts to improve the far future of humanity.

I think that working in global health in a reflective and goal-directed way is probably better for improving global health than "earning to give" to AMF. Similarly, I think that working directly on things that bear on the long-term future of humanity is probably a better way of improving the far future of humanity than "earning to give" to efforts along these lines.

I'll discuss particular opportunities to impact the far future of humanity later on.

Comment author: Eliezer_Yudkowsky 28 May 2013 10:25:36PM 10 points [-]

My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example

That depends on what you want to know, doesn't it? As far as I know, the impact of AMF on x-risk, astronomical waste, and total utilons integrated over the future of the galaxies is very poorly researched and not at all concrete. Perhaps some other fact about AMF is concrete and robustly researched, but is it the fact I need for my decision-making?

(Yes, let's talk about this later on. I'm sorry to be bothersome but talking about AMF in the same breath as x-risk just seems really odd. The key issues are going to be very different when you're trying to do something so near-term, established, without scary ambiguity, etc. as AMF.)

Comment author: JonahSinick 29 May 2013 12:27:40AM *  8 points [-]

I'm somewhat confused by the direction that this discussion has taken. I might be missing something, but I believe that the points related to AMF that I've made are:

  1. GiveWell's explicit cost-effectiveness estimate for AMF is much higher than the cost per DALY saved implied by the figure that MacAskill cited.

  2. GiveWell's explicit estimates for the cost-effectiveness of the best giving opportunities in the field of direct global health interventions have steadily gotten lower, and by conservation of expected evidence, one can expect this trend to continue.

  3. The degree of regression to the mean observed in practice suggests that there's less variance amongst the cost-effectiveness of giving opportunities than may initially appear to be the case (a small sketch of this adjustment follows the list).

  4. By choosing an altruistic career path, one can cut down on the number of small-probability failure modes associated with what one does.
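
A minimal sketch of the adjustment behind point 3: treat each published cost-effectiveness estimate as a noisy measurement and shrink it toward a prior, so that noisier estimates regress further toward the mean. All figures below are invented for illustration; they are not GiveWell's numbers.

    # Why regression to the mean compresses apparent cost-effectiveness gaps:
    # shrink each noisy estimate toward a common prior. Figures are made up.

    def shrink(estimate, estimate_sd, prior_mean, prior_sd):
        """Posterior mean for a normal prior combined with a normal measurement."""
        w = prior_sd ** 2 / (prior_sd ** 2 + estimate_sd ** 2)  # weight on the estimate
        return w * estimate + (1 - w) * prior_mean

    # Hypothetical "effectiveness multipliers" relative to a typical charity,
    # both measured with large uncertainty.
    prior_mean, prior_sd = 1.0, 3.0
    flashy = shrink(50.0, estimate_sd=20.0, prior_mean=prior_mean, prior_sd=prior_sd)
    modest = shrink(5.0, estimate_sd=20.0, prior_mean=prior_mean, prior_sd=prior_sd)
    print(flashy, modest)  # the raw 10:1 gap compresses to roughly 2:1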

I don't remember mentioning AMF and x-risk reduction together at all. I recognize that it's in principle possible that the "earning to give" route is better for x-risk reduction than it is for improving global health, but I believe the analogy between the two domains is sufficiently strong that my remarks on AMF have relevance (on a meta-level, not on an object level).

Comment author: Eliezer_Yudkowsky 29 May 2013 01:36:04AM 6 points [-]

Yeah, I also have the feeling that I'm questioning you improperly in some fashion. I'm mostly driven by a sense that AMF is very disanalogous to the choices that face somebody trying to optimize x-risk charity (or rather total utilons over all future time, but x-risk seems to be the word we use for that nowadays). It seems though that we're trying to have a discussion in an ad-hoc fashion that should be tabled and delayed for explicit discussion in a future post, as you say.

Comment author: loup-vaillant 29 May 2013 12:48:24PM *  6 points [-]

If I may list some differences I perceive between AMF and MIRI:

  • AMF's impact is quite certain. MIRI's impact feels more like a long shot, or even a pipe dream.
  • AMF's impact is sizeable. MIRI's potential impact is astronomical.
  • AMF's impact is immediate. MIRI's impact is long term only.
  • AMF has photos of children. MIRI has science fiction.
  • In mainstream circles, donating to AMF gives you pats on the back, while donating to MIRI gives you funny looks.

Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multiply. Which is probably why I'm currently giving a little money to Greenpeace, despite being increasingly certain that it's far, far from the best choice.
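
A minimal sketch of the "shut up and multiply" contrast being described, with every number invented purely to show the shape of the comparison rather than to estimate either organization's actual impact:

    # Near-mode certainty vs. a tiny alieved probability of an astronomical payoff.
    # All numbers are invented for illustration.

    # Near-mode option: high confidence, moderate payoff (in arbitrary units).
    p_certain, payoff_certain = 0.95, 300

    # Far-mode option: a probability too small to "register", astronomical payoff.
    p_longshot, payoff_longshot = 1e-10, 1e80

    ev_certain = p_certain * payoff_certain
    ev_longshot = p_longshot * payoff_longshot

    # Explicit multiplication makes the long shot dominate by an enormous margin;
    # a brain that rounds the tiny probability down to zero gets the opposite
    # answer, which is the near-mode pull described above.
    print(ev_certain, ev_longshot)  # 285.0 vs. ~1e70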

Comment author: elharo 29 May 2013 01:59:43PM 1 point [-]

One more difference:

AMF's impact is very likely to be net positive for the world under all reasonable hypotheses.

MIRI appears to me to have a chance to be massively net negative for humanity. I.e. if AI of the level they predict is actually possible, MIRI might end up creating or assisting in the creation of UFAI that would not otherwise be created, or perhaps not created as soon.

Comment author: ESRogs 30 May 2013 02:15:08AM 0 points [-]

by conservation of expected evidence, one can expect this trend to continue

Not really related to the current discussion, but I want to make sure I understand the above statement. Is this assuming that the trend has not already been taken into account in forming the estimates?

Comment author: JonahSinick 30 May 2013 04:33:16AM 1 point [-]

Yes — the cost-effectiveness estimate has been adjusted every time a new issue has arisen, but on a case by case basis, without an attempt to extrapolate based on the historical trend.
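
A minimal sketch of the difference between the two update styles, with a hypothetical revision history (none of these are real GiveWell figures):

    # Case-by-case updating vs. extrapolating the historical trend, in toy form.
    # Hypothetical history of cost-per-outcome estimates, each revised upward as
    # a new issue surfaced.
    revisions = [500, 800, 1300, 2100, 3400]

    # Case-by-case: take the latest number at face value.
    latest = revisions[-1]

    # Trend-aware: each revision has multiplied the previous estimate by roughly
    # a constant factor, so expect at least part of that trend to continue.
    ratios = [b / a for a, b in zip(revisions, revisions[1:])]
    avg_ratio = sum(ratios) / len(ratios)
    extrapolated = latest * avg_ratio

    # Conservation of expected evidence: if you already expect the estimate to be
    # revised upward, that expectation belongs in today's estimate.
    print(latest, round(extrapolated))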

Comment author: MichaelVassar 29 May 2013 01:19:01PM 0 points [-]

I tend to think that if one can make a for-profit entity, that's the best sort of vehicle for pursuing most tasks, though occasionally churches or governments have some value too.