Eliezer_Yudkowsky comments on Earning to Give vs. Altruistic Career Choice Revisited - Less Wrong

34 Post author: JonahSinick 02 June 2013 02:55AM

Comment author: Eliezer_Yudkowsky 28 May 2013 05:48:46PM 33 points [-]

The top considerations that come into play when I advise someone whether to earn-to-give or work directly on x-risk look like this:

1) Does this person have a large comparative advantage at the direct problem domain? Top-rank math talent can probably do better at MIRI than at a hedge fund, since there are many mathematical talents competing to go into hedge funds and no guarantee of a good job, and the talent we need for inventing new basic math does not translate directly into writing the best quantitative-trading machine learning programs the fastest.

2) Is this person going to be able to stay motivated if they go off on their own to earn-to-give, without staying plugged into the community? Alternatively, if the person's possible advantage is at a task that requires a lot of self-direction, will they be able to stay on track without constant external labor to keep them there, since that kind of independent job is much harder to stick at than a 9-to-5 office job with supervision and feedback and cash bonuses?

Every full-time employee at a nonprofit requires at least 10 unusually generous donors or 1 exceptionally generous donor to pay their salary. For any particular person wondering how they should help, this implies a strong prior bias toward earning-to-give. There are others competing to have the best advantage for the nonprofit's exact task, and there are also thousands of job opportunities out there competing to be the maximally-earning use of your exact talents - cases where direct-task labor is a better fit than earning-to-give should logically be rare, and they are.
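The implied donor arithmetic can be sketched as follows; the dollar figures below are illustrative assumptions, not numbers from the comment:

```python
# Back-of-the-envelope donor arithmetic for funding one nonprofit employee.
# All dollar figures are illustrative assumptions.

def donors_needed(employee_cost, donation_per_donor):
    """Number of donors required to cover one employee's fully loaded cost,
    rounded up (a fractional donor is still one more real person)."""
    return -(-employee_cost // donation_per_donor)

employee_cost = 70_000           # assumed salary plus overhead, USD/yr
unusually_generous = 7_000       # assumed "unusually generous" annual gift
exceptionally_generous = 70_000  # assumed "exceptionally generous" annual gift

print(donors_needed(employee_cost, unusually_generous))      # 10
print(donors_needed(employee_cost, exceptionally_generous))  # 1
```

Under these assumed figures, one employee soaks up the entire giving of ten unusually generous donors or one exceptionally generous one, which is what drives the prior toward earning-to-give.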

The next-largest issue is motivation, and here again there are two sides to the story. The law student who goes in wanting to be an environmentalist (sigh) and comes out of law school accepting the internship with the highest-paying firm is a common anecdote, though now that I come to write it down, I don't particularly know of any gathered data. Earning to give thus carries its own motivational risk: the person may never actually give. Conversely, a lot of the most important work at the most efficient altruistic organizations is work that requires self-direction, which is also demanding of motivation.

I should pause here to remark that if you constrain yourself to 'straightforward' altruistic efforts in which the work done is clearly understandable and repeatable and everyone agrees on how wonderful it is, you will of course be constraining yourself very far away from the most efficient altruism - just like a grant committee that only wants to fund scientific research with a 100% chance of paying off in publications and prestige, or a VC that only wants to fund companies certain to look like defensible decisions, or someone who constrains their investments to assets with almost no risk of going down. You will end up doing things that are nearly certain never to appear to future historians as a decisive factor in the history of Earth-originating intelligent life. Working on things that might actually be decisive requires tolerance for not just risk but scary ambiguity: you will end up in mostly uncharted territory doing highly self-directed work, and many people cannot do this. Many other people cannot sustain altruism without being surrounded by other altruists, but this can possibly be purchased elsewhere by living on the West or East Coast and hanging around with others who are earning-to-give or working directly.

These are the top considerations when someone asks me whether they should work directly or earn to support others working directly - the low prior, whether the exact fit of talent is great enough to overcome that prior, and whether the person can sustain motivation / self-direct.

Comment author: MichaelVassar 29 May 2013 01:18:02PM 16 points [-]

My main comment on this is that if self-direction is as important as it appears to be, it would seem to me that 'become self-directed' really should be everyone's first priority if they can think of any way to do that. My second comment is that it seems to me that if one is self-directed and seeks appropriate mentorship, the expected value of pursuing a conventional career is very low compared to that of pursuing an entrepreneurial career. Conversely, mentorship or advice that doesn't account for the critical factor of how self-directed someone is, as well as a few other critical factors such as the disposition to explore options, respond to empirical feedback from the market, etc., is likely to be worse than useless.

Comment author: [deleted] 02 June 2013 09:29:36PM 1 point [-]

My second comment is that it seems to me that if one is self-directed and seeks appropriate mentorship, the expected value of pursuing a conventional career is very low compared to that of pursuing an entrepreneurial career.

Can you expand on this? How does one seek appropriate mentorship?

Comment author: JonahSinick 28 May 2013 08:28:53PM 1 point [-]

I'll also highlight another point implicit in my post: even if one assumes that there's not enough funding in the nonprofit world for the projects of highest value, there may be such funding available in other contexts (for-profit, academic, and government). This makes the argument for earning to give weaker.

I recognize that I haven't addressed the specific subject of Friendly AI research, and will do so in future posts.

Comment author: Eliezer_Yudkowsky 28 May 2013 08:32:29PM 2 points [-]

I understand if your priorities aren't our priorities. My concrete example reflex was firing, that's all.

Comment author: JonahSinick 28 May 2013 08:36:01PM 4 points [-]

I think that there's substantial overlap between my values and MIRI staff's values, and that the difference regarding the relative value of "earning to give" is epistemic rather than normative. But obviously there's a great deal more that needs to be said about the epistemic side, with reference to the concrete example of Friendly AI.

Comment author: Eliezer_Yudkowsky 28 May 2013 08:43:20PM 9 points [-]

I can imagine someone thinking that FHI was a better use of money than MIRI, or CFAR, or CSER, or the Foresight Institute, or brain-scanning neuroscience, or rapid-response vaccines, or any number of startups, but considering AMF as being in the running at all seems to require either a value difference or really really different epistemics about what affects the fate of future galaxies.

Comment author: Benja 28 May 2013 10:43:43PM 5 points [-]

Realistic amounts of difference in epistemics + the "humans best stick to the mainline probability" heuristic seem enough (where by "realistic" I mean "of the degree actually found in the world"). I.e., I honestly believe that there are many people out there who would care the hell about the fate of future galaxies if they alieved that they had any non-vanishing chance of significantly influencing that fate (and of choosing the intervention that influences it in the desired direction).

Comment author: Eliezer_Yudkowsky 28 May 2013 11:10:16PM 1 point [-]

If you're one of 10^11 sentients to be born on Ancient Earth with a golden opportunity to influence a roughly 10^80-sized future, what exactly is a 'vanishing chance'... eh, let's all save it until later.

Comment author: Benja 28 May 2013 11:56:57PM *  10 points [-]

I meant that the alieved probability is small in absolute terms, not that it is small compared to the payoff. That's why I mentioned the "stick to the mainline probability" heuristic. I really do believe that there are many people who, if they alieved that they (or a group effort they could join) could change the probability of a 10^80-sized future by 10%, would really care; but who do not alieve that the probability is large enough to even register, as a probability; and whose brains will not attempt to multiply a not-even-registering probability with a humongous payoff. (By "alieving a probability" I simply mean processing the scenario the way one's brain processes things it assigns that amount of credence, not a conscious statement about percentages.)

This is meant as a statement about people's actual reasoning processes, not about what would be reasonable (though I did think that you didn't feel that multiplying a very small success probability with a very large payoff was a good reason to donate to MIRI; in any case seems to me that the more important unreasonableness is requesting mountains of evidence before alieving a non-vanishing probability for weird-sounding things).

[ETA: I find it hard to put a number on the not-even-registering probability the sort of person I have in mind might actually alieve, but I think a fair comparison is, say, the "LHC will create black holes" thing -- I think people will tend to process both in a similar way, and this does not mean that they would shrug it off if somebody counterfactually actually did drop a mountain of evidence about either possibility on their head.]

Comment author: Eliezer_Yudkowsky 29 May 2013 10:48:00PM 5 points [-]

though I did think that you didn't feel that multiplying a very small success probability with a very large payoff was a good reason to donate to MIRI

Because on a planet like this one, there ought to be some medium-probable way for you and a cohort of like-minded people to do something about x-risk, and if a particular path seems low probability, you should look for one that's at least medium-probability instead.

Comment author: Benja 29 May 2013 10:57:04PM 2 points [-]

Ok, fair enough. (I had misunderstood you on that particular point, sorry.)

Comment author: Mitchell_Porter 31 May 2013 01:45:32AM 4 points [-]

If there was ever a reliable indicator that you're wrong about something, it is the belief that you are special to the order of 1 in 10^70.

Comment author: Eliezer_Yudkowsky 31 May 2013 03:36:01AM 6 points [-]

So do you believe in the Simulation Hypothesis or the Doomsday Argument, then? All attempts to cash out that refusal-to-believe end in one or the other, inevitably.

Comment author: Mitchell_Porter 31 May 2013 02:28:59PM 6 points [-]

From where I stand, it's more like arcane meta-arguments about probability are motivating a refusal-to-doubt the assumptions of a prized scenario.

Yes, I am a priori skeptical of anything which says I am that special. I know there are weird counterarguments (SIA) and I never got to the bottom of that debate. But meta issues aside, why should the "10^80 scenario" be the rational default estimate of Earth's significance in the universe?

The 10^80 scenario assumes that it's physically possible to conquer the universe and that nothing would try to stop such a conquest - both enormous assumptions, astronomically naive and optimistic about the cosmic prospects that await an Earth which doesn't destroy itself.

Comment author: komponisto 31 May 2013 04:33:23AM 2 points [-]

Doomsday for me, I think. Especially when you consider that it doesn't mean doomsday is literally imminent, just "imminent" relative to the kind of timescale that would be expected to create populations on the order of 10^80.

In other words, it fits with the default human assumption that civilization will basically continue as it is for another few centuries or millennia before being wiped out by some great catastrophe.

Comment author: shminux 31 May 2013 04:30:14AM 1 point [-]

Do you mind elaborating on this inevitability? It seems like there ought to be other assumptions involved. For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one. Or that they will artificially limit the number of individuals. Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts. All of these result in the rather small total number of individuals existing at any point in time.

Comment author: shminux 29 May 2013 08:26:07AM *  0 points [-]

I wonder if this argument can be made precise enough to have its premises and all the intermediate assumptions examined. I remain skeptical of any forecast that far into the future. You presumably mean your confidence in the UFAI x-risk within the next 20-100 years as the minimum hurdle to overcome, with the eternal FAI paradise to follow.

Comment author: JonahSinick 28 May 2013 08:52:04PM 1 point [-]

My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example, rather than to compare with efforts to improve the far future of humanity.

I think that working in global health in a reflective and goal directed way is probably better for improving global health than "earning to give" to AMF. Similarly, I think that working directly on things that bear on the long term future of humanity is probably a better way of improving the far future of humanity than "earning to give" to efforts along these lines.

I'll discuss particular opportunities to impact the far future of humanity later on.

Comment author: Eliezer_Yudkowsky 28 May 2013 10:25:36PM 10 points [-]

My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example

That depends on what you want to know, doesn't it? As far as I know the impact of AMF on x-risk, astronomical waste, and total utilons integrated over the future of the galaxies, is very poorly researched and not at all concrete. Perhaps some other fact about AMF is concrete and robustly researched, but is it the fact I need for my decision-making?

(Yes, let's talk about this later on. I'm sorry to be bothersome but talking about AMF in the same breath as x-risk just seems really odd. The key issues are going to be very different when you're trying to do something so near-term, established, without scary ambiguity, etc. as AMF.)

Comment author: JonahSinick 29 May 2013 12:27:40AM *  8 points [-]

I'm somewhat confused by the direction that this discussion has taken. I might be missing something, but I believe that the points related to AMF that I've made are:

  1. GiveWell's explicit cost-effectiveness estimate for AMF is much higher than the cost per DALY saved implied by the figure that MacAskill cited.

  2. GiveWell's explicit estimates for the cost-effectiveness of the best giving opportunities in the field of direct global health interventions have steadily gotten lower, and by conservation of expected evidence, one can expect this trend to continue.

  3. The degree of regression to the mean observed in practice suggests that there's less variance amongst the cost-effectiveness of giving opportunities than may initially appear to be the case.

  4. By choosing an altruistic career path, one can cut down on the number of small probability failure modes associated with what you do.

I don't remember mentioning AMF and x-risk reduction together at all. I recognize that it's in principle possible that the "earning to give" route is better for x-risk reduction than it is for improving global health, but I believe the analogy between the two domains is sufficiently strong that my remarks on AMF have relevance (on a meta-level, not on an object level).

Comment author: Eliezer_Yudkowsky 29 May 2013 01:36:04AM 6 points [-]

Yeah, I also have the feeling that I'm questioning you improperly in some fashion. I'm mostly driven by a sense that AMF is very disanalogous to the choices that face somebody trying to optimize x-risk charity (or rather total utilons over all future time, but x-risk seems to be the word we use for that nowadays). It seems though that we're trying to have a discussion in an ad-hoc fashion that should be tabled and delayed for explicit discussion in a future post, as you say.

Comment author: loup-vaillant 29 May 2013 12:48:24PM *  6 points [-]

If I may list some differences I perceive between AMF and MIRI:

  • AMF's impact is quite certain. MIRI's impact feels more like a long shot —or even a pipe dream.
  • AMF's impact is sizeable. MIRI's potential impact is astronomic.
  • AMF's impact is immediate. MIRI's impact is long term only.
  • AMF has photos of children. MIRI has science fiction.
  • In mainstream circles, donating to AMF gives you pats on the back, while donating to MIRI gives you funny looks.

Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multiply. Which is probably why I'm currently giving a little money to Greenpeace, despite being increasingly certain that it's far, far from the best choice.

Comment author: ESRogs 30 May 2013 02:15:08AM 0 points [-]

by conservation of expected evidence, one can expect this trend to continue

Not really related to the current discussion, but I want to make sure I understand the above statement. Is this assuming that the trend has not already been taken into account in forming the estimates?

Comment author: JonahSinick 30 May 2013 04:33:16AM 1 point [-]

Yes — the cost-effectiveness estimate has been adjusted every time a new issue has arisen, but on a case by case basis, without an attempt to extrapolate based on the historical trend.

Comment author: MichaelVassar 29 May 2013 01:19:01PM 0 points [-]

I tend to think that if one can make a for-profit entity, that's the best sort of vehicle to pursue most tasks, though occasionally, churches or governments have some value too.

Comment author: JonahSinick 28 May 2013 07:44:42PM 1 point [-]

Every full-time employee at a nonprofit requires at least 10 unusually generous donors or 1 exceptionally generous donor to pay their salary.

If you define "generous" by "amount of capital" then this is tautologically true. But by this standard, extraordinarily wealthy people are capable of being exceptionally exceptionally exceptionally generous. I'd recur to my remark about the Giving Pledge. I believe that the projects of highest humanitarian value will generally get funded.

I should pause here to remark [...] but this can possibly be purchased elsewhere via living on the West or East Coast and hanging around with others who are earning-to-give or working directly.

In principle this could fall under the "unusual values" consideration that I raise above. But I don't think that the sociological phenomenon you seem to be describing prevails in practice. I think that there are a lot of funders who are not risk-averse, and indeed, many who are actively attracted to high-risk projects.

Comment author: Eliezer_Yudkowsky 28 May 2013 08:37:22PM 2 points [-]

Well, if James Simons wanted to retire from Renaissance and work on FAI full-time, it would not be entirely obvious to me that this was a bad move, but only if Simons had enough in the bank to also pay as much other top-flight math talent as could reasonably be used, and was already so paying, such that there was no marginal return to his further earning power relative to existing funds.

This situation has not yet arisen. Unfortunately.

Comment author: JonahSinick 28 May 2013 09:12:38PM *  3 points [-]

I think that James Simons is an example of someone with an unusually strong comparative advantage at making money. But this wouldn't necessarily have been clear a priori: if you put yourself in Simons' shoes in 1980 the expected earnings of going into finance would be much lower than his actual earnings turned out to be. So it's not clear that he would have done better to "earn to give" than doing something of direct humanitarian value (though maybe it was clear from the outset that his comparative advantage was in finance.)

Comment author: JonahSinick 28 May 2013 08:07:10PM *  0 points [-]

Edit: [Moved comment to a different place]

Comment author: John_Maxwell_IV 29 May 2013 07:10:04AM *  1 point [-]

Every full-time employee at a nonprofit requires at least 10 unusually generous donors or 1 exceptionally generous donor to pay their salary.

Isn't $36K/yr the modal MIRI salary? That doesn't feel like it should be too hard on a $100K/yr software developer salary considering that charitable donations are tax-deductible up to 50% of your income (supposedly). If one donated $36K/yr out of their software developer salary, at $64K/yr they'd still be earning much more than a typical nonprofit employee (heck, much more than a typical college graduate), and if they were to pretend they were working at a nonprofit and subsist on $36K/yr themselves, they could probably subsidize 2 people at the modal MIRI salary (after several years' worth of promotions/raises).
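As a rough sketch of this arithmetic (the $100K and $36K figures are from the comment above; the flat tax rate is a simplifying assumption, since real brackets and deduction rules differ):

```python
# Sketch of the earning-to-give arithmetic above. A flat assumed tax rate
# stands in for real brackets; the donation is assumed fully deductible.

def net_after_donation(gross, donation, tax_rate=0.25):
    """Take-home pay after donating `donation` pre-tax and paying a flat
    assumed tax on the remaining taxable income."""
    taxable = gross - donation
    return taxable * (1 - tax_rate)

gross = 100_000    # software developer salary, from the comment
donation = 36_000  # one modal MIRI salary, from the comment

print(net_after_donation(gross, donation))  # 48000.0
```

Even under these rough assumptions the donor's take-home stays well above the $36K nonprofit salary, which is the comment's point.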

Comment author: shminux 29 May 2013 07:44:30AM 4 points [-]

$36k/yr salary works out to be about $50k-$70k gross expense (including benefits, insurance, taxes etc) for a regular employer, not sure how much it is for a non-profit like MIRI.

Comment author: Lumifer 28 August 2013 01:02:31AM 2 points [-]

The cost (to an organization) of an employee is more than just his salary, often considerably more. There is health insurance and other benefits, payroll taxes, infrastructure support (e.g. a computer, a desk to put it on, a room to put the desk in), etc.

Comment author: wubbles 30 May 2013 12:19:19AM 2 points [-]

10% is the charitable giving limit. There is another thing to be asked about, and that is the impact of the job. If I were to be a tax lawyer, I would be directly harming the ability of the US government to spend on social welfare programs. If I worked on Wall Street anywhere but Vanguard I would be bilking people out of their life savings, and at Vanguard I wouldn't be making $100 K a year. Someone working as a tobacco farmer to raise money for cancer research has some misplaced priorities.

Comment author: ESRogs 30 May 2013 01:18:54AM *  5 points [-]

Where is that 10% number coming from? Looks to me like the limit is at least 20% in the US, and up to 50% for some organizations.

(BTW, can someone from MIRI or anyone else tell us if they're a 50% organization?)

EDIT: and by the way, that's just the limit on what's tax-deductible. There's no legal limit on how much you can actually give.

Comment author: malo 04 June 2013 03:06:03AM 6 points [-]

MIRI is a 50% organization.

See IRS Exempt Organizations Select Check and click the “Deductibility Status”

Comment author: lukeprog 27 August 2013 10:16:08PM 1 point [-]

Malo knows this, but I'll say it publicly:

In general, we suspect there are few people for whom it's healthy to actually be giving away 50% of their income.

Comment author: somervta 28 August 2013 05:07:53AM 0 points [-]

I understand why you said this, but most people interested in this are interested in the transition from 10% to >10% (say, 20), not in 10% to 50%. I presume you would estimate a higher number for whom this is healthy?

Comment author: lukeprog 28 August 2013 07:28:21AM 1 point [-]

Yes.

Comment author: ESRogs 05 June 2013 04:11:25AM 1 point [-]

Awesome, thanks!

Comment author: John_Maxwell_IV 30 May 2013 08:36:18AM 2 points [-]

I guess we also have to worry about state and maybe even city-specific tax laws too, huh?

Comment author: Lumifer 28 August 2013 03:51:55PM 3 points [-]

If I were to be a tax lawyer, I would be directly harming the ability of the US government to spend on social welfare programs.

You could always go work for the IRS. It employs a lot of tax lawyers.

But there's a bigger issue: you think that the work of a (privately employed) tax lawyer intrinsically harms the ability of the US government to spend? That belief has LOTS of issues. I'll start with two: One, why do you think the capability of the US government to spend is an unalloyed good thing? And two, do you happen to know the volume (say, in feet of shelf space) of the current tax laws, regulations, and rulings? I'd recommend you find out and then think about whether any moderately complicated business can comply with them without the help of a tax lawyer.

If I worked on Wall Street anywhere but Vanguard I would be bilking people out of their life savings

Sigh. First, Vanguard is not part of Wall Street. Second... you really should not believe everything the popular media keeps feeding you.

Comment author: private_messaging 28 August 2013 05:08:26PM *  1 point [-]

Tax lawyers cannot decrease taxes taken by the US government in the long run, because the US government gets to make the law, adjusting for the existence of tax lawyering. This is why I have absolutely no qualms about employing a tax lawyer in the US.

Comment author: wubbles 29 August 2013 01:42:52AM 0 points [-]

By "Wall Street" I'm including the Buy Side as well as the Sell Side. The big buyside firms like Fidelity and Charles Schwab sell products that most people shouldn't buy. Insurance probably has a better case to buy some actively managed products, or some exotic derivatives, but I don't know why it can't do it itself.

To the extent that finance reallocates risk it can provide a positive utility benefit. However, the very profitable lines of business have questionable utility. Promoting active trading, picking hot funds, etc., all eat into the returns clients can expect. Justify the existence of Charles Schwab's S&P 500 index fund, with an expense ratio twice that of Vanguard's. The most profitable divisions of investment banks tend to be the ones with the least competition, and hence the most questionable social benefit.

I'm aware Dodge and Cox is in SF, and Vanguard in Valley Forge, Blackrock in Princeton, etc. However, they are all on "the Street".

The IRS doesn't pay well: for government pay one might as well work for NASA and accomplish something fun.

Comment author: Eugine_Nier 28 August 2013 04:44:57AM 0 points [-]

If I were to be a tax lawyer, I would be directly harming the ability of the US government to spend on social welfare programs.

Government social welfare spending is notoriously inefficient. So if your client is at all generous with his money you're coming out ahead. Heck even if he doesn't give to charity but does use the money to invest in productive enterprises, you're probably coming out ahead. And that's before taking into account how you spend your money.

Comment author: CarlShulman 29 August 2013 02:50:00AM *  1 point [-]

10% is the charitable giving limit.

Not in the U.S. (note these are in pre-tax earnings, so they translate into less in foregone consumption than they do in donations made).

There are limits to how much you can deduct, but they're very high.

For most people, the limits on charitable contributions don't apply. Only if you contribute more than 20% of your adjusted gross income to charity is it necessary to be concerned about donation limits. If the contribution is made to a public charity, the deduction is limited to 50% of your contribution base. For example, if you have an adjusted gross income of $100,000, your deduction limit for that year is $50,000.
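The quoted cap works out mechanically like this (a minimal sketch; carryover of excess contributions to later years, which US rules generally allow, is not modeled):

```python
# The 50%-of-AGI deduction cap from the quoted passage. Excess
# contributions can generally be carried forward; not modeled here.

def deductible_this_year(donation, agi, cap_fraction=0.5):
    """Portion of a donation to a 50%-limit public charity that is
    deductible against this year's adjusted gross income."""
    return min(donation, int(agi * cap_fraction))

agi = 100_000
print(deductible_this_year(50_000, agi))  # 50000 (exactly at the cap)
print(deductible_this_year(70_000, agi))  # 50000 (capped)
print(deductible_this_year(20_000, agi))  # 20000 (under the cap)
```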

Regarding this:

If I worked on Wall Street anywhere but Vanguard I would be bilking people out of their life savings, and at Vanguard I wouldn't be making $100 K a year. Someone working as a tobacco farmer to raise money for cancer research has some misplaced priorities.

See this essay:

Goldman has 32,000 employees. An upper bound for the harm caused by the marginal employee is thus the total harm caused divided by 32,000. For the harm to outweigh the good, Goldman would therefore have to be killing at least 3.2 million young people each year, or doing something else that is similarly harmful. That would mean that Goldman Sachs would need to be responsible for around 5% of all deaths in the world. Bear in mind that Goldman Sachs only makes up 22% of American investment banking, and 3% of the American financial industry - if the rest of finance is similarly bad, then it would imply that finance is doing something as bad as causing all the deaths in the world.

Let’s consider the American financial industry in general. Upcoming Giving What We Can research estimates that it would take $200 billion a year to move everyone in the world above the $1.25 poverty line. That figure will only be $74 billion in 2030. The employees of the financial sector could do this if they transferred (e.g. via GiveDirectly) 30-75% of their salaries to those in extreme global poverty (depending on what date you want to achieve the goal by). In other words, if everyone in finance were Earning to Give, it would be possible to end extreme global poverty within the next twenty years. Harm would only dominate if the financial sector is doing something roughly as bad as single-handedly causing all global poverty.
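The quoted upper-bound argument is easy to reproduce; the 32,000-employee figure is from the quote, while the 100-lives-per-employee-per-year rate is the assumption implicit in its 3.2 million figure:

```python
# Reproducing the upper-bound argument from the quoted essay.
goldman_employees = 32_000
lives_saved_per_employee = 100  # assumed annual impact of one earning-to-give
                                # employee, implicit in the quoted 3.2M figure

# Deaths per year Goldman as a whole would need to cause for the marginal
# employee's harm to outweigh that much good:
required_harm = goldman_employees * lives_saved_per_employee
print(required_harm)  # 3200000
```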

Comment author: ESRogs 30 May 2013 01:31:55AM 1 point [-]

If I worked on Wall Street ... I would be bilking people out of their life savings

Do you actually think that a finance professional who donated a significant portion of their income to effective charity would be doing more harm than good? Even given that you can save the life of a child in the developing world for on the order of $2000?

Comment author: NancyLebovitz 03 June 2013 12:24:24PM 2 points [-]

I don't think the problem is finance professionals in general-- it's finance professionals in particularly corrupt parts of the industry.

Figuring out in advance that a job is doing particularly corrupt work seems to be something that people are very bad at-- I don't know whether it's mostly that it would be hard even for a neutral observer, or that people don't want to deal with the consequences to their own lives if they find that their job is destructive.

Comment author: ESRogs 05 June 2013 04:22:11AM -1 points [-]

Hmm, I was thinking the assumption (which I don't necessarily entirely agree with) was that finance professionals were simply earning money without providing any benefit to society, and so a net negative. It sounds like your comment assumes that some of them are actually actively doing harm (though perhaps unintentionally), beyond just taking their own paycheck's worth out of the productive economy. Is that your understanding?

Comment author: NancyLebovitz 05 June 2013 05:38:44AM -1 points [-]

The mortgage crisis was a result of banks being able to sell mortgages to other banks. This meant that the bank making the loan could make money just by the mortgage being initiated-- the first bank no longer had a strong interest in the loan being repaid.

There were some other pieces to the situation that I don't have clear in my mind at the moment, but I think there were incentives for the mortgage to actually not be repaid and the house to be taken by a bank.

One piece that I am clear on is that there were people who decided it wasn't worth it for banks to keep accurate track of who owned which mortgage, or what had been paid, or what had been agreed to.

This is stealing people's houses. It's a degree of damage which it's hard to imagine being covered by charity.

The other side of the story is that not every bank behaved like that-- not all of finance is fraudulent.

Comment author: Lumifer 28 August 2013 03:43:35PM 2 points [-]

The mortgage crisis was a result of banks being able to sell mortgages to other banks.

This is not true. In fact, this is probably not even wrong...

Comment author: EHeller 28 August 2013 04:34:41PM 1 point [-]

It's at least somewhat true, if perhaps not well stated: packaged mortgages and derivatives based on packaged mortgages (mortgages sold as investment vehicles to other banks and funds) played a very large role in the crisis.

Without "selling mortgages to other banks" the popping of the housing bubble wouldn't have turned into the liquidity crunch that started in 2008.

Comment author: Lumifer 28 August 2013 05:34:16PM 2 points [-]

So were mortgages by themselves. Without the widespread availability of mortgages "the popping of the housing bubble wouldn't have turned into the liquidity crunch" too. Or, for that matter, without the fact that the "standard" mortgage is a 30-year fixed -- not, say, a 1/1 ARM.

But anyway, the reason for the contagion from mortgages to liquidity wasn't the ability to sell mortgages. It was the mispricing of mortgage derivatives, specifically the widespread belief that certain tranches of collateralized mortgage obligations (CMOs) were effectively risk-free.

If you want to dig deeper, the real cause was the global asset bubble helped by the too-loose monetary policy in the mid-2000s.

Financial economics are complicated. Snap judgements from popular press rarely have much relationship to reality.

Comment author: ESRogs 05 June 2013 11:38:01PM 1 point [-]

This is stealing people's houses. It's a degree of damage which it's hard to imagine being covered by charity.

Is it really? I suppose this depends on how many houses any individual is responsible for and how much money they capture per house. I guess that second part is the real issue -- any individual who would be giving to charity probably only captures a fraction of what they earn for their firm.

But if you could capture the whole value of a predatory mortgage and convert it into developing-world lives saved, it's not hard to imagine the numbers adding up. (One American family goes bankrupt and 20 Malawian children who would otherwise have died in childhood survive? On the face of it, that looks like a pretty positive net outcome.)

If you can do outsized damage significantly beyond what you can capture as income though, then I suppose it gets a bit tougher to justify.
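A quick back-of-envelope sketch of the tradeoff described above. Both numbers are illustrative assumptions, not figures from this thread: the ~$3,500-per-life cost is a rough GiveWell-style estimate for a top charity like the AMF, and the captured-value figure is purely hypothetical.

```python
# Tradeoff sketch: value an individual captures from one predatory mortgage
# vs. the cost to save one life via a highly effective charity.
# Both inputs are illustrative assumptions.
value_captured = 70_000   # hypothetical profit captured from one bad mortgage
cost_per_life = 3_500     # rough GiveWell-style cost per life saved (assumed)

lives_saved = value_captured / cost_per_life
print(f"Lives saved if fully donated: {lives_saved:.0f}")  # 20
```

The point of the sketch is only that the magnitudes are plausible; as noted above, the real question is what fraction of the harm done an individual actually captures as income.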

Comment author: Desrtopa 06 June 2013 12:51:21AM *  0 points [-]

(One American family goes bankrupt and 20 Malawian children who otherwise would have don't die in childhood? On the face of it that looks like a pretty positive net outcome.)

If we're talking about donations on the scale of the activities that went into the mortgage crisis, I think you'd start to suffer seriously diminishing returns.

Even if you didn't, there are other problems you'd run into, such as the limited ability of the Malawian (or other impoverished African) society and economy to accommodate such a sudden spike in children surviving to adulthood. The lives you save from malaria or other preventable causes are probably mostly going to be relatively lousy or short due to other causes, pending much further investment.

Comment author: ESRogs 06 June 2013 06:41:04PM 1 point [-]

As I understood it, the hypothetical was a single individual deciding to work in finance and donate a large portion of their income to efficient charity. In that case I don't think the diminishing returns are so much of an issue.

Comment author: Osuniev 28 August 2013 10:54:48PM 0 points [-]

THIS. Although I'm unsure about the particulars you mention here, being a European, people and effective altruists need to realize that your job is INSIDE the world you live in. Estimating how much good you're producing is not just about how much money/time you're giving to effective charities, but also about how much your way of life is helping/damaging the world.

Comment author: ygert 29 August 2013 02:59:30AM *  4 points [-]

I'm not convinced. The number of saved lives, QALYs, or whatever you are counting that the US government welfare program gets per dollar is (or seems to me) quite a bit less than what, say, the AMF could get with that money. I don't know how many dollars per QALY US government welfare manages to get, but I wouldn't be surprised if it were on the order of $1000-$10000 per QALY. And that's not even counting the fact that even if the US government had that bit more money from you not being a tax lawyer, that money would not all go to welfare and other such efficient (relative to what else the government spends money on) projects. I would imagine a fair portion would go to, say, bombing Syria, or hiring an extra parking-meter enforcer, or such inefficient stuff, that gets an even worse $/QALY result.

And that is still not to mention the fact that some of that money would go to, say, funding the NSA to spy on your phone calls and read your email, or to the TSA to harass, strip-search, and detain you, which are net negatives.

And even that is not counting that MIRI may end up having a QALY/$ result far, far higher than anything the AMF or whoever could ever hope of possibly getting.

I'm not saying you're flat-out wrong, and it is something to take into consideration when figuring out the altruistic impact of your job, but taking into account these objections, it seems highly unlikely that the marginal dollar from the government goes far enough to weigh very heavily in one's analysis.

Comment author: Sithlord_Bayesian 29 August 2013 09:36:49PM 2 points [-]

On the topic of how much it takes to save a QALY in the US:

"Most, but not all, decision makers in the United States will conclude that interventions that cost less than $50,000 to $60,000 per QALY gained are reasonably efficient. An example is screening for hypertension, which costs $27,519 per life-year gained in 40-year-old men.3, 8 For interventions that cost $60,000 to approximately $175,000 per QALY, certain decision makers may find the interventions sufficiently efficient; most others will not agree."

-from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1497852/

The first paragraph of this gives more on the cost of QALYs in the US. So, kidney dialysis is an intervention that is paid for by the government in the US, and it comes in at more than $100,000 per QALY saved.

Since marginal funding generally goes to pay for interventions which are no more effective than those already being paid for, I wouldn't expect the cost of a marginal QALY to be below (say) $50,000.
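For a sense of scale, here is a sketch of how far a fixed budget goes under each cost-effectiveness figure. The $50,000-per-QALY number is the marginal US figure from the quote above; the ~$100-per-QALY figure for a top global-health charity is a rough assumption of mine, not a claim made in this thread.

```python
# QALYs purchased per $100,000 at two assumed cost-effectiveness levels.
budget = 100_000
us_marginal_cost = 50_000   # $/QALY, marginal US health spending (quoted above)
charity_cost = 100          # $/QALY, rough assumed figure for a top charity

qalys_us = budget / us_marginal_cost    # 2 QALYs
qalys_charity = budget / charity_cost   # 1,000 QALYs
print(qalys_charity / qalys_us)         # 500x ratio under these assumptions
```

Under these (uncertain) inputs, a donated dollar buys a few hundred times more QALYs than a marginal tax dollar, which is the shape of ygert's argument above.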

Comment author: Osuniev 29 August 2013 05:53:49PM 0 points [-]

I'm not sure if you were answering my comment or wubbles's. What I was saying was that you need to take into account the negative impact your job and way of life have on the world.

I agree that the US government probably is terrible at using tax money to better the world.

Comment author: katydee 30 May 2013 01:31:39AM 0 points [-]

One potentially relevant note for anyone considering this is that 100 - 36 = 64, not 74.

Comment author: John_Maxwell_IV 30 May 2013 08:33:25AM *  1 point [-]

Thanks, fixed. I appreciate the correction... no need to retract your comment! :)

Comment author: katydee 30 May 2013 09:22:56AM *  3 points [-]

I know the comment was probably fine, but overall it seemed like it could be read as unnecessarily snarky and hence lower the tone-- PMing you the correction would have been a better move.

All in all I think that the standard for discussion here on LessWrong could be increased a lot if people stopped giving "wiseass" replies to things, were more forgiving of minor errors (while still pointing them out), and so on.

Be the change you want to see on LessWrong!

Comment author: somervta 28 August 2013 05:10:34AM 2 points [-]

FWIW, it didn't come off as snarky at all to me - I read it as exactly the sort of polite correction that I like on LW.