The reason we live in good times is that markets give people a selfish incentive to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate toward actions that maximize their own individual utility, settling into local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, to everyone's enormous benefit.

The problem with charity, and especially efficient charity, is that the incentives for people to contribute to it are all messed up: we have nothing analogous to the financial system for charities to channel rewards for efficient production of utility back to the producer. One effect of giving away lots of your money and effort to seriously efficient charity is that you create the counterpart, in public choice terms, of the special-interests problem in politics. You harm a concentrated interest (friends, potential partners, children) in order to reward a diffuse interest (helping each of billions of people by a tiny amount).

The concentrated interest then retaliates, because by standard public choice theory it has an incentive to do so, but the diffuse interest just ignores you. Concretely, your friends think that you're weird and potential partners may, in the interest of their own future children, refrain from involvement with you. People in general may perceive you as being of lower status, both because of your reduced ability to signal status via conspicuous consumption if you give a lot of money away, and because of the weirdness associated with the most efficient charities. 

Anyone involved in futurism, singularitarianism, etc. has probably been on the sharp end of this public choice problem. Presumably, anyone in the West who donated a socially optimal amount of money to charity (i.e. almost everything) would also be on the sharp end (though I know of no case of someone donating 99.5% of their disposable income to any charity, so we have no examples). This is the Altruist's Burden.

 

Evidence

Do people around you really punish you for being an altruist? This claim requires some justification.

First off, I have personal experience in this area. Not me, but someone vitally important to the existential risks movement has been put under pressure by ver partner to participate less in existential risk reduction so that the relationship would benefit. Of course, I cannot give details, and please don't ask for them or try to guess. I personally have suffered, as have many others, low-level punishment from my family and a worsening of those relationships, social pressure from friends, and being perceived as weird. I have also become more weird: spending one's time optimally for social status and personal growth is not at all like spending one's time so as to reduce existential risks. Furthermore, thinking that the world is in grave danger, but that only you and a select group of people understand this, makes you feel like you are in a cult, due to the huge cognitive dissonance it induces.

In terms of peer-reviewed research, it has been shown that status correlates with happiness via relative income. It has also been shown that (in men) romantic priming increases spending on "conspicuous luxuries but not on basic necessities", and that it "did induce more helpfulness in contexts in which they could display heroism or dominance". In women, "mating goals boosted public -- but not private -- helping". This means that neither gender would seem to be using their time optimally by contributing to a cause that is not widely seen as worthy, and that men especially may be letting themselves down by spending a significant fraction of income on charity of any kind, unless it somehow signals heroism (and therefore bravery) and dominance.

The usual reference on purchase of moral satisfaction and scope insensitivity is this article by Eliezer, though there are many articles on it.

The studies on status and romantic priming each constitute a small amount of evidence that the concentrated interest -- the people around you -- does punish you. In theoretical terms, it should be the default hypothesis: either your effort goes to the many or it goes to the few around you. If you give less to the concentrated interest that is the few around you, they will give less to you.
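To make the asymmetry concrete, here is a toy sketch (an invented model, not drawn from the studies above): suppose you split a budget B between the k people close to you and N distant beneficiaries, sending x to the distant ones. Each distant beneficiary gains x/N, which for N in the billions is imperceptible, so no gratitude or reciprocity flows back to you; each nearby person loses on the order of x/k of what they would otherwise have received, which is very perceptible, so they push back. The social feedback you actually experience therefore pushes x toward 0, even when total utility across everyone is maximized at x close to B.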

The result that people purchase moral satisfaction rather than maximizing social welfare further confirms this model: in fact it explains what charity we do have as signalling, and drives a wedge between the kind and extent of charity that is beneficial to you personally, and the kind and extent that maximizes your contribution to social welfare. 

 

Can you do well by doing good? 

Multifoliaterose claimed that you can. In particular, he claimed that by carefully investigating efficient charity, and then donating a large fraction of your wealth, you will do well personally, because you will feel better about yourself. The refutation is that many people have found a more efficient way to purchase moral satisfaction: don't spend your time and energy investigating efficient charity, make only a small donation, and use your natural human ability to neglect the scope of your donation.

Spending time and effort on efficient charity in order to feel good about yourself doesn't make you feel any better than not spending time on it, but it does cost you more money.

The correct reason to spend most of your meager and hard-earned cash on efficient charity is that you already want to do good. But that is not an extra reason.

My disagreement with Multifoliaterose's post is more fundamental than these details, though. "It's not to the average person's individual advantage to maximize average utility" is the fundamental theorem of social science. It's like being brought a perpetual motion machine design: you know it's wrong, though yes, it is important to point out the specific error.

Edit: some people in the comments have said that if you just donate a small amount (say 5% of disposable income) to an efficient but non-futurist charity, you can do very well yourself and still help people. Yes, you can do well whilst doing some good, but the point is that it is a trade-off. Yes, I agree that there are points on this trade-off that are better than either extremum for a given utility function.
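For concreteness, a minimal sketch of that trade-off (an invented utility function, purely illustrative): let disposable income be w and the donation be x, and take

U(x) = u(w - x) + a*v(x),

where u is your personal benefit from consumption and status, v is the social welfare you care about, and a > 0 measures how much you care. With diminishing returns in both u and v, the optimum satisfies u'(w - x*) = a*v'(x*), which is typically interior: x* = 0 only if a is near zero, and x* = w only if a is enormous. Each extremum is optimal only for an extreme utility function.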

101 comments

Invalid logic. What the people around you generally want more of is your attention, to validate their sense of status or acceptance. If you were spending your time on some form of conspicuous consumption, this would be equally disliked as a resource drain.

TL;DR: any time you spend time and resources on something other than the people you're in relationship with, they're not going to like it that much. Altruism has fuck-all to do with it, except as your own signaling that you're a good person and the people you're with are selfish jerks.

Um, your "TL;DR" summary is longer than the rest of your comment. (Not that either actually is too long to read.)

Roko (karma 4, 14y):
This still supports the conclusion that the action that is optimal for you personally doesn't maximize social welfare. Maximizing social welfare is just another thing that will make those around you like you less, and in the case of the most efficient charities it will be compounded by them thinking it is weird and cult-ish. In the context of signalling, that's just not the way it works; what matters is their impression of you. Perhaps you were thinking of someone very close like your wife, to whom you don't need to signal wealth?
pjeby (karma 5, 14y):
Why would I want to signal wealth to anybody? What are you talking about? In some of the internet marketer circles I hang out in, it's almost gauche to not be involved in some sort of charitable endeavor. They are not particularly concerned about efficiency, true, but surely you can find some social circle that agrees with you. Hang out with GiveWell staff, if you must. ;-) IOW, your language both in the post and this comment continue to strike me as victim-thinking. It's not like we're all forced to interact with exactly one social circle.
steven0461 (karma 7, 14y):
Why the bizarre absolute? People don't have perfect freedom choosing whom to interact with, and to the extent that they don't, Roko's thesis holds.
pjeby (karma -1, 14y):
But we have near-perfect freedom choosing whom not to interact with, and to choose our environment such that we either aren't dependent upon others' opinions of our status, or such that we are only interacting with those who have favorable perceptions of it. In marketing terminology, this is called, "finding your niche". ;-)
cousin_it (karma 3, 14y):
I emphatically agree. My strategy of choice is signaling that I'm an exciting person (by trying to actually be an exciting person), and I can't imagine why charity would interfere with that.

What kind of activity are you talking about?

If it's saving children and birds, this doesn't make you less attractive to the opposite sex, quite the contrary.

If it's research work, move into a respectable academic setting. I don't think people view Judea Pearl, Marcus Hutter or Daniel Kahneman as dangerously weird, but each of them did more for "our cause" than most of us combined.

If it's advocacy, well, I kinda see why the spouses are complaining. Advocacy sucks, find something better to do with your life.

EStokes (karma 5, 14y):
Not this. A respectable academic setting doesn't seem optimal for getting work done. Not necessarily this, though; why does advocacy suck, in your opinion?
Roko (karma 3, 14y):
For example, they might make you publish your work on AGI. This could be very bad.
Alexandros (karma 0, 14y):
You could always publish impressive but unusable papers if you really wanted to. Alternatively, if you have good AGI insights, just use them to help you find small improvements in current AGI research to keep people off your back. More overhead, but still, you're getting paid to do whatever you like with part of your time... not bad.
Will_Newsome (karma 0, 14y):
Upvoted for conciseness, but "A respectable academic setting doesn't seem optimal for getting work done"? Why do you say so? That is, of course it's not optimal, but what comparably expensive environment do you think would be better and why?
EStokes (karma 4, 14y):
Huh. I was thinking of FAI as a typical contrarian cause, and that a respectable academic setting might be too strict for Eliezer to, say, work on the book or study math for a year. I wasn't thinking of other causes, nor do I know much about respectable academic settings. Unqualified guess for other causes. (stealth edit)
JoshuaZ (karma 4, 14y):
The primary point of tenure is that it frees people up to study more or less whatever they please. Now, that only applies to academics who already have major successes behind them, but it isn't at all hard for an academic to spend a year studying something relevant to what they want to do. For that matter, one could just as easily say take a year long Masters in math, or audit relevant classes at a local college. You are overestimating the level of restriction that academic settings create.
EStokes (karma 2, 14y):
Thanks.
Roko (karma 1, 14y):
The point is that the extent and nature of charity that is best for you individually is not the same as that which maximizes social welfare. The optimal extent of charity for you personally might be 0. It might be optimal for you personally to go work as an actuary and retire at 40, or to pursue your personal interest in elliptic curves research. Whatever.
cousin_it (karma 0, 14y):
I can see how taking charities seriously may drain you of resources. But I don't see how it applies to existential risk reduction activities. Have you invented some method of spending all your money to get FAI faster, or something? Yes, that was a dig at SIAI and similar institutions. I honestly have no idea why we need them. If academia doesn't work for him, Eliezer could have pursued his ideas and published them online while working a day job, as lots of scientists did. He'd make just the same impact.

I would not have been able to write and pursue a day job at the same time. You seem to have incredibly naive ideas about the amount of time and energy needed to accomplish worthwhile things. There are historical exceptions to this rule, but they are (a) exceptions, and (b) cases where we don't know how much faster they could have worked if they'd been full-time.

cousin_it (karma 0, 14y):
A day job doesn't have to exhaust you. For example, I have a "day job" as a programmer where I show up at the office once a week, so I have more free time than I know what to do with. I don't believe you are less capable of finding such a job than me, and I don't believe that none of your major accomplishments were made while multitasking.
Vladimir_Nesov (karma 7, 14y):
It's not trivial to find one that doesn't exhaust you and that takes up only a fraction of your time. You need luck or ingenuity. It makes things simpler if you can just get that problem out of the way -- after all, it's a simple matter, something we know how to do. Trivial (and not so trivial) inconveniences that have known resolutions should be removed; it's that simple.
Eliezer Yudkowsky (karma -2, 14y):
You're silly. I suppose if you started doing things in your free time that are as interesting as what I do in my professional full-time workdays I would pay attention to you again.
cousin_it (karma 2, 14y):
You did a good thing: my last two top-level posts were partly motivated by this comment of yours. And for the record, at the same time as I was writing them, at my day job we launched a website with daily maps of forest fires in Russia that got us 40k visitors a day for a while, got featured on major news sites and on TV, and got used by actual emergency teams. It's been a crazy month. Thankfully, right now Moscow is no longer covered in smoke and I can relax a little. Coincidentally, in that time I had several discussions with different people about the same topic. For some reason all of them felt that you have to be "serious" about whatever you do, do it "properly", etc. I just don't believe it. What matters is the results. There's no law of nature saying you can't get good results while viewing yourself as an amateur light-headed butterfly. In fact, I think it helps!
Vladimir_Nesov (karma 9, 14y):
You have to work on systematically developing mastery though. Difficult problems (especially the ones without clear problem statements) require thousands of hours of background-building and familiarizing yourself with the problem to make steps in the right directions, even where these steps appear obvious and easy in retrospect, and where specific subproblems can be resolved easily without having that background. You need to be able to ask the right questions, not only to answer them. It doesn't seem natural to describe such work as an act of "amateur light-headed butterfly". Butterflies don't work in coal mines.
cousin_it (karma 1, 14y):
Sorry, can't parse. Are you making any substantive argument? What's the difference between your worktime now and the free time you'd have if you worked an easy day job, or supported yourself with contract programming, or something? Is it only that there's more of it, or is there a qualitative difference?
Eliezer Yudkowsky (karma 8, 14y):
Time, mental energy, focus. I cannot work two jobs and do justice to either of them. I am feeling viscerally insulted by your assertion that anything I do can be done in my spare time. Let's try that with nuclear engineers and physicists and lawyers and electricians, shall we? Oh, I'm sorry, was that work actually important enough to deserve a real effort or something?

Sorry, I didn't mean to insult you. Also I didn't downvote your comment, someone else did.

What worries me is the incongruity of it all. What if Einstein, instead of working as a patent clerk and doing physics at the same time, chose to set up a Relativity Foundation to provide himself with money? What if this foundation went on for ten years without actually publishing novel rigorous results, only doing advocacy for the forthcoming theory that will revolutionize the physics world? This is just, uh...

A day job is actually the second recourse that comes to mind. The first recourse is working in academia. There's plenty of people there doing research in logic, probability, computation theory, game theory, decision theory or any other topic you consider important. Robin Hanson is in academia. Nick Bostrom is in academia. Why build SIAI?

Just as an aside, note that Nick Bostrom is in academia in the Future of Humanity Institute at Oxford that he personally founded (as Eliezer founded SIAI) and that has been mostly funded by donations (like the SIAI), mainly those of James Martin. That funding stream allows the FHI to focus on the important topics that they do focus on, rather than devoting all their energy to slanting work in favor of the latest grant fad. FHI's ability to expand with new hires, and even to sustain operations, depends on private donations, although grants have also played important roles. Robin spent many years getting tenure, mostly focused on relatively standard topics.

One still needs financial resources to get things done in academia (and devoting one's peak years to tenure-optimized research in order to exploit post-tenure freedom has a sizable implicit cost, not to mention the opportunity costs of academic teaching loads). The main advantages, which are indeed very substantial, are increased status and access to funding from grant agencies.

cousin_it (karma 0, 14y):
Thank you for the balanced answer. Are people in academia really unable to spend their "peak years" researching stuff like probability, machine learning or decision theory? I find this hard to believe.

Of course people spend their peak years working in those fields. If Eliezer took his decision theory stuff to academia he could pursue that in philosophy. Nick Bostrom's anthropic reasoning work is well-accepted in philosophy. But the overlap is limited. Robin Hanson's economics of machine intelligence papers are not taken seriously (as career-advancing work) by economists. Nick Bostrom's stuff on superintelligence and the future of human evolution is not career-optimal by a large margin on a standard philosophy track.

There's a growing (but still pretty marginal, in scale and status) "machine ethics" field, but analysis related to existential risk or superintelligence is much less career-optimal there than issues related to Predator drones and similar.

Some topics are important from an existential risk perspective and well-rewarded (which tends to result in a lot of talent working on them, with diminishing marginal returns) in academia. Others are important, but less rewarded, and there one needs slack to pursue them (donation funding for the FHI with a mission encompassing the work, tenure, etc).

There are various ways to respond to this. I see a lot of value in trying to seed certain areas, illuminating the problems in a respectable fashion so that smart academics (e.g. David Chalmers) use some of their slack on under-addressed problems, and hopefully eventually make those areas well-rewarded.

Mitchell_Porter (karma 5, 14y):
For an important topic, it makes sense to have a dedicated research center. And in the end, SIAI is supposed to create a Friendly AI for real, not just to design it. As it turns out, SIAI also manages to serve many other purposes, like organizing the summits. As for FAI theory, I think it would have developed more slowly if Eliezer had apprenticed himself to a computer science department somewhere. However, I do think we are at a point where the template of the existing FAI solution envisaged by SIAI could be imitated by mainstream institutions. That solution is, more or less: figure out the utility function implicitly supposed by the human decision process, figure out the utility function produced by reflective idealization of that natural utility function, and create a self-enhancing AI with this second utility function. I think that is an approach to ethical AI which could easily become the consensus idea of what should be done.
RobinZ (karma 2, 14y):
Setting up a Relativity Foundation is a harder job than being a patent clerk.
Vladimir_Nesov (karma 3, 14y):
The difference is the attention spent on contract programming. If this can be eliminated, it should be. And it can.
Wei Dai (karma 7, 14y):
From what I understand, SIAI was meant to eventually support at least 10 full time FAI researchers/implementers. How is Eliezer supposed to "make the same impact" by doing research part time while working a day job?
cousin_it (karma 1, 14y):
I think the hard problem is finding 10 capable and motivated researchers, and any such people would keep working even without SIAI. Eliezer can make impact the same way he always does: by proving to the Internet that the topic is interesting.

Again: why isn't it obvious to you that it would be easier for these people to have a source of funding and a building to work in?

Roko (karma 2, 14y):
No. Just no.
cousin_it (karma 2, 14y):
Why? I gave the example of Wei Dai, who works independently of the SIAI. If you know any people besides Eliezer who do comparable work at the SIAI, who are they?

The problem with your example is that I don't work on FAI, I work on certain topics of philosophical interest to me that happen to be relevant to FAI theory. If I were interested in actually building an FAI, I'd definitely want a secure source of funding for a whole team to work on it full time, and a building to work in. It seems implausible that that's not a big improvement (in likelihood of success) over a bunch of volunteers working part time and just collaborating over the Internet.

More generally, money tends to be useful for getting anything accomplished. You seem to be saying that FAI is an exception, and I really don't understand why... Or are you just saying that SIAI in particular is doing a bad job with the money that it's getting? If that's the case, why not offer some constructive suggestions instead of just making "digs" at it?

cousin_it (karma 2, 14y):
I don't believe FAI is ready to be an engineering project. As Richard Hamming would put it, "we do not have an attack". You can't build a 747 before some hobbyist invents the first flyer. The "throw money and people at it" approach has been tried many times with AGI; how is FAI different? I think right now most progress should come from people like you, satisfying their personal interest. As for the best use of SIAI money, I'd use GiveWell to get rid of it, or just throw some parties and have fun all around, because money isn't the limiting factor in making math breakthroughs happen.
Wei Dai (karma 7, 14y):
I think the problem with that is that most people have multiple interests, or their interests can shift (perhaps subconsciously) based on considerations of money and status. FAI-related fields have to compete with other fields for a small pool of highly capable researchers, and the lack of money and status (which would come with funding) does not help. Me either, but I think that, one, SIAI can use the money to support FAI-related research in the meantime, and two, given that time is not on our side, it seems like a good idea to build up the necessary institutional infrastructure to support FAI as an engineering project, just in case someone makes an unexpected theoretical breakthrough.
Roko (karma 2, 14y):
Marcello, Anna Salamon, Carl Shulman, Nick Tarleton, plus a few up-and-coming people I am not acquainted with.
Nick_Tarleton (karma 2, 14y):
I don't do any work comparable to Eliezer's.
Vladimir_Nesov (karma 5, 14y):
Why don't you? You are brilliant, and you understand the problem statement, you merely need to study the right things to get started.
[anonymous] (karma 0, 14y):
I don't do any original work comparable to Eliezer.
[anonymous] (karma 0, 14y):
I don't do anything comparable to Eliezer.
[anonymous] (karma 0, 14y):
Is their research secret? Any pointers?
Roko (karma 0, 14y):
Marcello's research is secret, but not that of the others.
cousin_it (karma 5, 14y):
Sorry for deleting my comment, I didn't think you'd answer it so quickly. For posterity, it said: "Is their research secret? Any pointers?" Here's the list of SIAI publications. Apart from Eliezer's writings, there's only one moderately interesting item on the list: Peter de Blanc's "convergence of expected utility" (or divergence, rather). That's... good, I guess? My point stands.
Vladimir_Nesov (karma 2, 14y):
Is it secret why it's secret? I can't imagine.
Roko (karma 2, 14y):
Yes. If anyone finds out why Marcello's research is secret, they have to be killed and cryopreserved for interrogation after the singularity.
Vladimir_Nesov (karma 5, 14y):
Now why do you even ask why people should be afraid of something going terribly wrong at SIAI? Keeping it secret in order to avoid signaling the moment where it becomes necessary to keep it secret? Hmm...
Vladimir_Nesov (karma 5, 14y):
Isn't it better to have an option of pursuing your research without having to work a day job? Presumably, this will allow you to focus more on research...

But... create a big organization that generates no useful output, except providing you with some money to live on? Is it really the path of least effort? SIAI has existed for 10 years now and here are its glorious accomplishments broken down by year. Frankly, I'd be less embarrassed if Eliezer were just one person doing research!

Vladimir_Nesov (karma 6, 14y):
Yes, well, in retrospect many things are seen as suboptimal. Remember that SIAI was founded back when Eliezer hadn't yet figured out the importance of Friendliness and thought we needed a big concerted effort to develop an AGI. Later, he was unable to interest sufficiently qualified people in doing the research on FAI (equivalently, to explain the problem so that qualified people would both understand it and take it seriously). This led to blogging on Overcoming Bias and now Less Wrong, which does seem to be a successful, if insanely inefficient, way of explaining the problem. Current SIAI seems to have a chance of mutating into a source of funding for more serious FAI research, but as multifoliaterose points out, right now publicity seems to be a more efficient route to eventually getting things done, since we need to actually find researchers to produce the accomplishments whose absence you protest.
cousin_it (karma 3, 14y):
Since you have advanced the state of the art both here and at decision-theory-workshop, I will take this opportunity to ask you: is your research funded by SIAI? Would it progress faster if it were? Is money the limiting factor?
Vladimir_Nesov (karma 1, 14y):
I'll reply privately via e-mail (SIAI doesn't fund me, and it'd be helpful if a few unlikely things were different).
cousin_it (karma 3, 14y):
For the record, Vladimir did reply.
NancyLebovitz (karma 3, 14y):
The advantages to an organization are mutual support, improving the odds of continuity if something happens to Eliezer, and improving the odds of getting more people who can do high level work. I don't have a feeling for how fast new organizations for original thought and research should be expected to get things done. Anyone have information?
cousin_it (karma 4, 14y):
I don't see who else does high level work at SIAI and who will continue it if Eliezer gets hit by a bus. Wei Dai had the most success building on Eliezer's ideas, but he's not a SIAI employee and SIAI didn't spark his interest in the topic.
Roko (karma -2, 14y):
Sure, easy: just donate 100% of your disposable income to SIAI.
Roko (karma 0, 14y):
But research on AGI is not social utility maximizing. Advocacy about existential risks may be the social utility maximizing thing to do.
[anonymous] (14y):

Two points:

One. Charity, up to a point, is not necessarily a trade-off. Just as adding a hobby can make you more productive at work by forcing you to be efficient with your time, adding a charitable commitment can force you to stop wasting money. There is a reason why the Judaeo-Christian tradition recommends tithing; a tenth of income is a good rule of thumb for an amount that's significant but not enough to make you noticeably poorer.

Two. When people have personal problems as a result of altruism, I suspect it's the nature of the charity (futurist ideas sound useless to a lot of people) or the nature of the commitment (giving more than a tenth of income, for example) or some interpersonal issue that the altruist doesn't understand. I want to emphasize that last possibility. If you know you have Asperger's, you should be extra skeptical about your own ability to explain interpersonal behavior.

steven0461 (karma 9, 14y):
I wish the concept of "tithing" included spending a tenth of one's free time trying to optimize.
Roko (karma 2, 14y):
At small levels of expenditure (<5% of disposable income), charitable spending is so small that of course it won't make enough of a difference for you to notice any negative impact. My strong suspicion is that if existential risk reducers could and wanted to pull off the trick of devoting only 5% of their spare mental energy to existential risks, then there would be no problem, either in my case or in the cases of the people I mentioned. Perhaps there would be a problem with cognitive dissonance, but you could still apply the 5% rule: discount the extent to which you care about humanity as a whole versus near-mode things by a factor of 20.

"Spending time and effort on efficient charity in order to feel good about yourself doesn't make you feel any better than not spending time on it, but it does cost you more money. The correct reason to spend most of your meager and hard-earned cash on efficient charity is that you already want to do good. But that is not an extra reason."

Look, I think Multifoliaterose made one good point that you either missed or for some reason chose not to address:

Increasing the amount you donate to efficient charity by one order of magnitude can radically improv... (read more)

Benquo (karma 1, 14y):
Can you say more about how to realize these benefits? I haven't noticed what I've given to have any real effect on my character or well-being...
Mass_Driver (karma 1, 14y):
Well, your mileage may vary. But here's Multifoliaterose's report on self-esteem before and after. To see why multifoliaterose thinks it might happen to you, read the article, especially reason (C) for why happiness correlates only weakly with disposable income, and the quotes from Singer's book. Hope that helps. Also, at the risk of being preachy or presumptuous, Multifoliaterose doesn't predict that you'll get any significant character gains from throwing a few bucks around here and there -- you would have to give in an amount that begins to reflect your values. Spending 1% of your income on charity, e.g., suggests that you value yourself 100 times more than a stranger, which may not do much for your self-esteem.
PhilGoetz (karma 0, 14y):
But if you know that you're doing charity in order to increase your U(charity), then it's not charity, and it doesn't work.
Blueberry (karma 2, 14y):
I don't see why. You're still donating the money and you're still helping people. And doing it to increase your utility just shows you're the kind of person who feels better for donating money, which is a good thing.

I wouldn't call the problem public choice, since most kinds of charity divert resources away from your immediate social network but only a few attract problems. If you used GiveWell's standards of efficiency and gave to Stop Tuberculosis or VillageReach, I doubt you'd run into problems. It sounds like these problems arise with futurist-type charities, where you're devoting your efforts and resources to causes that people close to you don't understand and find weird and offputting, which is a different source of trouble.

multifoliaterose (karma 6, 14y):
Yes, this is a major reason that I doubt that donating to SIAI is a good idea. I feel that:
1. In order for existential risk charities to do a good job, they need good researchers and donors.
2. In order for existential risk charities to attract good researchers and donors, public interest in and concern for existential risk must grow substantially.
3. In light of point 2, the most important task for an existential risk charity right now is to increase public interest in and concern for existential risk.
4. SIAI seems poorly suited to generating interest in and concern for existential risk, and may very well be lowering the prestige attached to investigating existential risk rather than raising it.
Roko (karma 5, 14y):
This is a separate debate, but I think that you overestimate the ability of the general public, and of society at large, to be sane about existential risks, and AI risks especially. Though it is useful to have someone challenging the orthodoxy here: what evidence do you have that suggests it is possible to get people to take this really seriously?
multifoliaterose (karma 9, 14y):
I don't think it's unreasonable to hope that society can eventually get to a point where being an existential risk researcher has status similar to being a physics researcher. There's nothing intrinsically weird about the idea "there are things that could cause the extinction of the human race and it's a good idea to have some people studying them and thinking about how to avoid them." I think that the reason that general artificial intelligence research has such a bad reputation is that it's associated with a history of false alarms. I think that by adopting a gradualist approach of getting more and more of the intellectual elite to think about existential risk, it should be possible to gradually change attitudes about artificial intelligence research. I worry that SIAI might sound another "false alarm" or have institutional problems which further damage the credibility of existential risk research. My remark is related to the top level post. From your top level post it's clear that at the moment there are very strong negative pressures against people studying existential risk. I wish there weren't such pressures, but they're there. It's plausible to me that these pressures make it much more difficult for you to do existential risk research than it would be if existential risk research were more mainstream. It's also plausible to me that there are people who have something in common with you but who are unable to bear these pressures and so are deterred from working with you. For this reason, I think that the best way to facilitate existential risk research is to (a) raise levels of public interest in making the world a better place -- a very large majority of the people so influenced will not work toward or fund existential risk research, but a small percentage will; (b) get the educated public (the sorts of people who read semi-scholarly books) interested in existential risk; (c) get established scientific experts more interested in existential risk. In orde...
Roko (karma 8, 14y):
I think the problem is that the public is like a reinforcement learner, and won't believe claims that are based on long chains of reasoning. Rather, the public and society at large tend to wait for the thing in question to actually happen, so that they have "proof". Physics is OK because it has repeatedly proved its value by making novel and astounding predictions that were then confirmed, and because those predictions had important practical consequences. Though there are clear exceptions where dreadful public epistemology has impacted physics: overreaction to the dangers of nuclear power being one. I think there's a fundamental point about how public epistemology works that I want to make here: the public operates like a dumb agent that is paranoid about not being tricked, and demands real physical proof of things when the Bayesian probability with respect to a reasonable prior is already 99.9999...%. Widespread denial of evolution is one case; you can't show someone an ape evolving into a human.
soreff (karma 2, 14y):
Good point! Perhaps part of the problem is that the public has been subjected to at least two millennia of warnings of existential risks -- by the clergy... That's long enough, and the false alarms have been frequent enough and intense enough, that perhaps we have even genetically evolved some extra skepticism about them.
ata (karma 2, 14y):
But do we (i.e. the human race in general) have any more skepticism about such claims than we used to? Most people still do believe in religions that include some form of eschatology. It might just be that scientific talk about existential risk seems like a competing meme to religious people (you're not allowed to believe in something that says the world won't end the way your religion says it will), while non-religious people may tend to see discussion of global catastrophe as in the genre of apocalyptic religion. (Then again, global warming doesn't seem to have that problem, so maybe it's just a marketing issue...)
Eneasz (karma 6, 14y):
Couldn't this be corrected by hiring a marketing firm? People with high-functioning Asperger's can see that the link from "hiring a marketing firm" to "getting the public to believe nearly anything" is very strong and very reliable. It takes only a few tens of millions of dollars to convince the public to commit to billions of dollars in near-future losses (e.g. tobacco industry, carbon polluters, election drives). This may not be desirable, but it is a fact, and if a rational agent wants to win then s/he should accept the fact and design with it.
Roko (karma 4, 14y):
Another problem I want to mention: getting "established scientific experts" to take existential risk seriously is impeded by the fact that academia has no mechanism for assessing the value of information. Academics are rewarded based upon how true the info they generate is, not on a combination of how true it is and how important it is. So we have more papers on dung beetle reproduction than on human extinction. Furthermore, academia is utterly paranoid about not causing the utterly dumb public to mistrust it, so it has to adhere to the public's standards about needing real physical proof for outlandish claims, rather than reasoning probabilistically about them using long, complex and somewhat subjective arguments. Lastly, to complicate things even more, academia is chaos. Nobody is in charge. It is inherently conservative and slow to change, even when there is real physical proof that it is mistaken -- most bad theories are buried along with their owners years after they have been shown to have a minuscule Bayesian probability. Now there are a few academics at Oxford University doing x-risk research. But to grow that community to 1000s of researchers is going to be either very expensive and quite slow, or free and glacially slow.
soreff (karma 1, 14y):
I would phrase this differently. Certain types of existential risks (nuclear war, asteroid impacts) seem to be studied in the mainstream. Perhaps the study of AGI-related existential risks is the key area pushed out of the mainstream?
xamdam (karma 2, 14y):
I think your arguments would make sense if there were a general "let's deal with existential risks" program; I see SIAI concentrating specifically on the imminent possibility of uFAI. They feel they already have enough researchers for the specific problem, and they have some fund flow that saves them the effort of tapping the more general public. They would rather use the resources they have to attack the problem itself. You may argue with the specific point of compromise, but it is not illogical. It just so happens that "solving" uFAI risk would most likely solve all other problems by triggering a friendly Singularity, but that does not make SIAI a general existential-risk-fighting unit.
JoshuaZ (karma 0, 14y):
This seems unlikely to me. Even if you completely solve the problem of Friendly AI you might lack the processing power to implement it. Or it might turn out that there are fundamental limits which prevent a Singularity event from taking place. The first problem seems particularly relevant given that, to someone concerned about uFAI, the goal presumably is to solve the Friendliness problem well before we're anywhere near actually having functional general AI. No one wants this to be cut close, and there's no a priori reason to think it would be cut close. (Indeed, if it did seem to be getting cut close, one could arguably use that as evidence that we're in a simulation and that this is a semifictionalized account with a timeline specifically engineered to create suspense and drama.)
PhilGoetz (karma 1, 14y):
Why? How would you do it differently?
Roko (karma 1, 14y):
But surely if you donated an amount that was social-utility optimizing to a charity like StopTB, you would personally be worse off, not least because of negative effects from people close to you?
Unnamed (karma 7, 14y):
That's true. If you gave everything you could, keeping only enough so that you could keep working & making money, that would probably be bad for you (including your social life). I suppose there's a Laffer-type curve for it. But most people don't give enough to be in the range where there are significant negative personal consequences to additional giving, and multifoliaterose's post didn't focus on those extreme levels of giving.
Roko (karma 7, 14y):
It seems that the amount he suggested was neither best for you nor best for social utility: a trade-off. My argument against his post is with the idea that the two incentives line up, whereas I think you and I agree that they trade off against each other.
multifoliaterose (karma 4, 14y):
My position is not that the two incentives line up perfectly. My post was suggesting the possibility that at the margin, most Americans would be happier if they donated noticeably more or donated noticeably better.
Unnamed (karma 2, 14y):
I was also thinking at the margin. There are some margins where what helps the self and what helps social utility conflict, and some where they line up or are basically independent. At least in our demographic (well-educated people in OECD countries), I think that most people are at a point where giving more to effective non-weird charity would at least not be a noticeable decline for the self (and for some people it would be an improvement). There's likely to be more conflict for large increases in giving or for weird charities, but Roko's post seems to treat the conflict between self & social utility as more fundamental than that.
Roko (karma 0, 14y):
Ok, I disagree with you. But point taken: the incentives could fail to line up perfectly, but still line up for small amounts of donation. It would be interesting if this disagreement were testable.
Unknowns (karma 2, 14y):
The disagreement is easily testable, it just requires that enough people test multifoliaterose's suggestions. He says that he himself became happier by donating more. Do you think he isn't telling the truth? Of course, the disagreement will not be tested in practice, because no one or very few will be willing to test his suggestion, seeing that such a test would be quite expensive.
multifoliaterose (karma 3, 14y):
Do you find my suggestion that such a test would be worth it for individual prospective donors to perform (based on expected returns considerations) unconvincing?
Unknowns (karma 2, 14y):
I have no doubt it would be worth it. In fact, I expect you are right. Even giving a beggar $20 instead of $1 increased my happiness significantly. But due to people's selfishness, in general they will not be willing to test it even if the expected return is positive.

Just to underline something: multifoliaterose did give 5%. What's perhaps unusual is that he gave it in one swell foop.

IIRC, Americans give about 2%/year on the average, which implies it isn't all that unusual to give twice that much.

I doubt it's possible to stop seeing the untested effectiveness of most charities once you've seen it.

Roko (karma 0, 14y):
I meant 5% of disposable income, i.e. once you've already paid for a living place, food, tax, car etc. This probably equates to 2% gross.

Have you considered that some of us might have utility functions that do have terms for socially distant people? Thus the charity can give direct utility to us, which seems ignored by the analysis.

Second, endpoints are rarely optimal. E.g. eating only tuna and nothing else could be unhealthy and weird, but that does not imply that eating some tuna is unhealthy or weird. Thus your analysis seems to miss the obvious answer.

Roko (karma 4, 14y):
Read the post: "The correct reason to spend most of your meager and hard-earned cash on efficient charity is that you already want to do good. But that is not an extra reason."

How did this article go from -8 to +8?

It didn't. The related article that was at -8 was deleted.

ShardPhoenix (karma 5, 14y):
Why was the other article deleted? Someone in another thread said something about a banned topic?
A1987dM (karma 5, 12y):
Gotta love the text of the page I get to by following that link.
Blueberry (karma -1, 14y):
It got un-downvoted or upvoted sixteen times.

Is this the right place to engage in thread necromancy? We'll see.

I've been troubled by the radical altruism argument for some years, and never had a very satisfactory reason for rejecting it. But I just thought of an argument against it. In brief, if people believe that their obligation is to give just about everything they have to charity, then they have created a serious disincentive to create more wealth.

It starts with the argument against pure socialism. In that system, each person works as hard as he or she can in order to produce for the good of so... (read more)