I just put in 2700 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.
Not that I don't think your donation is admirable, but I'm curious how you are able to donate your entire bank account without running the risk of being unable to respond appropriately to a black-swan event, compromising your future well-being and your ability to donate to SIAI?
Do you think it's rational in general for people to donate all their savings to the SIAI?
I have a high limit credit card which I pay off every month, no other form of debt, no expenses until my next paycheck, a very secure, well-paying job with good health insurance, significant savings in the form of stocks and bonds, and several family members and friends who would be willing to help me in the event of some catastrophe.
I prepare and structure my life such that I can take action without fear. I attribute most of this to reading the book Your Money Or Your Life while I was in college. My only regret is that I could afford to give more, but lack the cash on hand due to lifestyle expenditures and saving for my own future.
Wait ... I assume you're planning to actually mail the check too?
Yes, I mailed the check, too, just after writing the comment. (And I wrote and mailed it to SIAI. No tricks, it really is a donation.)
I would be surprised if karma scaled linearly with dollars over that range.
And to encourage others to donate, let it be known that I just made a 500 euro (about 655 USD) donation.
I donated $100 yesterday. I hope to donate more by the end of the matching period, but for now that's around my limit (I don't have much money).
I put in $500, which really pinches in Indian rupees (Rs. 23,000+). Hoping for the best next year, with a successful book release and promising research to come.
On the one hand, I absolutely abhor SIAI. On the other hand, I'd love to turn my money into karma...
/joke
$100
I just donated $1,370. The reason why it's not a round number is interesting, and I'll write a Discussion post about it in a minute. EDIT: Here it is.
Also, I find it interesting that (before my donation) the status bar for the challenge was at $8,500, and the donations mentioned here totaled (by my estimation) about $6,700 of that...
I seem to remember reading a comment saying that if I make a small donation now, it makes it more likely I'll make a larger donation later, so I just donated £10.
Ben Franklin effect, as well as consistency bias. Good on you for turning a bug into a feature.
Donated $500 CAD just now.
By the way, SIAI is still more than 31,000 US dollars away from its target.
Darn it; I just made my annual donation a few days ago, but hopefully my employer's matching donation will come in during the challenge period. I will make sure to make my 2011 donation during the matching period (i.e. well before January 20th), in an amount no less than $1000.
Wow, SIAI has succeeded in monetizing Less Wrong by selling karma points. This is either a totally awesome blunder into success or sheer Slytherin genius.
I have donated a small amount of money.
The Singularity is now a little bit closer and safer because of your efforts. Thank you. We will send a receipt for your donations and our newsletter at the end of the year. From everyone at the Singularity Institute – our deepest thanks.
I do hope they mean they will send a receipt and newsletter by e-mail, and not by physical mail.
I feel rather uncomfortable at seeing someone mention that he donated, and getting a response which indirectly suggests that he's being irrational and should have donated more.
The idea is that the optimal method of donation is to donate as much as possible to one charity. Splitting your donations between charities is less effective, but still benefits each. They actually have a whole page about how valuable small donations are, so I doubt they'd hold a grudge against you for making one.
Actions which increase utility but do not maximise it aren't "pointless". If you have two charities to choose from, £100 to spend, and you get a constant 2 utilons/£ for charity A and 1 utilon/£ for charity B, you still get a utilon for each pound you donate to B, even if to get 200 utilons you should donate £100 to A. It's just the wrong word to apply to the action, even assuming that someone who says he's donated a small amount is also saying that he's donated a small proportion of his charitable budget (which it turns out wasn't true in this case).
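To make the arithmetic explicit, here's a throwaway sketch (the utilon rates are the made-up ones from the example above, not real effectiveness figures):

```python
# Toy version of the example above: constant marginal utility, with the
# hypothetical rates of 2 utilons/GBP (charity A) and 1 utilon/GBP (charity B).
RATE_A = 2.0    # utilons per pound donated to A
RATE_B = 1.0    # utilons per pound donated to B
BUDGET = 100.0  # pounds to allocate

def total_utilons(pounds_to_a: float) -> float:
    """Utilons from splitting the fixed budget between A and B."""
    return RATE_A * pounds_to_a + RATE_B * (BUDGET - pounds_to_a)

print(total_utilons(0.0))    # 100.0 -- all to B: suboptimal, but hardly "pointless"
print(total_utilons(50.0))   # 150.0 -- an even split does better
print(total_utilons(100.0))  # 200.0 -- all to A: the maximum
```

Every pound sent to B still buys a utilon; "pointless" would only be accurate if B's rate were zero.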
I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest;
Not the largest, the neediest.
As charities become larger, the marginal value of the next donation goes down; they become less needy. In an efficient market for philanthropy you could donate to random charities and it would work as well as buying random stocks. We do NOT have an efficient market in philanthropy.
New Year's resolution is not to donate to things until I check if there's a matching donation drive starting the next week :( Anyway, donated a little extra because of all the great social pressure from everyone's amazing donations here. Will donate more when I have an income.
Just donated $500.
(At one time I had an excuse for waiting. But plainly I won't get confirmation on a price for cryonics-themed life insurance by the deadline, and should likely have donated sooner).
In a possibly bad decision, I put a $1000 check in the mailbox with the intent of going out and transferring the money to my checking account later today. That puts them at $123,700 using Silas' count.
I have not donated a significant amount before, but will donate $500 IF someone else will (double) match it.
Why did the SIAI remove the Grant Proposals page? http://singinst.org/grants/challenge#grantproposals
EDIT: Donated $500, in response to wmorgan's $1000
Excellent! Donated $500. Whether yours is a counter-bluff or not ;)
This is by far the most I've donated to a charity. I spent yesterday assessing my financial situation, something I've only done in passing because of my fairly comfortable position. It has always felt smart to me to ignore the existence of my excess cash, but I have a fair amount of it and the recent increase of discussion about charity has made me reassess where best to locate it. I will be donating to SENS in the near future, probably more than I have to SIAI. I'm aware of the argument for giving everything to a single charity, but it seems even Eli is conflicted about giving advice about SIAI vs. SENS, given this discussion.
I recently read that investing in the stock market (casually, not as a trader or anything) in the hopes that your wealth will grow such that you can donate even more at a later time is erroneous because the charity could be doing the same thing, with more of it. Is this true, and does anyone know if the SIAI, or SENS does this? It seems to me that both of these organizations have immediate use for pretty much all money they receive and do not invest at all. How much would my money have to make in an investment account to be able to contribute more (adjusting for inflation) in the future?
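To make my question concrete, here's the comparison I have in mind (a rough sketch; every rate below is a number I'm making up, not a figure from SIAI or SENS):

```python
# Rough comparison of "donate now" vs "invest, then donate later".
# All parameters are illustrative assumptions of mine.

def charity_gets_it_now(amount: float, r_charity: float, years: float) -> float:
    """Effective value if the charity can put money to work at r_charity per year
    (research started sooner, awareness compounding, etc.)."""
    return amount * (1 + r_charity) ** years

def charity_gets_it_later(amount: float, r_you: float, inflation: float, years: float) -> float:
    """Inflation-adjusted value of your donation if you invest it yourself first."""
    return amount * ((1 + r_you) / (1 + inflation)) ** years

# Waiting only wins if your real return beats the charity's internal rate of return.
print(charity_gets_it_now(500, r_charity=0.10, years=10))                # ~1296.87
print(charity_gets_it_later(500, r_you=0.07, inflation=0.03, years=10))  # ~731.87
```

On these made-up numbers, your investment account would have to beat inflation by more than the charity's internal return on a marginal dollar for waiting to come out ahead.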
a book on rationality, the first draft of which was just completed
If Eliezer's reading this: Congratulations!
I just sent 15 USD to each of the SIAI, VillageReach, and the Khan Academy.
I am aware of and understand this, but felt more comfortable diversifying right now. I also know it is not much; I'll have to somehow force myself to buy fewer shiny gadgets and donate more instead. Generally I need to be less inclined toward hoarding money and more toward giving.
Someone is promoting this on reddit:
http://www.reddit.com/r/singularity/comments/es7do/tallinnevans_125000_singularity_challenge_less/
every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
Anyone willing to comment on that as a rationalist incentive? Presumably I'm supposed to think "I want more utility for SIAI, so I should donate at a time when my donation is matched so SIAI gets twice the cash" and not "they have money which they can spare and are willing to donate to SIAI, but will not donate it if their demands are not met within their timeframe; that sounds a lot like coercion/blackmail"...
It's a symmetrical situation. Suppose that A prefers having $1 in his personal luxury budget to having $1 in SIAI, but prefers having $2 in SIAI to having a mere $1 in his personal luxury budget. Suppose that B has the same preferences (regarding his own personal luxury budget, vs SIAI).
Then A and B would each prefer not-donating to donating, but they would each prefer donating-if-their-donation-gets-a-match to not-donating. And so a matching campaign lets them both achieve their preferences.
This is a pretty common situation -- for example, lots of people are unwilling to give large amounts now to save lives in the third world, but would totally be willing to give $1k if this would cause all other first worlders to do so, and would thereby prevent all the cheaply preventable deaths. Matching grants are a smaller version of the same.
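Here's a toy version of that symmetry (the utilon values are made up, chosen only so that $1 of luxury beats $1 in SIAI but loses to $2 in SIAI):

```python
# Toy two-donor matching game for the preferences described above.
LUXURY = 1.5   # utilons per dollar kept for personal luxury
CHARITY = 1.0  # utilons per dollar sitting in SIAI (valued the same by A and B)

def utility(my_gift: int, their_gift: int) -> float:
    """One donor's utility; each starts with $1 and both gifts land in SIAI."""
    return (1 - my_gift) * LUXURY + (my_gift + their_gift) * CHARITY

print(utility(0, 0))  # 1.5 -- nobody donates
print(utility(1, 0))  # 1.0 -- donating alone is a loss, so neither moves first
print(utility(1, 1))  # 2.0 -- donating-with-a-match beats not donating (2.0 > 1.5)
```

Unconditional donation is dominated, but a credible "I give if you give" arrangement makes both better off by their own lights, which is exactly the work the matching campaign does.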
According to the page, they (we) made it to the full $125,000 (so $250,000 total with matching)! Does anyone know what percentage this is of all money the SIAI has raised?
Try to be objective and consider whether a donation to the Singularity Institute is really the most efficient charitable "investment". Here's a simple argument that it's most unlikely: what's the probability that posters would stumble on the very most efficient investment, when finding it requires research? Rationalists shouldn't accede this way to the representativeness heuristic, which leads the donor to choose whichever recipient is most readily accessible to consciousness.
Relying on heuristics where their deployment is irrational, however, isn't the main reason the Singularity In...
Your argument applies to any donation of any sort, in fact to any action of any sort. What is the probability that the thing I am currently doing is the best possible thing to do? Why, it's basically zero. Should I therefore not do it?
Referring to the SIAI as a cause "some posters stumbled on" is fairly inaccurate. It is a cause that a number of posters are dedicating their lives to, because in their analysis it is among the most efficient uses of their energy. In order to find a more efficient cause, I not only have to do some research, I have to do more research than the rational people who created SIAI (this isn't entirely true, but it is much closer to the truth than your argument). The accessibility of SIAI in this setting may be strong evidence in its favor (this isn't a coincidence; one reason to come to a place where rational people talk is that it tends to make good ideas more accessible than bad ones).
I am not donating myself. But for me there is some significant epistemic probability that the SIAI is in fact fighting for the most efficient possible cause, and that they are the best-equipped people currently fighting for it. If you have some information or an arg...
I don't know if it is a good idea to donate to SIAI. From my perspective, there is a significant chance that it is a good idea, but also a significant chance that it isn't. I think everyone here recognizes the possibility that money going to the SIAI will accomplish nothing good. I either have a higher estimate for that possibility, or a different response to uncertainty. I strongly suspect that I will be better informed in the future, so my response is to continue earning interest on my money and only start donating to anything when I have a better idea of what is going on (or if I die, in which case the issue is forced).
The main source of uncertainty is whether the SIAI's approach is useful for developing FAI. Based on its output so far, my initial estimate is "probably not" (except insofar as they successfully raise awareness of the issues). This is balanced by my respect for the rationality and intelligence of the people involved in the SIAI, which is why I plan to wait until I get enough (logical) evidence to either correct "probably not" or to correct my current estimates about the fallibility of the people working with the SIAI.
there's no reason to think whatever miniscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse miniscule effects.
When I get in my car to drive to the grocery store, do you think there is any reason to favor the hypothesis that I will arrive at the grocery store over all the a priori equally unlikely hypotheses that I arrive at some other destination?
I usually think about this, not as expected utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but as them being altogether unreliable, because our numerical intuitions outside the ranges we're calibrated for are unreliable.
For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say "well, geez, 1e-10 is such a tiny number, why not?"
Which demonstrates that my brain isn't calibrated to work with numbers in that range, which is no surprise.
So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.
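For concreteness, here is the arithmetic that intuition is being asked to endorse (my reconstruction; I'm assuming "succeeding" means something like saving the world's roughly 7 billion people):

```latex
% Expected value of the hypothetical $500 donation above:
% an extra 1-in-7-billion chance of a payoff of ~7 billion lives.
\[
  \mathbb{E}[\text{lives saved}]
  = \underbrace{\frac{1}{7\times 10^{9}}}_{\approx 1.4\times 10^{-10}}
    \times \underbrace{7\times 10^{9}\ \text{lives}}_{\text{payoff if successful}}
  = 1\ \text{life per }\$500.
\]
```

The product looks respectable precisely because an absurdly small factor and an absurdly large one cancel, which is why the unreliability of either estimate dominates the conclusion.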
Upvoting but nitpicking one aspect:
It is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence.
No. It isn't very strong evidence by itself. Jonathan Sarfati is a chess master, published chemist, and a prominent young earth creationist. If we added all the major anti-evolutionists together it would easily include not just Sarfati but also William Dembski, Michael Behe, and Jonathan Wells, all of whom are pretty intelligent. There are some people less prominently involved who are also very smart such as Forrest Mims.
This is not the only example of this sort. In general, we live in a world where there are many, many smart people. That multiple smart people care about something can't do much beyond locate the hypothesis. One distinction is that most smart people who have looked at the SIAI have come away not thinking they are crazy, which is a very different situation from the sort of example given above, but by itself smart people having an interest is not strong evidence.
(Also, on a related note, see this subthread here which made it clear that what smart people think, even if one has a general consensus among smart people is not terribly reliable.)
The rational reasons to signal are outlined in the post Why Our Kind Can't Cooperate, and there are more good articles with the Charity tag.
My personal reasons for supporting SIAI are outlined entirely in this comment.
Please inform me if anyone knows of a better charity.
I consider the above form of futurism to be the "narrow view". It considers too few possibilities over too short a timespan.
I'm not academic enough to provide the defense you're looking for. Instead, I'll do what I did at the end of the above linked thread, and say you should read more source material. And no, I don't know what the best material is. And yes, this is SIAI's problem. They really do suck at marketing. I think it'd be pretty funny if they failed because they didn't have a catchy slogan...
I will give one probability estimate, since I already linked to it: SIAI fails in their mission AND all homo sapiens are extinct by the year 2100: 90 percent. I'm donating in the hopes of reducing that estimate as much as possible.
It's been downvoted - I guess - because it sits on the wrong side of a very interesting dynamic: what I call the "outside view dismissal" or "outside view attack". It goes like this:
A: From the outside, far too many groups discover that their supported cause is the best donation avenue. Therefore, be skeptical of any group advocating their preferred cause as the best donation avenue.
B: Ah, but this group tries to the best of their objective abilities to determine the best donation avenue, and their cause has independently come out as the best donation avenue. You might say we prefer it because it's the best, not the other way around.
A: From the outside, far too many groups claim to prefer it because it's the best and not the other way around. Therefore, be skeptical of any group claiming they prefer a cause because it is the best.
B: Ah, but this group has spent a huge amount of time and effort training themselves to be good at determining what is best, and an equal amount of time training themselves to notice common failure modes like reversing causal flows because it looks better.
A: From the outside, far too many groups claim such training for it to be true. Th...
I can't speak for anyone else, but I downvoted it because of the deadly combination of:
A. Unfriendly snarkiness, i.e. scare-quoting "rationalists" and making very general statements about the flaws of LW without any suggestions for improvements, and without a tone of constructive criticism.
B. Incorrect content, i.e. not referencing this article which is almost certainly the primary reason there are so many comments saying "I donated", and the misuse of probability in the first paragraph.
If it were just A, then I could appreciate the comment for making a good point and do my best to ignore the antagonism. If it were just B, then the comment is cool because it creates an opportunity to correct a mistake in a way that benefits both the original commenter and others, and adds to the friendly atmosphere of the site.
The combination, though, results in comments that don't add anything at all, which is why I downvoted srdiamond's comment.
Downvoted parent and grandparent. The grandparent because:
I had left it alone until I saw it given unwarranted praise and a meta karma challenge.
I find it really disheartening every time I come on here to find that a community of "rationalists" is so quick to muffle anyone who disagrees with LW collective opinion.
See the replies to all similar complaints.
Michael Anissimov posted the following on the SIAI blog:
Thanks to the generosity of two major donors: Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly, every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Singularity Institute exists to do so through its research, the Singularity Summit, and public education.
We support both direct engagement with the issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers from undergrads to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three-month stints. Our Resident Faculty, up from three researchers last year to four, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. Singularity Institute researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.
We are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time Singularity Institute donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life at an entrepreneurial group in Finland. Here's what Jaan has to say about us:
“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of SIAI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as SIAI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
– Jaan Tallinn, Singularity Institute donor
Make a lasting impact on the long-term future of humanity today — make a donation to the Singularity Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at institute@intelligence.org or read our new organizational overview.
-----
Kaj's commentary: if you haven't done so recently, do check out the SIAI publications page. There are several new papers and presentations, out of which I thought that Carl Shulman's Whole Brain Emulations and the Evolution of Superorganisms made for particularly fascinating (and scary) reading. SIAI's finally starting to get its paper-writing machinery into gear, so let's give them money to make that possible. There's also a static page about this challenge; if you're on Facebook, please take the time to "like" it there.
(Full disclosure: I was an SIAI Visiting Fellow in April-July 2010.)