I just put in 5100 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.
Thank you SO MUCH for the clarification VNKKET linked to. I was worried. I would usually discourage someone from donating all of their savings to any cause, including this one, but in this case it looks like you have thought it through and what you are doing a) makes sense and b) is the result of a well-thought-out lifestyle optimization process.
I'd be happy to talk with you or exchange email (my email is public) to discuss the details, both to learn to better optimize my own life and to try to help you with yours. I expect such efforts to have high returns, given the evidence that you actually do the things you think would be good lifestyle optimizations, at least some of the time.
I'm also desperately interested in better characterizing people who optimize their lifestyles and who try to live without fear, etc.
Good to hear about the successes, but I am still skeptical about this one:
Since the beginning of 2010, we have:...
Held a wildly successful Rationality Minicamp.
I have yet to see any actual substantiation for this claim beyond the SIAI blog's say-so and a few qualitative individual self-reports. I have not seen any attempt to extend and replicate this success, nor evidence that doing so would even be possible.
If it actually were a failure, how would we know? Would anyone there even admit it, or would they prefer to avoid making its leaders look bad?
Sorry to be the bad guy here, but this claim has been floating around for a while and looks like it will become one of those things that "everyone knows".
Wasn't there something similar a while ago? ... yes, there was. I can reasonably assume there will be others in the future. You are trying to get people to donate by appealing to an artificial sense of urgency ("Now is your chance to", "Donate now"). Beware that this triggers dark arts alarm bells.
Nevertheless, I have now donated an amount of money.
Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.
To be fair, it's just the heads that rise again, not the rest of the corpse... ah, I'm not helping, am I? :-)
If I've just met someone at a party, I'll tend to say "I'm having my head frozen"
I usually offer my name and ask them theirs.
I'm quite often asked about my necklace, and I'll say "It's my contract of immortality with the Cult of the Severed Head", or in some contexts, "It's my soul" or "It's my horcrux".
The key thing is for your voice to make it clear that you're not at all afraid and that you think this is what the high-prestige smart people do. Show the tiniest trace of defensiveness and they'll pounce.
Agreed; I'd personally like it if a planned schedule for major grants were disclosed regularly, maybe annually.
Anyway, I donated 500 USD.
I just donated Round(1000 Pi / 3) USD. I also had Google do an employer match.
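(For anyone who doesn't want to reach for a calculator, here's a quick sanity check of what that expression works out to; a trivial sketch in Python, not whatever tool the donor actually used:)

```python
import math

# Round(1000 Pi / 3): 1000 * pi / 3 = 1047.197..., which rounds to 1047.
print(round(1000 * math.pi / 3))  # 1047
```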
Strangely enough, I went through the 'donate publicly' link, but chose not to use Facebook, and in the end it called me 'Anonymous Donor'.
I am happy to see that the success of the previous matching program is being followed up with additional matching funds, and that there is such a broad base of sponsors. I have donated $2000 on top of my typical annual donation.
There's a major conflict of interest in accepting donations from Clippy.
I would accept donations from Lucifer himself if he were silly enough to give them to me. I don't see a problem. :)
2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress.
Do people here generally think that this is true? I don't see much of an intersection between Watson and AI; it seems like a few machine learning algorithms that approach Jeopardy problems in an extremely artificial way, much like chess engines approach playing chess. (Are chess engines artificial intelligence too?)
I actually do think it's a big deal, as well as being flashy, though not an extremely big deal: something along the lines of the best narrow-AI accomplishment in any given year, and the flashiest in any given 3-5 year period.
$10k for the most efficient instrument of existential risk reduction, the most efficient way to do good.
I've donated $512 on top of my monthly donation.
The safety implications of advanced AI are one of the most important (and under-appreciated) issues out there right now. It's an issue that humanity needs to think long and hard about. So I think that by organizing conferences and writing papers, SIAI is doing pretty much the right thing. I don't think they're perfect, but for me the way to help with that is by getting involved.
I am glad that people are standing up and showing their support, and also that people are voicing criticisms and showing that they are really thinking about the issue.
I hope to see some of you Oct 15-16 in New York!
I'm not entirely sure that I believe the premise of this game. Essentially, the claim is that 20 of SingInst's regular donors have extra money lying around that they are willing to donate to SingInst iff someone else donates the same amount. What do the regular donors intend to do with the money otherwise? Have they signed a binding agreement to all get together and blow the money on a giant party? Otherwise, why would they not just decide to donate it to SingInst at the end of the matching period anyway?
This seems relevant:
Five: US tax law prohibits public charities from getting too much support from big donors.
Under US tax law, a 501(c)(3) public charity must maintain a certain percentage of "public support". As with most tax rules, this one is complicated. If, over a four-year period, any one individual donates more than 2% of the organization's total support, anything over 2% does not count as "public support". If a single donor supported a charity, its public support percentage would be only 2%. If two donors supported a charity, its public support percentage would be at most 4%. Public charities must maintain a public support percentage of at least 10% and preferably 33.3%. Small donations - donations of less than 2% of our total support over a four-year period - count entirely as public support. Small donations permit us to accept more donations from our major supporters without sending our percentage of public support into the critical zone. Currently, the Singularity Institute is running short on public support - so please don't think that small donations don't matter!
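To make the 2% rule quoted above concrete, here's a toy illustration with made-up numbers (a sketch of the rule as described, not tax advice; the function and figures are hypothetical):

```python
# Toy illustration of the 2% "public support" rule described above.
# All numbers are made up; this is a sketch, not tax advice.

def public_support_pct(donor_totals, total_support):
    """Each donor's four-year total counts as public support only up to 2% of total support."""
    cap = 0.02 * total_support
    return 100 * sum(min(d, cap) for d in donor_totals) / total_support

total = 500_000  # hypothetical four-year total support

# One donor gives everything: only 2% counts as public support.
print(public_support_pct([500_000], total))      # 2.0

# 100 donors giving $5,000 each (1% apiece): all of it counts.
print(public_support_pct([5_000] * 100, total))  # 100.0
```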
Here's my totally non-binding plan for my extra $1100, which really was just lying around, budgeted but projected not to be spent: if we meet the full challenge, I donate $1100 to SingInst and have Microsoft match it as well. If we meet only e.g. 80%, I donate 80% of $1100, have Microsoft match it, and spend the rest on a party I wouldn't have had otherwise, linking y'all to tasteful pictures. That's a 3x multiplier on ~1% of the $125,000.
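Spelling out that arithmetic (a minimal sketch, assuming the employer match applies to my own $1100 as described above):

```python
# Multiplier arithmetic for the plan above.
donation = 1100
challenge_match = donation   # dollar-for-dollar challenge match
employer_match = donation    # Microsoft employer match

total_to_singinst = donation + challenge_match + employer_match
print(total_to_singinst / donation)  # 3.0 -> the "3x multiplier"
print(donation / 125_000)            # 0.0088 -> roughly 1% of the $125,000 goal
```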
Before your post, bentarm, my plan was somewhat different but I estimate it gave at least a 2.9x multiplier.
I understand the SI needs money and I understand a lot of discussion about this has ensued elsewhere, but I'm still skeptical that I can have the most impact with my money by donating to the SI, when I could be funding malaria nets, for instance.
There are two questions here that deserve separate consideration: donating to existential risk reduction vs. other (nearer-term, lower-uncertainty) philanthropy, and donating to SI vs. other x-risk reduction efforts. It seems to me that you should never be weighing SI against malaria nets directly; if you would donate to (SI / malaria nets) conditional on their effectiveness, you've already decided (for / against) x-risk reduction and should only be considering alternatives like (FHI / vaccination programs).
Thanks. You're right that I've been thinking about it wrong; I'll have to reconsider how I approach philanthropy. It's valuable to donate to research anyway, since research is what comes up with things like "malaria nets".
Glad I could help. Thanks for letting me know.
It's valuable to donate to research anyway, since research is what comes up with things like "malaria nets".
Good point; under uncertainty about x-risk vs. near-term philanthropy you might donate to organizations that could help answer that question, like GiveWell or SI/FHI.
I haven't watched the presentation, but 8 lives corresponds to only a one in a billion chance of averting human extinction per donated dollar, which corresponds (neglecting donation matching and the diminishing marginal value of money) to roughly a 1 in 2000 chance of averting human extinction from a doubling of the organization's budget for a year. That doesn't sound obviously crazy to me, though it's more than I'd attribute to an organization just on the basis that it claimed to be reducing extinction risk.
Note that the large number used in this particular back-of-envelope calculation is the world population of several billion, not the still much larger numbers involved in astronomical waste.
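Making the back-of-envelope explicit (a sketch under stated assumptions: a world population of ~7 billion, and my guess that doubling the budget for a year means roughly an extra $500,000, a figure not given in the thread):

```python
# Back-of-envelope check of the figures above. Assumptions are mine,
# not from the presentation or the thread.
lives_saved_per_dollar = 8
world_population = 7e9  # "several billion", as noted above

p_avert_per_dollar = lives_saved_per_dollar / world_population
print(p_avert_per_dollar)  # ~1.1e-9, i.e. about one in a billion per dollar

budget_doubling = 500_000  # hypothetical cost of doubling the budget for a year
print(p_avert_per_dollar * budget_doubling)  # ~5.7e-4, i.e. roughly 1 in 2000
```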
Keep in mind that estimation is the best we have. You can't appeal to Nature for not having been given a warning that meets a sufficient standard of rigor. Avoiding all actions of uncertain character that deal with huge consequences is certainly a bad strategy: any one such action might have a big chance of not working out, but taking none of them is guaranteed to be unhelpful.
My impression is that the risk of immediate extinction due to nuclear war is very small, but that a nuclear war could cripple civilization to the point of not being able to recover enough to bring about a positive singularity; it would also plausibly increase other x-risks. Intuitively, nuclear war would destabilize society, and people are less likely to take safety precautions when developing advanced technologies in an unstable society than they otherwise would be. I'd give a subjective estimate of 0.1%-1% for nuclear war preventing a positive singularity.
Don't we make this choice daily by choosing our preferred brand over Ethical Bean at Starbucks?
I hear the ethics at Starbucks are rather low-quality and in any case, surely Starbucks isn't the cheapest place to purchase ethics.
Bah! Listen, Eliezer, I'm tired of all your meta-hipsterism!
"Hey, let's get some ethics at Starbucks" "Nah, it's low-quality; I only buy a really obscure brand of ethics you've probably never heard of called MIRI". "Hey man, you don't look in good health, maybe you should see a doctor" "Nah, I like a really obscure form of healthcare, I bet you're not signed up for it, it's called 'cryonics'; it's the cool thing to do". "I think I like you, let's date" "Oh, I'm afraid I only date polyamorists; you're just too square". "Oh man, I just realized I committed hindsight bias the other day!" "I disagree, it's really the more obscure backfire effect which just got published a year or two ago." "Yo, check out this thing I did with statistics" "That's cool. Did you use Bayesian techniques?"
Man, forget you!
/angrily sips his obscure mail-order loose tea, a kind of oolong you've never heard of (Formosa vintage tie-guan-yin)
If you keep looking down the utility gradient, it's harder to escape local maxima because you're facing backwards.
This comment has been brought to you by me switching from Dvorak to Colemak.
Yeah, I intend to donate a good portion to Village Reach after I do some more thorough research on charity.
If you already know your decision, the value of the research is nil.
Having $1000 pre-filled makes me feel uncomfortable. I can understand the reasoning behind anchoring to a higher number, but I can't quite explain why it makes me uncomfortable about contributing at all. Perhaps a running-average pre-fill, like the one used by the Humble Indie Bundle 3, would be better.
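A minimal sketch of what a running-average pre-fill could look like, assuming the donation form can see a list of recent donation amounts (the function and numbers here are hypothetical):

```python
# Running-average pre-fill: default the form to the mean of recent donations
# rather than a fixed $1000 anchor.
def prefill_amount(recent_donations, default=100):
    if not recent_donations:
        return default  # fall back to a modest anchor
    return round(sum(recent_donations) / len(recent_donations))

print(prefill_amount([]))                      # 100
print(prefill_amount([25, 100, 500, 50, 75]))  # 150
```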
Such testing may show
I'm not sure if this is bad word choice, but if you genuinely don't know the results, then it seems disingenuous to focus on one of the three specific results without offering any further support for that stance. (If you do know the results, then I would love to see them ^.^)
Not many people heard about the Singularity Summit in Salt Lake City. Here is part of Luke Nosek's talk that struck me:
...I was a futurist all my life... but there was a strange detour [as a result of] my time with Paypal...
We all [the "Paypal mafia"] went off and started more companies [Yelp, YouTube, etc.]... and what you'd do in your 20s if you got that level of success was, "Well, I have some money. Now I need some more." This was the mentality...
In 2008 this changed for me, almost like a spiritual conversion. I met... William and M
I just noticed this hasn't been posted to SL4; I could do it, but maybe it would be better coming from someone at SingInst?
From the SingInst blog:
Thanks to the generosity of several major donors†, every donation to the Singularity Institute made now until August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
Donate now!
(Visit the challenge page to see a progress bar.)
Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!
† $125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.
2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.
The last 1.5 years were our biggest ever. Since the beginning of 2010, we have: ...
In the coming year, we plan to do the following: ...
We appreciate your support for our high-impact work. As PayPal co-founder and Singularity Institute donor Peter Thiel said: ...
Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.