From the SingInst blog:
Thanks to the generosity of several major donors†, every donation to the Singularity Institute made between now and August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
(Visit the challenge page to see a progress bar.)
Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to fund our research program and stage the upcoming Singularity Summit… which you can register for now!
† $125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.
2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.
The last 1.5 years were our biggest ever. Since the beginning of 2010, we have:
- Held our annual Singularity Summit, in San Francisco. Speakers included Ray Kurzweil, James Randi, Irene Pepperberg, and many others.
- Held the first Singularity Summit Australia and Singularity Summit Salt Lake City.
- Held a wildly successful Rationality Minicamp.
- Published seven research papers, including Yudkowsky’s much-awaited ‘Timeless Decision Theory’.
- Helped philosopher David Chalmers write his seminal paper ‘The Singularity: A Philosophical Analysis’, which has sparked broad discussion in academia, including an entire issue of the Journal of Consciousness Studies and a book from Springer devoted to responses to Chalmers’ paper.
- Launched the Research Associates program.
- Brought MIT cosmologist Max Tegmark onto our advisory board, published our Singularity FAQ, and much more.
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in New York City this year.
- Publish three chapters in the upcoming academic volume The Singularity Hypothesis, along with several other papers.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As PayPal co-founder and Singularity Institute donor Peter Thiel said:
“I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Singularity Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”
Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.
Thanks for your thoughtful comment.
I have little idea of how likely it is, but a nuclear winter could seriously hamper human mobility.
Widespread radiation would further hamper human mobility.
Redeveloping preexisting infrastructure could require natural resources of a magnitude comparable to those used to build the infrastructure we have today. Right now we have the efficient market hypothesis to help out with natural resource shortages, but upsetting the trajectory of our development could exacerbate the problem.
Note that a probability of 0.1% isn't so large (even taking into account all of the other things that could interfere with a positive singularity).
Reasoning productively about the expected value of these things presently seems to me to be too difficult (but I'm open to changing my mind if you have ideas).
With the exception of natural resource shortage (which I mentioned above), I doubt that this is within an order of magnitude of the significance of other relevant factors, provided that we're talking about a delay of fewer than 100 years (maybe similarly for a delay of 1000 years; I would have to think about it).
Similarly, I doubt that this would be game-changing.
These seem worthy of further contemplation: is the development of future technologies more likely to go well in Australia than in the current major powers, etc.?
This seems reasonable. As I mentioned, I presently attach high expected x-risk reduction to nuclear war prevention, but my confidence is sufficiently unstable at present that the value of devoting resources to gathering more information outweighs the value of donating to nuclear war reduction charities.
Yes. In the course of researching nuclear threat reduction charities I hope to learn what options are on the table.
On the other hand, there may not be low-hanging fruit attached to thinking about weird, hard-to-think-about technologies like MNT. I do, however, plan on looking into the Foresight Institute.
Thanks for clarifying, and I hope your research goes well. If I'm not mistaken, you can see the 0.1% calculation as the product of three things: the probability that nuclear war happens, the probability that if it happens it prevents any future positive singularity that would otherwise have happened, and the probability that a positive singularity would otherwise have happened. If the first and third probabilities are, say, 1/5 and 1/4, then the answer will be 1/20 of the middle probability, so your 0.1%-1% answer corresponds to a 2%-20% chance that …
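A minimal restatement of that arithmetic, writing p for the middle probability (the 1/5 and 1/4 figures are just the illustrative numbers from the sentence above):

\[
\underbrace{P(\text{war})}_{1/5}\cdot
\underbrace{P(\text{blocks singularity}\mid\text{war})}_{p}\cdot
\underbrace{P(\text{positive singularity otherwise})}_{1/4}
= \frac{p}{20},
\qquad
0.1\% \le \frac{p}{20} \le 1\% \;\Longrightarrow\; 2\% \le p \le 20\%.
\]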