Holy pickled waffles on a pogo stick. Thanks, dude.
Is there anything you're willing to say about how you acquired that dough? My model of you has earned less in a lifetime.
I value my free time far too much to work for a living. So your model is correct on that count. I had planned to be mostly unemployed with occasional freelance programming jobs, and generally keep costs down.
But then a couple years ago my hobby accidentally turned into a business, and it's doing well. "Accidentally" because it started with companies contacting me and saying "We know you're giving it away for free, but free isn't good enough for us. We want to buy a bunch of copies." And because my co-founder took charge of the negotiations and other non-programming bits, so it still feels like a hobby to me.
Both my non-motivation to work and my willingness to donate a large fraction of my income have a common cause, namely thinking of money in far-mode, i.e. not alieving The Unit of Caring on either side of the scale.
you're too young (and didn't have much income before anyway) to have significant savings.
Err, I haven't yet earned as much from the lazy entrepreneur route as I would have if I had taken a standard programming job for the past 7 years (though I'll pass that point within a few months at the current rate). So don't go blaming my cohort's age if they haven't saved and/or donated as much as me. I'm with Rain in spluttering at how people can have an income and not have money.
Under the assumption that karma can motivate someone to make a donation, but that once they have decided to donate they don't respond to karma as an incentive when choosing how much to give, upvoting every donation is the best policy for maximizing money to SI. I'm not sure how realistic that model is, but it seems intuitive to me.
What do you expect to happen? We don't have enough users giving karma for donation to sustain a linear exchange rate in the [$20, $20000] range. Unless, I suppose, we give up any attempt at fine resolution over the [$1, $500] range.
In practice, what most people are probably doing is picking a threshold (possibly $0) beyond which they give karma for a donation. This could be improved: you could pick a large threshold beyond which you give 1 karma, and give fractional karma (by flipping a biased coin) below that threshold. However, if the large threshold were anywhere close to $20000, and your fractional karma scales linearly, then you would pretty much never give karma to the other donations.
Edit: after doing some simulations, I'm no longer sure the fractional approach is an improvement. It gives interesting graphs, though!
If we knew the Singularity Institute's approximate budget, we could fix this by assuming log-utility in money, but this is complicated.
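To make the threshold-plus-coin-flip idea (and the log-utility tweak) concrete, here is a minimal Python sketch. The threshold and budget figures are made-up parameters for illustration, not anything SI has stated, and this is just one way to read the comments above:

```python
import math
import random

def upvote_probability(amount, threshold=500.0):
    """Probability of awarding one karma point for a donation of `amount`.

    Above the threshold the upvote is certain; below it, the probability
    scales linearly so that expected karma stays proportional to dollars.
    """
    if amount >= threshold:
        return 1.0
    return amount / threshold

def give_karma(amount, threshold=500.0):
    """Flip a biased coin and return 1 (upvote) or 0."""
    return 1 if random.random() < upvote_probability(amount, threshold) else 0

def upvote_probability_log(amount, threshold=500.0, budget=500000.0):
    """A log-utility variant: weight donations by log(1 + amount/budget),
    so huge donations aren't treated as linearly more karma-worthy."""
    u = math.log1p(amount / budget)
    u_max = math.log1p(threshold / budget)
    return min(1.0, u / u_max)
```

Under the linear version, a $50 donation with a $500 threshold gets an upvote 10% of the time, so over many donors the expected karma still tracks total dollars; the log variant compresses the gap between, say, $500 and $20,000 donations.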
"No, she wouldn't say anything to me about Lucius afterwards, except to stay away from him. So during the Incident at the Potions Shop, while Professor McGonagall was busy yelling at the shopkeeper and trying to get everything under control, I grabbed one of the customers and asked them about Lucius."
Draco's eyes were wide again. "Did you really?"
Harry gave Draco a puzzled look. "If I lied the first time, I'm not going to tell you the truth just because you ask twice."
Nice quote.
"Really?" is more polite to say than "I find that hard to believe, can you provide confirming evidence" or "[citation needed]", though. Also, sometimes people actually will say "No, I was kidding" if you ask them.
There is a largely innocuous conversation below this comment which has been banned in its entirety. Who did this? Why?
I continue to donate $1000 a month, and intend to reduce my retirement savings next year so I can donate more.
I have been donating $100 monthly on a subscription payment and will continue to do so.
Easier on the cash-flow than a lump donation. More fuzzies per year, too.
I just donated $250. Can't afford as much as last year; I switched to a lower-paying job that makes me happier.
I am looking forward to the ebooks. I hope you'll provide them in ePub format, for those of us who prefer that. [I was pleased to donate $40, which should soon be matched by my employer as part of the employee-match program, thus getting me double-matched!]
I donated $250.
Update: No, I apparently did not. For some reason the transfer from Google Checkout got rejected, and now PayPal too. Does anyone have an idea what might've gone wrong? I've a Hungarian bank account. My previous SI donations were fine, even with the same credit card if I recall correctly, and I'm sure that my card is still perfectly valid.
I'm having the same problem. I used the card to buy modafinil yesterday, which might raise a red flag in fraud detection software? But if you're having it too, I'd update in the direction of it being a problem on SIAI's end.
Has anyone successfully donated since Kutta posted?
edit - Amazon is declining my card as well.
edit 2 - It's sorted out now, just donated £185.
I just verified that donations in general are working via PayPal and Google Checkout. We'll investigate this specific issue to see where the problem is.
This is great news. Thanks to Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer for providing matching funds!
I have some money that I was saving for something like this, but I also just saw Eliezer's (very convincing) request for CFAR donations yesterday and heard a rumor that SIAI was trying to get people to donate to CFAR because they needed it more.
This seems weird to me because I would expect that, with SIAI's latest announcement, they have shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them. But I'd be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.
1) In the long run, for CFAR to succeed, it has to be supported by a CFAR donor base that doesn't funge against SIAI money. I expect/hope that CFAR will have a substantially larger budget in the long run than SIAI. In the long run, then, marginal x-risk minimizers should be donating to SIAI.
2) But since CFAR is at a very young and very vital stage in its development and has very little funding, it needs money right now. And CFAR really really needs to succeed for SIAI to be viable in the long-term.
So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...
...but...
...SIAI has previously supported CFAR, is probably going to make a loan to CFAR in the future, and therefore it doesn't matter as much exactly which organization you give to right now, except that if one maxes out its matching funds you probably want to donate to the other until it also maxes...
...and...
...even the judgment about exactly where a marginal dollar spent is more valuable is, necessarily, extremely uncertain to me. My own judgment favors CFAR at the current margins, but it's a very tough decision....
Thank you; that helps clarify the issue for me. Since people who know more seem to think it's a tossup and SIAI motivates me more, I gave $250 to them.
And CFAR really really needs to succeed for SIAI to be viable in the long-term.
That's an extremely strong claim. Is that actually your belief? Not merely that CFAR success would be useful to SIAI success? There is no alternate plan for SIAI to be successful that doesn't rely on CFAR?
I have backup plans, but they tend to look a lot like "Try founding CFAR again."
I don't know of any good way to scale funding or core FAI researchers for SIAI without rationalists. There's other things I could try, and would if necessary try, but I spent years trying various SIAI-things before LW started actually working. Just because I wouldn't give up no matter what, doesn't mean there wouldn't be a fairly large chunk of success-probability sliced off if CFAR failed, and a larger chunk of probability sliced off if I couldn't make any alternative to CFAR work.
I realize a lot of people think it shouldn't be impossible to fund SIAI without all that rationality stuff. They haven't tried it. Lots of stuff sounds easy if you haven't tried it.
How much money would you need magicked to allow you to shed fundraising and infrastructure, etc., and just hire and hole up with a dream team of hyper-competent maths wonks? Restated, at what fixed amount would SIAI be comfortably able to aggressively pursue its long-term research?
[SI has now] shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them
Remember that the original motivation for the waterline-raising/community-building stuff at SI was specifically to support SI's narrower goals involving technical research. Eliezer wrote in 2009 that "after years of bogging down [at SI] I threw up my hands and explicitly recursed on the job of creating rationalists," because Friendly AI is one of those causes that needs people to be "a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts."
So, CFAR's own efforts at waterline-raising and community-building should end up helping SI in the same way Less Wrong did, even though SI won't capture all or even most of that value, and even though CFAR doesn't teach classes on AI risk.
I've certainly found it to be the case that on average, people who get in contact with SI via an interest in rationality tend to be more useful than people who get in contact with SI via an interest in transhumanism or the singularity. (Though there are plenty of exceptions! E.g. Edwin Evans, Ri...
Why...?
Oh, right...
Basically, it's because I think both organizations Do Great Good with marginal dollars at this time, but the world is too uncertain to tell whether marginal dollars do more good at CFAR or SI. (X-risk reducers confused by this statement probably have a lower estimate of CFAR's impact on x-risk reduction than I do.) For normal humans who make giving decisions mostly by emotion, giving to the one they're most excited about should cause them to give the maximum amount they're going to give. For weird humans who make giving decisions mostly by multiplication, well, they've already translated "whichever organization you're most excited to support" into "whichever organization maximizes my expected utility [at least, with reference to the utility function which represents my philanthropic goals]."
I assume a mailed cheque will work?
This post made me super excited. I was just thinking about donating before I found this. Now I really have to. Thanks for the initiative.
I've just donated 500 Canadian dollars to the Singularity Institute (at the moment, 1 Canadian dollar = 1.01 US dollars).
[edited]
As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and ...
OK, this is big news. Don't know how I missed this one.
In general, I'd say that people's desire to be anonymous should be respected unless there's a very good reason to override it, and solving a puzzle is not a very good reason.
It helps people like me, who look at it almost like a competition. The more people competing, the merrier.
Yeah, I wanted to catch Jaan Tallinn on the Top Donors page to prove some random middle-class person could do better charity than the rich types, but he keeps pulling further ahead and I dropped a couple places in the rankings :-/ Gotta work harder!
Not sure if anyone else noticed, but the end date was pushed back to Jan 20. Although personally, I would rather donate to CFAR (and have done so: $500, and another $500 before the fundraiser timeframe).
...unless the donation bar is lagging, slightly less than a third of the hoped-for sum has been filled, with only about 11 days remaining. That's rather worrisome.
Do we get some kind of reasonable guarantee that there won't in the future be an even better matching offer (say a tripling of our impact), or is the idea here that the value of an SIAI donation is heavily time discounted?
We've never done such a drive in the past and have no current plans for one. We do have a pretty heavy discount rate. Sorry I can't say more.
Yep!
To stay honest though, if someone is reading this thread and planning to do this, they should contact SI now with the amount they're willing to match during a future drive... otherwise they're highly liable to fall prey to donor akrasia.
How encouraging is it to people to see comments saying people donated? To me it just seems like kinda self-aggrandizing karma-whoring. Have you read this thread and been influenced to donate, or to donate more?
I was influenced both to donate and to donate more. Social proof is very powerful. I also would not have posted if I didn't think it would encourage people to donate or donate more.
I highly support changing your name--there's all sorts of bad juju associated with the term "singularity". My advice, keep the new name as bland as possible, avoiding anything with even a remote chance of entering the popular lexicon. The term "singularity" has suffered the same fate as "cybernetics".
Cross-posted here.
(The Singularity Institute maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried SI staff members.)
Thanks to the generosity of several major donors,† every donation to the Singularity Institute made now until January 20th (deadline extended from the 5th) will be matched dollar-for-dollar, up to a total of $115,000! So please, donate now!
Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program.
(If you're unfamiliar with our mission, please see our press kit and read our short research summary: Reducing Long-Term Catastrophic Risks from Artificial Intelligence.)
Now that Singularity University has acquired the Singularity Summit, and SI's interests in rationality training are being developed by the now-separate CFAR, the Singularity Institute is making a major transition. Most of the money from the Summit acquisition is being placed in a separate fund for a Friendly AI team, and therefore does not support our daily operations or other programs.
For 12 years we've largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.
Now, the time has come to say "Mission Accomplished Well Enough to Pivot to Research." Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven't neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.
Accomplishments in 2012
Future Plans You Can Help Support
In the coming months, we plan to do the following:
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $115,000 of total matching funds has been provided by Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.
I will mostly be traveling (for AGI-12) for the next 25 hours, but I will try to answer questions after that.