I value my free time far too much to work for a living. So your model is correct on that count. I had planned to be mostly unemployed with occasional freelance programming jobs, and generally keep costs down.
But then a couple years ago my hobby accidentally turned into a business, and it's doing well. "Accidentally" because it started with companies contacting me and saying "We know you're giving it away for free, but free isn't good enough for us. We want to buy a bunch of copies." And because my co-founder took charge of the negotiations and other non-programming bits, so it still feels like a hobby to me.
Both my non-motivation to work and my willingness to donate a large fraction of my income have a common cause, namely thinking of money in far-mode, i.e. not alieving The Unit of Caring on either side of the scale.
you're too young (and didn't have much income before anyway) to have significant savings.
Err, I haven't yet earned as much from the lazy entrepreneur route as I would have if I had taken a standard programming job for the past 7 years (though I'll pass that point within a few months at the current rate). So don't go blaming my cohort's age if they haven't saved and/or donated as much as me. I'm with Rain in spluttering at how people can have an income and not have money.
Under the assumption that being rewarded with karma can motivate someone to make a donation, but if they make a donation, they do not respond to karma as an incentive when deciding how much to donate, then upvoting any donation is the best policy for maximizing money to SI. I'm not sure how realistic that model is, but it seems intuitive to me.
What do you expect to happen? We don't have enough users giving karma for donation to sustain a linear exchange rate in the [$20, $20000] range. Unless, I suppose, we give up any attempt at fine resolution over the [$1, $500] range.
In practice, what most people are probably doing is picking a threshold (possibly $0) beyond which they give karma for a donation. This could be improved: you could pick a large threshold beyond which you give 1 karma, and give fractional karma (by flipping a biased coin) below that threshold. However, if the large threshold were anywhere close to $20000, and your fractional karma scales linearly, then you would pretty much never give karma to the other donations.
Edit: after doing some simulations, I'm no longer sure the fractional approach is an improvement. It gives interesting graphs, though!
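The fractional-karma scheme described above is easy to simulate. A minimal sketch (the `threshold` value of $20,000 is taken from the discussion; the function name is illustrative): award 1 karma point with probability proportional to the donation size, capped at 1 at the threshold, so expected karma scales linearly below it.

```python
import random

def fractional_karma(donation, threshold, rng=random.random):
    """Award 1 karma with probability min(donation / threshold, 1),
    i.e. flip a coin biased by donation size.  Expected karma is
    linear in the donation below the threshold and 1 at or above it."""
    p = min(donation / threshold, 1.0)
    return 1 if rng() < p else 0

# With a $20,000 threshold, a $100 donation has p = 0.005,
# i.e. roughly one upvote per 200 such donations on average --
# which illustrates why small donations almost never get karma
# under a linear scheme with a large threshold.
```

This makes the trade-off in the parent comment concrete: linear scaling with a threshold near $20,000 gives almost no feedback to donors in the $1–$500 range, which is presumably why the simulations were unconvincing.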
If we knew the Singularity Institute's approximate budget, we could fix this by assuming log-utility in money, but this is complicated.
"No, she wouldn't say anything to me about Lucius afterwards, except to stay away from him. So during the Incident at the Potions Shop, while Professor McGonagall was busy yelling at the shopkeeper and trying to get everything under control, I grabbed one of the customers and asked them about Lucius."
Draco's eyes were wide again. "Did you really?"
Harry gave Draco a puzzled look. "If I lied the first time, I'm not going to tell you the truth just because you ask twice."
I donated $250.
Update: No, I apparently did not. For some reason the transfer from Google Checkout got rejected, and now PayPal too. Does anyone have an idea what might've gone wrong? I've a Hungarian bank account. My previous SI donations were fine, even with the same credit card if I recall correctly, and I'm sure that my card is still perfectly valid.
I'm having the same problem. I used the card to buy modafinil yesterday, which might raise a red flag in fraud detection software? But if you're having it too, I'd update in the direction of it being a problem on SIAI's end.
Has anyone successfully donated since Kutta posted?
edit - Amazon is declining my card as well.
edit 2 - It's sorted out now, just donated £185.
I have some money that I was saving for something like this, but I also just saw Eliezer's (very convincing) request for CFAR donations yesterday and heard a rumor that SIAI was trying to get people to donate to CFAR because they needed it more.
This seems weird to me because I would expect that with SIAI's latest announcement they have shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them, but I'd be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.
1) In the long run, for CFAR to succeed, it has to be supported by a CFAR donor base that doesn't funge against SIAI money. I expect/hope that CFAR will have a substantially larger budget in the long run than SIAI. In the long run, then, marginal x-risk minimizers should be donating to SIAI.
2) But since CFAR is at a very young and very vital stage in its development and has very little funding, it needs money right now. And CFAR really really needs to succeed for SIAI to be viable in the long-term.
So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...
...but...
...SIAI has previously supported CFAR, is probably going to make a loan to CFAR in the future, and therefore it doesn't matter as much exactly which organization you give to right now, except that if one maxes out its matching funds you probably want to donate to the other until it also maxes...
...and...
...even the judgment about exactly where a marginal dollar spent is more valuable is, necessarily, extremely uncertain to me. My own judgment favors CFAR at the current margins, but it's a very tough decision....
I have backup plans, but they tend to look a lot like "Try founding CFAR again."
I don't know of any good way to scale funding or core FAI researchers for SIAI without rationalists. There's other things I could try, and would if necessary try, but I spent years trying various SIAI-things before LW started actually working. Just because I wouldn't give up no matter what, doesn't mean there wouldn't be a fairly large chunk of success-probability sliced off if CFAR failed, and a larger chunk of probability sliced off if I couldn't make any alternative to CFAR work.
I realize a lot of people think it shouldn't be impossible to fund SIAI without all that rationality stuff. They haven't tried it. Lots of stuff sounds easy if you haven't tried it.
[SI has now] shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them
Remember that the original motivation for the waterline-raising/community-building stuff at SI was specifically to support SI's narrower goals involving technical research. Eliezer wrote in 2009 that "after years of bogging down [at SI] I threw up my hands and explicitly recursed on the job of creating rationalists," because Friendly AI is one of those causes that needs people to be "a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts."
So, CFAR's own efforts at waterline-raising and community-building should end up helping SI in the same way Less Wrong did, even though SI won't capture all or even most of that value, and even though CFAR doesn't teach classes on AI risk.
I've certainly found it to be the case that on average, people who get in contact with SI via an interest in rationality tend to be more useful than people who get in contact with SI via an interest in transhumanism or the singularity. (Though there are plenty of exceptions! E.g. Edwin Evans, Ri...
Why...?
Oh, right...
Basically, it's because I think both organizations Do Great Good with marginal dollars at this time, but the world is too uncertain to tell whether marginal dollars do more good at CFAR or SI. (X-risk reducers confused by this statement probably have a lower estimate of CFAR's impact on x-risk reduction than I do.) For normal humans who make giving decisions mostly by emotion, giving to the one they're most excited about should cause them to give the maximum amount they're going to give. For weird humans who make giving decisions mostly by multiplication, well, they've already translated "whichever organization you're most excited to support" into "whichever organization maximizes my expected utility [at least, with reference to the utility function which represents my philanthropic goals]."
Yeah, I wanted to catch Jaan Tallinn on the Top Donors page to prove some random middle-class person could do better charity than the rich types, but he keeps pulling further ahead and I dropped a couple places in the rankings :-/ Gotta work harder!
I was influenced both to donate and to donate more. Social proof is very powerful. I also would not have posted if I didn't think it would encourage people to donate or donate more.
I highly support changing your name--there's all sorts of bad juju associated with the term "singularity". My advice: keep the new name as bland as possible, avoiding anything with even a remote chance of entering the popular lexicon. The term "singularity" has suffered the same fate as "cybernetics".
Cross-posted here.
(The Singularity Institute maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried SI staff members.)
Thanks to the generosity of several major donors,† every donation to the Singularity Institute made from now until January 20th (deadline extended from the 5th) will be matched dollar-for-dollar, up to a total of $115,000! So please, donate now!
Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program.
(If you're unfamiliar with our mission, please see our press kit and read our short research summary: Reducing Long-Term Catastrophic Risks from Artificial Intelligence.)
Now that Singularity University has acquired the Singularity Summit, and SI's interests in rationality training are being developed by the now-separate CFAR, the Singularity Institute is making a major transition. Most of the money from the Summit acquisition is being placed in a separate fund for a Friendly AI team, and therefore does not support our daily operations or other programs.
For 12 years we've largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.
Now, the time has come to say "Mission Accomplished Well Enough to Pivot to Research." Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven't neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.
Accomplishments in 2012
Future Plans You Can Help Support
In the coming months, we plan to do the following:
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $115,000 of total matching funds has been provided by Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.
I will mostly be traveling (for AGI-12) for the next 25 hours, but I will try to answer questions after that.