If anyone would like to help with fundraising for Singularity Institute (I know the OP expressed interest in the other thread), I can offer coordination and organizing support to help make your efforts more successful.
louie.helm AT singinst.org
I also have ideas for people who would like to help but don't know where to start. Please contact me if you're interested!
For instance, you should probably publicly join SIAI's Facebook Cause Page and help us raise money there. We are stuck at $9,606.01... just a tad shy of $10,000 (which would be a nice psychological milestone to pass!). This makes us roughly the #350th most popular philanthropic cause on Causes.com... and puts us behind other pressing global concerns like the "Art Creation Foundation For Children in Jacmel Haiti" and "Romania Animal Rescue". Seriously!
And, yes: Singularity Institute does have other funds that were not raised on this site... but so do these other causes! It wouldn't hurt to look more respectable on public fundraising sites while simultaneously helping to raise money in a very efficient, publicly visible way. One project that might be helpful would be for someone to publicly track our assets...
The organization reported $118,803.00 in theft in 2009, resulting in a year-end asset balance lower than expected. The SIAI is currently pursuing legal restitution.
It isn't much harder to steal code than to steal money from a bank account. Given the nature of research being conducted by the SIAI, one of the first and most important steps would have to be to think about adequate security measures.
If you are a potential donor interested in mitigating risks from AI, then before contributing money you will have to make sure that your contribution does not increase those risks even further.
If you believe that risks from AI are to be taken seriously, then you should demand that any organisation studying artificial general intelligence establish measures against third-party intrusion and industrial espionage at least on par with the biosafety level 4 precautions required for work with dangerous and exotic agents.
The SIAI may already employ various measures against the theft of sensitive information, yet any evidence hinting at weak security should be taken seriously. In particular, the possibility that untrustworthy people can access critical material should be examined.
Upvoted for raising some important points. Ceteris paribus, one failure of internal controls is nontrivial evidence of future ones.
For these purposes one should distinguish between sections of the organization. Eliezer Yudkowsky and Marcello Herreshoff's AI work is a separate 'box' from other SIAI activities such as the Summit, Visiting Fellows program, etc. Eliezer is far more often said to be too cautious and secretive with respect to that than the other way around.
Note: the comment has been completely rewritten since the original wave of downvoting. It's much better now.
Personally, I expect even moderately complicated problems -- especially novel ones -- to not scale or decompose at all cleanly.
So, leaving aside all questions about who is smarter than whom, I don't expect a thousand smart people working an hour a month on a project to be nearly as productive as one smart person working eight hours a day.
If you could share your reasons for expecting otherwise, I might find them enlightening.
Is any other information publicly available at the moment about the theft? The total amount stolen is large enough that it's noteworthy as part of a consideration of SIAI's financial practices.
Yes, I want to know that steps have been taken to minimize the possibility of future thefts.
I don't know all the details of the financial controls, but I do know that members of the board have been given the ability to monitor bank transactions and accounts online at will to detect unusual activity and have been reviewing transactions at the monthly board meetings. There has been a major turnover of the composition of the board, which now consists of three major recent donors, plus Michael Vassar and Eliezer Yudkowsky.
Also, the SIAI pressed criminal charges rather than pursuing a civil suit, in order to cultivate an internal reputation that deters repeat offenses.
For one thing, the two risks are interrelated. Money is the life's blood of any organization. (Indeed, Eliezer regularly emphasizes the importance of funding to SIAI's mission). Thus, a major financial blow like this is a serious impediment to SIAI's ability to do its job. If you seriously believe that SIAI is significantly reducing existential risk to mankind, then this theft represents an appreciable increase in existential risk to mankind.
Second, SIAI's leadership is selling us an approach to thinking which is supposed to be of general application; i.e., it's not just for preventing AI-related disasters, it's also supposed to make you better at achieving more mundane goals in life. I can't say for sure without knowing the details, but if a small not-for-profit has $100k+ stolen from it, that very likely represents a failure of thinking on the part of the organization.
I agree. It reminds me of a fictional dialogue from a movie about the Apollo 1 disaster:
Clinton Anderson: [at the senate inquiry following the Apollo 1 fire] Colonel, what caused the fire? I’m not talking about wires and oxygen. It seems that some people think that NASA pressured North American to meet unrealistic and arbitrary deadlines and that in turn North American allowed safety to be compromised.
Frank Borman: I won’t deny there’s been pressure to meet deadlines, but safety has never been intentionally compromised.
Clinton Anderson: Then what caused the fire?
Frank Borman: A failure of imagination. We’ve always known there was the possibility of fire in a spacecraft. But the fear was that it would happen in space, when you’re 180 miles from terra firma and the nearest fire station. That was the worry. No one ever imagined it could happen on the ground. If anyone had thought of it, the test would’ve been classified as hazardous. But it wasn’t. We just didn’t think of it. Now whose fault is that? Well, it’s North American’s fault. It’s NASA’s fault. It’s the fault of every person who ever worked on Apollo. It’s my fault. I didn’t think the test was hazardous. No one did. I wish to God we had.
Does anyone know if the finances of the Cryonics Institute or Alcor have been similarly dissected and analyzed? That kind of paper could literally be the difference between life and death for many of us.
I would be willing to do this work, but I need some "me" time first. The SIAI post took a bunch of spare time and I'm behind on my guitar practice. So let me relax a bit and then I'll see what I can find. I'm a member of Alcor and John is a member of CI and we've already noted some differences so maybe we can split up that work.
Seconded about investment. That could reduce the collective action problem. One possible implementation here.
If there were a 100-page post written about choosing between Alcor and CI, I'd read it. I plan to be hustling people to sign up for cryonics until I'm a glass statue myself, so the more up-to-date information and transparency, the better.
This GiveWell thread includes a transcript of a discussion between GiveWell and SIAI representatives.
Michael Vassar is working on an idea he calls the "Persistent Problems Group" or PPG. The idea is to assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition. This would have obvious benefits for helping people understand what the literature has and hasn't established on important topics; it would also be a demonstration that there is such a thing as "skill at making sense of the world."
I am a little surprised about the existence of the Persistent Problems Group; it doesn't sound like it has a lot to do with SIAI's core mission (mitigating existential risk, as I understand it). I'd be interested in hearing more about that group and the logic behind the project.
Overall the transcript made me less hopeful about SIAI.
'Persistent Problems Group'? What is this, an Iain Banks novel? :)
(On a side-note, that sounds like a horrible idea. 'Yes, let's walk right into those rapidly revolving blades! Surely our rationality will protect us.')
"Michael Vassar's Persistent Problems Group idea does need funding, though it may or may not operate under the SIAI umbrella."
It sounds like they have a similar concern.
Is there any reason to believe that the Persistent Problems Group would do better at making sense of the literature than people who write survey papers? There are lots of survey papers published on various topics in the same journals that publish the original research, so if those are good enough we don't need yet another level of review to try to make sense of things.
I think of it this way:
Thank you very much for doing this. You've clearly put a lot of effort into making it both thorough and readable.
Formulate methods of validating the SIAI’s execution of goals.
Seconded. Being able to measure the effectiveness of the institute is important both for maintaining the confidence of their donors, and for making progress towards their long-term goals.
There's an issue related to the Singularity Summits that is tangential but worth mentioning: even if one assigns a very low probability to a Singularity-type event occurring, the Summits are still doing a very good job of getting interesting ideas about technology and its possible impact on the world out there, and of promoting a lot of interdisciplinary thinking that might not occur otherwise. I was also under the impression that the Summits were revenue negative, and even given that I would have argued that they are productive enough to be a good thing.
This is awesome. Thanks for all your hard work. I hope you will consider updating it in place when the 2010 form becomes available?
Please add somewhere near the top what the SIAI acronym stands for and a brief mention of what they do. I suggest, "The Singularity Institute for Artificial Intelligence (SIAI) is a non-profit research group working on problems related to existential risk and artificial intelligence, and the co-founding organization behind Less Wrong."
Michael Vassar isn't paying himself enough. $52K/yr is not much in either New York City or San Francisco. Or North Dakota, for that matter.
SIAI seems to be paying the minimum amount that leaves each worker effective instead of scrambling to reduce expenses or find other sources of income. Presumably, SIAI has a maximum that it judges each worker to be worth, and Eliezer and Michael are both under their maximums. That leaves the question of where these salaries fall in that range.
I believe Michael and Eliezer are both being paid near their minimums because they know SIAI is financially constrained and very much want to see it succeed, and because their salaries seem consistent with at-cost living in the Bay Area.
I'm speculating on limited data, but the most likely explanation for the salary disparity is that Eliezer's minimum is higher, possibly because Michael's household has other sources of income. I don't think marriage factors into the question.
The reason my salary shows as $95K in 2009 is that Paychex screwed up and paid my first month of salary for 2010 in the 2009 tax year. My actual salary was, I believe, constant or roughly so through 2008-2010.
The previous draft, with 91 comments: http://lesswrong.com/r/discussion/lw/5fo/siai_fundraising/
The older draft contains some misinformation. Much is corrected in the new version. I would prefer people use the new version.
I don't know exactly how much the SIAI is spending on food and fancy tablecloths at the Singularity Summit, but I don't think I care: it's growing and showing better results on the revenue chart each year.
Not too much at all -- at the 2010 Summit, food/coffee was served at Boudin nearby rather than using the more expensive catering service of the hotel.
Awesome idea for a post! You've clearly done a lot of thorough research, and I appreciate the fact that you're sharing it with everyone here.
Images are hosted on TinyPic.com and may not be visible behind corporate firewalls.
I understand that there is an image hosting service on LW accessible through the article editor. Any particular reason it was not convenient to use? It's generally better to keep the content local, to avoid issues with external services suddenly breaking in a few months or years.
In terms of suggestions for SIAI, I'd like to see SIAI folks write up their thinking on the controversial AI topics that SIAI has taken a stand on, such as this, this, and the likelihood of hard takeoff. When I talk to Eliezer, there's a lot that he seems to take for granted that I haven't seen any published explanation of his thinking for. I get the impression he's had a lot of unproductive AI discussions with people who aren't terribly rational, but AI seems like an important enough topic for him to try to identify and prevent the failure modes that th...
This is an excellent post! Does anyone know of a similar examination of the Future of Humanity Institute which is led by Nick Bostrom? I just can't evaluate if FHI or SIAI has greater potential to reduce existential risks. And, maybe even more importantly, does the FHI need donations as badly as the SIAI? Any suggestions?
the SIAI will have to grapple with the high cost of recruiting top tier programmers
Hm, well they're not looking for coders now: Please remember that, at this present time, we are looking for breadth of mathematical experience, not coding skill as such. (Source.) Additionally, I emailed Michael Vassar a description of a rough plan to develop my ability to write code that runs correctly on the first try and he seemed uncertain about the value of implementing it. (Of course, maybe he's uncertain because the other plans of mine I'd shared seemed just as ...
Maybe, but if SIAI's goal is just to employ Eliezer for as little money as possible then that's not an important consideration.
The real reason SIAI wants to pay Eliezer money beyond what he needs to subsist on is so he can buy luxuries for himself, have financial security, have whatever part of his brain that associates high income with high status be satisfied, and feel good about his employment at SIAI. These are good reasons to pay Eliezer more than a living wage. If Eliezer didn't have any utility for money beyond the first $50K, I don't think it would be sensible to pay him more than that. I don't see how hypothetical programming careers come into any of this.
ETA: I guess maybe the hypotheticals could be important if we're trying to encourage young geniuses to follow Eliezer's path instead of getting careers in industry?
Yep. The way it actually works is that I'm on the critical path for our organizational mission, and paying me less would require me to do things that take up time and energy in order to get by with a smaller income. Then, assuming all goes well, future intergalactic civilizations would look back and think this was incredibly stupid; in much the same way that letting billions of person-containing brains rot in graves, and humanity allocating less than a million dollars per year to the Singularity Institute, would predictably look pretty stupid in retrospect. At Singularity Institute board meetings we at least try not to do things which will predictably make future intergalactic civilizations think we were being willfully stupid. That's all there is to it, and no more.
I have an image of Eliezer queued up in a coffee shop, guiltily eyeing up the assortment of immodestly priced sugary treats. The reptilian parts of his brain have commandeered the more recently evolved parts of his brain into fervently computing the hedonic calculus of an action that other, more foolish types, might misclassify as a sordid instance of discretionary spending. Caught staring into the glaze of a particularly sinful muffin, he now faces a crucial choice. A cognitive bias, thought to have been eradicated from his brain before the SIAI was founded, seizes its moment. "I'll take the triple chocolate muffin thank you" Eliezer blurts out. "Are you sure?" the barista asks. "Well I can't be 100% sure. But the future of intergalactic civilizations may very well depend on it!"
In accordance with the general fact that "calories in - calories out" is complete bullshit, I've had to learn that sweet things are not their caloric content, they are pharmaceutical weight-gain pills with effects far in excess of their stated caloric content. So no, I wouldn't be able to eat a triple chocolate muffin, or chocolate cake, or a donut, etcetera. But yes, when I still believed the bullshit and thought the cost was just the stated caloric content, I sometimes didn't resist.
Luckily a juicy porterhouse steak is a nice stand-in for a triple chocolate muffin. Unfortunately they don't tend to sell them at coffee shops.
Perhaps I'll end my career as a mathematician to start a paleo coffee shop.
I fully expect that less than 0.1% of mathematicians are working on math anywhere near as important as starting a chain of paleo coffee shops. What are you working on?
Formulate methods of validating the SIAI’s execution of goals. It appears that the Summit is an example of efficient execution of the reducing existential risk goal by legitimizing the existential risk and AGI problem space and by building networks among interested individuals. How will donors verify the value of SIAI core research work in coming years?
This is the key to assessing organizational effectiveness. There are various outputs we can measure: The growth of the LW community and its activities are surely important ones. We might also want to have...
The money is isn’t missing, though.
I assume the "is" is a typo.
Also: thank you for this post. My confidence in the SIAI has been bolstered.
Given that Eliezer earns more than me (or at least did while $AUD wasn't as strong), I am a little curious as to how much he donates to charity. I mean, if he is going to call on others to donate...
It's a little trite to say it, but there is an underlying topic there of some interest: a balance between conflicting signalling incentives, as well as the real practical question of how actual caring works in practice at the extreme end.
I am a little curious as to how much he donates to charity. I mean, if he is going to call on others to donate...
His call takes the form "work where you have a comparative advantage, and donate the money where it will do the most expected good." In his case, his comparative advantage lines up exactly with his expectation of maximum good, so the only rational way for him to give money to charity is to reduce his salary until further reductions would reduce his efficacy at saving the world.
Which is what he's said he does.
Disclaimer
I suppose it's sort of a Disclaimer (you have not owned anything which is now SIAI), but Disclosure seems more accurate still.
Thank you for the summary, I have considered looking for information on SIAI's costs before, so presenting it in a readable way is helpful.
Hm. I'd rather have seen more of the analysis on whether what they do with the money is useful, but this is something.
12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.
Disclaimer
Notes
Introduction
Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.
I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.
So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.
My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.
The SIAI's Form 990s are available at GuideStar and Foundation Center. You must register in order to access the files at GuideStar.
SIAI Financial Overview
The Singularity Institute for Artificial Intelligence (SIAI) is a public organization working to reduce existential risk from future technologies, in particular artificial intelligence. "The Singularity Institute brings rational analysis and rational strategy to the challenges facing humanity as we develop cognitive technologies that will exceed the current upper bounds on human intelligence." The SIAI also founded Less Wrong.
The graphs above offer an accurate summary of the SIAI’s financial state since 2002. Sometimes the end-of-year balances listed in the Form 990 don’t match what you’d get if you did the math by hand. These are noted as discrepancies between the filed year-end balance and the expected year-end balance, or between the filed year-start balance and the expected year-start balance.
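For anyone reproducing the check from the filings, the arithmetic is simply: expected year-end assets = year-start assets + revenue - expenses, and anything else is a discrepancy. Below is a minimal sketch of that reconciliation in Python; the rows are made-up placeholder figures, not actual SIAI Form 990 numbers.

```python
# Minimal sketch: reconcile filed year-end balances against the hand-computed value.
# The rows below are hypothetical placeholders, NOT actual SIAI Form 990 figures.
rows = [
    # (year, year_start_assets, revenue, expenses, filed_year_end_assets)
    (2007, 95_000.00, 500_000.00, 400_000.00, 195_000.00),
    (2008, 195_000.00, 450_000.00, 480_000.00, 160_000.00),
]

for year, start, revenue, expenses, filed_end in rows:
    expected_end = start + revenue - expenses
    discrepancy = filed_end - expected_end
    if abs(discrepancy) > 0.01:  # anything beyond rounding is worth noting
        print(f"{year}: filed ${filed_end:,.2f} vs expected ${expected_end:,.2f} "
              f"(discrepancy {discrepancy:+,.2f})")
    else:
        print(f"{year}: balances reconcile")
```

A gap like the hypothetical -$5,000 in the second row is the kind of discrepancy flagged in the graphs.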
The SIAI has generated a revenue surplus every year except 2008. The 2008 deficit appears to be a cashing out of excess surplus from 2007. Asset growth indicates that the SIAI is good at utilizing the funds it has available without overspending. The organization is expanding its menu of services, but not so fast that it risks going broke.
Nonetheless, the current asset balance is insufficient to sustain a year of operation at the existing rate of expenditure. A significant loss of revenue from donations would result in a shrinkage of services. Such a loss may be unlikely, but a reasonable goal for the organization would be to build up a year's reserves.
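To make the reserves point concrete, here is the runway arithmetic as a minimal sketch, again with hypothetical placeholder figures rather than actual filing data: a year of reserves simply means year-end assets at least equal to annual expenses.

```python
# Minimal sketch of the "year of reserves" arithmetic with placeholder numbers,
# not actual SIAI figures.
year_end_assets = 200_000.00   # hypothetical
annual_expenses = 500_000.00   # hypothetical

monthly_burn = annual_expenses / 12
months_of_runway = year_end_assets / monthly_burn
print(f"Runway if donations stopped: {months_of_runway:.1f} months")

# A one-year cushion would require assets >= annual expenses.
shortfall = max(0.0, annual_expenses - year_end_assets)
print(f"Additional reserves needed for a one-year cushion: ${shortfall:,.0f}")
```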
Revenue
Revenue is composed of public support, program service revenue (events/conferences held, etc.), and investment interest. The "Other" category tends to include items like Amazon.com affiliate income.
Income from public support has grown steadily, with a notable, recurring increase starting in 2006. This increase is the result of new contributions from big donors. For example, public support in 2007 is largely composed of significant contributions from Peter Thiel ($125k), Brian Cartmell ($75k), and Robert F. Zahra Jr ($123k), for $323k in total large-scale individual contributions (breakdown below).
In 2007 the SIAI started receiving income from program services. Currently all "Program Service" revenue is from operation of the Singularity Summit. In 2010 the summit generated surplus revenue for the SIAI. This is a significant achievement, as it means the organization has created a sustainable service that could fund further services moving forward.
A specific analysis of the summit is below.
Expenses
Expenses are composed of grants paid to winners, benefits paid to members, officer compensation, contracts, travel, program services, and an "Other" category.
The contracts column in the chart below includes legal and accounting fees. The other column includes administrative fees and other operational costs. I didn’t see reason to break the columns down further. In many cases the Form 990s provide more detailed itemization. If you care about how much officers spent on gas or when they bought new computers you might find the answers in the source.
I don’t have data for 2000 or 2001, but I left the rows in the spreadsheet in case they can be filled in later.
Program expenses have grown over the years, but not unreasonably. Indeed, officer compensation has declined steadily for several years. The grants in 2002, 2003, and 2004 were paid to Eliezer Yudkowsky for work relevant to Artificial Intelligence.
The program expenses category includes operating the Singularity Summit, Visiting Fellows Program, etc. Some of the cost of these programs is also included in the other category. For example, the 2007 Singularity Summit is reported as costing $101,577.00, but this total is accounted for across multiple sections.
It appears that 2009 was a more productive year than 2008 and also less expensive. 2009 saw a larger Singularity Summit than in 2008 and also the creation of the Visiting Fellows Program.
Big Donors
This is not an exhaustive list of contributions. The SIAI’s 2009 filing details major support donations for several previous years. Contributions in the 2010 column are derived from http://intelligence.org/donors. Known contributions of less than $5,000 are excluded for the sake of brevity. The 2006 donation from Peter Thiel is sourced from a discussion with the SIAI.
Peter Thiel and several other big donors compose the bulk of the organization's revenue. It would be good to see a broader base of donations moving forward. Note, however, that the base of donations has been improving. I don't have the 2010 Form 990 yet, but it appears to be the best year yet in terms of both the quantity of donations and the number of individual donors (based on conversation with SIAI members).
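One way to track the hoped-for broadening year over year would be a simple concentration figure: the share of public support coming from the largest donors. A minimal sketch, using hypothetical placeholder amounts rather than the actual donor list:

```python
# Minimal sketch: donor concentration as the share of public support
# contributed by the top N donors. Figures are hypothetical placeholders.
donations = [120_000, 80_000, 50_000, 20_000, 10_000, 5_000, 2_500, 1_000]

def top_n_share(amounts, n=3):
    """Fraction of total contributions coming from the n largest donors."""
    total = sum(amounts)
    top = sum(sorted(amounts, reverse=True)[:n])
    return top / total if total else 0.0

print(f"Top-3 donor share: {top_n_share(donations):.0%}")
```

A falling top-3 (or top-10) share from one filing year to the next would be direct evidence of the broadening base described above.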
Officer Compensation
From 2002 to 2005, Eliezer Yudkowsky received compensation in the form of grants from the SIAI for AI research. The Form 990s note that no public funds were used for Eliezer’s research grants, as he is also an officer. Starting in 2006, all compensation for key officers is reported as salary instead of grants.
Compensation spiked in 2006, the same year of greatly increased public support. Nonetheless, officer compensation has decreased steadily despite continued increases in public support. It appears that the SIAI has been managing its resources carefully in recent years, putting more money into programs than into officer compensation.
Eliezer's base compensation as salary increased 20% in 2008. It seems reasonable to compare Eliezer's salary with that of professional software developers. Eliezer would be able to make a fair amount more working in private industry as a software developer.
Mr. Yudkowsky clarifies: "The reason my salary shows as $95K in 2009 is that Paychex screwed up and paid my first month of salary for 2010 in the 2009 tax year. My actual salary was, I believe, constant or roughly so through 2008-2010." In that case, we would expect the 2010 Form 990 to show a salary reduced by one month's pay.
Moving forward, the SIAI will have to grapple with the high cost of recruiting top tier programmers and academics to do real work. I believe this is an argument for the SIAI improving its asset sheet. More money in the bank means more of an ability to take advantage of recruitment opportunities if they present themselves.
Singularity Summit
Founded in 2006 by the SIAI in cooperation with Ray Kurzweil and Peter Thiel, the Singularity Summit focuses on a broad number of topics related to the Singularity and emerging technologies. (1)
The Singularity Summit was free until 2008 when the SIAI chose to begin charging registration fees and accepting sponsorships. (2)
Attendee counts are estimates drawn from SIAI Form 990 filings. 2010 is purported to be the largest conference so far. Beyond the core conference attendees, hundreds of thousands of online viewers are reached through recordings of the Summit sessions. (A)
The cost of running the Summit has increased annually, but revenue from sponsorships and registration has kept pace. The conference carries logistical and administrative costs, but it doesn't meaningfully impact the SIAI budget. This makes the conference a valuable blend of outreach and education. If the conference convinces someone to donate or in some way directly support work against existential risks, the benefits are effectively free (or at the very least come at no cost to other programs).
Is the Singularity Summit successful?
It’s difficult to evaluate the success of conferences. So many of the benefits are realized downstream of the actual event. Nonetheless, the attendee counts and widening exposure seem to bring immense value for the cost. Several factors contribute to a sense that the conference is a success:
When discussing “future shock levels” -- gaps in exposure to and understanding of futurist concepts -- Eliezer Yudkowsky wrote, “In general, one shock level gets you enthusiasm, two gets you a strong reaction - wild enthusiasm or disbelief, three gets you frightened - not necessarily hostile, but frightened, and four can get you burned at the stake.” (7) Most futurists are familiar with this sentiment. Increased public exposure to unfamiliar concepts through the positive media coverage brought about by the Singularity Summit works to improve the legitimacy of those concepts and reduce future shock.
The result is that hard problems get easier to solve. Experts interested in helping, but afraid of social condemnation, will be more likely to do core research. The curious will be further motivated to break problems down. Vague far-mode thinking about future technologies will, for a few, shift into near-mode thinking about solutions. Public reaction to what would otherwise be shocking concepts will shift away from the extreme. The future becomes more conditioned to accept the real work and real costs of battling existential risk.
SIAI Milestones
This is not a complete list of SIAI milestones, but covers quite a few of the materials and events that the SIAI has produced over the years.
2005
2006
2007
2008
2009
Significant detail on 2009 achievements is available here. More publications are available here.
Papers and talks from SIAI fellows produced in 2009:
* Text for this list of papers reproduced from here.
A list of achievements, papers, and talks from 2010 is pending. See also the Singularity Summit content links above.
Further Editorial Thoughts...
Prior to doing this investigation I had some expectation that the SIAI was a money losing operation. I didn’t expect the Singularity Summit to be making money. I had an expectation that Eliezer probably made around $70k (programmer money discounted for being paid by a non-profit). I figured the SIAI had a broad donor base of small donations. I was off base on all counts.
I had some expectation that the SIAI was a money losing operation.
I had weak confidence in this belief, as I don’t know a lot about the finances of public organizations. The SIAI appears to be managing its cash reserves well. It would be good to see the SIAI build up some asset reserves so that it could operate comfortably in years where public support dips or so that it could take advantage of unexpected opportunities.
Overall, the allocation of funds strikes me as highly efficient.
I didn’t expect the Singularity Summit to be making money.
This was a surprising finding; I had incorrectly conditioned my expectation on experiences working with game industry conferences. I don't know exactly how much the SIAI is spending on food and fancy tablecloths at the Singularity Summit, but I don't think I care: it's growing and showing better results on the revenue chart each year. If you attend the conference and contribute to the event you add pure value. As discussed above, the benefits of the conference appear to fall far downstream in the “reducing existential risk” category. Losing the Summit would be a blow to ensuring a safe future.
I know that the Summit will not itself do the hard work of dissolving and solving problems, or of synthesizing new theories, or of testing those theories, or of implementing solutions. The value of the Summit lies in its ability to raise awareness of the work that needs to be done, to create networks of people to do that work, to lower public shock at the implications of that work, and generate funding for those doing that work.
I had an expectation that Eliezer probably made around $70k.
Eliezer's compensation is slightly more than I thought. I'm not sure what upper bound I would have balked at or would balk at. I do have some concern about the cost of recruiting additional Research Fellows. The cost of additional RFs has to be weighed against new programs like Visiting Fellows.
At the same time, the organization has been able to expand services without draining the coffers. A donor can hold a strong expectation that the bulk of their donation will go toward actual work in the form of salaries for working personnel or events like the Visiting Fellows Program.
I figured the SIAI had a broad donor base of small donations.
I must have been out to lunch when making this prediction. I figured the SIAI was mostly supported by futurism enthusiasts and small scale rationalists.
The organization has a heavy reliance on major donor support. I would expect the 2010 filing to reveal a broadening of revenue, but I do not expect the organization to have become independent of big donor support. Big donor support is a good thing to have, but more long term stability would be provided by a broader base of supporters.
My suggestions to the SIAI:
Moving forward:
John Salvatier provided me with good insight into next steps for gaining further clarity into the SIAI’s operational goals, methodology, and financial standing.
Conclusion
At present, the financial position of the SIAI seems sound. The Singularity Summit stands as a particular success that should be acknowledged. The organization's ability to reduce officer compensation while expanding programs is also notable.
Tax documents can only tell us so much. A deeper picture of the SIAI would reveal more of the moving parts within the organization. It would provide a better account of monthly activities and a means to measure future success or failure. The question for many supporters will not be “should I donate?” but “should I continue to donate?” That question can be answered by increased and ongoing transparency.
It is important that those who are concerned with existential risks, AGI, and the safety of future technologies and who choose to donate to the SIAI take a role in shaping a positive future for the organization. Donating in support of AI research is valuable, but donating and also telling others about the donation is far more valuable.
Consider the Sequence post ‘Why Our Kind Can’t Cooperate.’ If the SIAI is an organization worth supporting, and given that they are working in a problem space that currently only has strong traction with “our kind,” then there is a risk of the SIAI failing to reach its maximum potential because donors do not coordinate successfully. If you are a donor, stand up and be counted. Post on Less Wrong and describe why you donated. Let the SIAI post your name. Help other donors see that they aren’t acting alone.
Similarly, if you are critical of the SIAI think about why and write it up. Create a discussion and dig into the details. The path most likely to increase existential risk is the one where rational thinkers stay silent.
The SIAI’s current operating budget and donor revenue are very small. It is well within our community’s ability to effect change.
My research has led me to the conclusion I should donate to the SIAI (above my previous pledge in support of rationality boot camp). I already donate to Alcor and am an Alcor member. I have to determine an amount for the SIAI that won't cause wife aggro. Unilateral household financial decisions increase my personal existential risk. :P I will update this document or make a comment post when I know more.
References:
My working spreadsheet is here.
(1) http://www.singularitysummit.com/
(2) http://lesswrong.com/lw/ts/singularity_summit_2008/
(3) http://www.popsci.com/scitech/article/2009-10/singularity-summit-2009-singularity-near
(4) http://www.popularmechanics.com/technology/engineering/robots/4332783
(5) http://www.guardian.co.uk/technology/2008/nov/06/artificialintelligenceai-engineering
(6) http://www.time.com/time/health/article/0,8599,2048138-1,00.html
(7) http://www.sl4.org/shocklevels.html
(A) Summit Content