Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Update: My full response to Holden is now here.
As Holden said, I generally think that Holden's objections to SI "are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI)," and we are working hard to fix both categories of issues.
In this comment I would merely like to argue for one small point: that the Singularity Institute is undergoing comprehensive changes — changes which I believe to be improvements that will help us to achieve our mission more efficiently and effectively.
Holden wrote:
Louie Helm was hired as Director of Development in September 2011. I was hired as a Research Fellow that same month, and made Executive Director in November 2011. Below are some changes made since September. (Pardon the messy presentation: LW cannot correctly render tables in comments.)
SI before Sep. 2011: Very few peer-reviewed research publications.
SI today: More peer-reviewed publications coming in 2012 than in all past years combined. Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.
SI before Sep. 2011: No donor database / a very broken one.
SI today: A comprehensive donor database.
SI before Sep. 2011: Nearly all work performed directly by SI staff.
SI today: Most work outsourced to remote collaborators so that SI staff can focus on the things that only they can do.
SI before Sep. 2011: No strategic plan.
SI today: A strategic plan developed with input from all SI staff, and approved by the Board.
SI before Sep. 2011: Very little communication about what SI is doing.
SI today: Monthly progress reports, plus three Q&As with Luke about SI research and organizational development.
SI before Sep. 2011: No list of the research problems SI is working on.
SI today: A long, fully-referenced list of research problems SI is working on.
SI before Sep. 2011: Very little direct management of staff and projects.
SI today: Luke monitors all projects and staff work, and meets regularly with each staff member.
SI before Sep. 2011: Almost no detailed tracking of the expense of major SI projects (e.g. Summit, papers, etc.). The sole exception seems to be that Amy was tracking the costs of the 2011 Summit in NYC.
SI today: Detailed tracking of the expense of major SI projects for which this is possible (Luke has a folder in Google docs for these spreadsheets, and the summary spreadsheet is shared with the Board).
SI before Sep. 2011: No staff worklogs.
SI today: All staff members share their worklogs with Luke, Luke shares his worklog with all staff plus the Board.
SI before Sep. 2011: Best practices not followed for bookkeeping/accounting; accountant's recommendations ignored.
SI today: Meetings with consultants about bookkeeping/accounting; currently working with our accountant to implement best practices and find a good bookkeeper.
SI before Sep. 2011: Staff largely separated, many of them not well-connected to the others.
SI today: After a dozen or so staff dinners, staff much better connected, more of a team.
SI before Sep. 2011: Want to see the basics of AI Risk explained in plain language? Read The Sequences (more than a million words) or this academic book chapter by Yudkowsky.
SI today: Want to see the basics of AI Risk explained in plain language? Read Facing the Singularity (now in several languages, with more being added) or listen to the podcast version.
SI before Sep. 2011: Very few resources created to support others' research in AI risk.
SI today: IntelligenceExplosion.com, Friendly-AI.com, list of open problems in the field, with references, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk.
SI before Sep. 2011: A hard-to-navigate website with much outdated content.
SI today: An entirely new website that is easier to navigate and has much new content (nearly complete; should launch in May or June).
SI before Sep. 2011: So little monitoring of funds that $118k was stolen in 2010 before SI noticed. (Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.)
SI today: Our bank accounts have been consolidated, with 3-4 people regularly checking over them.
SI before Sep. 2011: SI publications exported straight to PDF from Word or Google Docs, sometimes without even author names appearing.
SI today: All publications being converted into a slick, usable LaTeX template (example), with all references checked and put into a central BibTeX file.
SI before Sep. 2011: No write-up of our major public technical breakthrough (TDT) using the mainstream format and vocabulary comprehensible to most researchers in the field (this is what we have at the moment).
SI today: Philosopher Rachael Briggs, whose papers on decision theory have been twice selected for the Philosopher's Annual, has been contracted to write an explanation of TDT and publish it in one of a select few leading philosophy journals.
SI before Sep. 2011: No explicit effort made toward efficient use of SEO or our (free) Google Adwords.
SI today: Highly optimized use of Google Adwords to direct traffic to our sites; currently working with SEO consultants to improve our SEO (of course, the new website will help).
(Just to be clear, I think this list shows not that "SI is looking really great!" but instead that "SI is rapidly improving and finally reaching a 'basic' level of organizational function.")
And note that these improvements would not and could not have happened without more funding than the level of previous years - if, say, everyone had been waiting to see these kinds of improvements before funding.
Really? That's not obvious to me. Of course you've been around for all this and I haven't, but here's what I'm seeing from my vantage point...
Recent changes that cost very little:
Stuff that costs less than some other things SI had spent money on, such as funding Ben Goertzel's AGI research or renting downtown Berkeley apartments for the later visiting fellows:
Do you disagree with these estimates, or have I misunderstood what you're claiming?
A lot of charities go through this pattern before they finally work out how to transition from a board-run/individual-run tax-deductible band of conspirators to a professional staff-run organisation tuned to doing the particular thing they do. The changes required seem simple and obvious in hindsight, but it's a common pattern for them to take years, so SIAI has been quite normal, or at the very least not unusually dumb.
(My evidence is seeing this pattern close-up in the Wikimedia Foundation, Wikimedia UK (the first attempt at which died before managing it, the second making it through barely) and the West Australian Music Industry Association, and anecdotal evidence from others. Everyone involved always feels stupid at having taken years to achieve the retrospectively obvious. I would be surprised if this aspect of the dynamics of nonprofits had not been studied.)
edit: Luke's recommendation of The Nonprofit Kit For Dummies looks like precisely the book all the examples I know of needed to have someone throw at them before they even thought of forming an organisation to do whatever it is they wanted to achieve.
Things that cost money:
I don't think this response supports your claim that these improvements "would not and could not have happened without more funding than the level of previous years."
I know your comment is very brief because you're busy at minicamp, but I'll reply to what you wrote, anyway: Someone of decent rationality doesn't just "try things until something works." Moreover, many of the things on the list of recent improvements don't require an Amy, a Luke, or a Louie.
I don't even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.
When I was made Executive Director and phoned our Advisors, most of them said "Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!"
That is the kind of thing that makes me want to say that SingInst has "tested every method except the method of trying."
Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping... these are all literally from the Nonprofits for Dummies book.
Maybe these things weren't done for 11 years because SI's decision-makers did make good plans but failed to execute them due to the usual defeaters. But that's not the history I've heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I've heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.
Money wasn't the barrier to doing many of those things, it was a gap in general rationality.
I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.
At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. (And I'm not the only SIer who felt this way at the time.)
But now I do feel comfortable asking people to donate to SingInst. I'm excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.
Luke has just told me (personal conversation) that what he got from my comment was, "SIAI's difficulties were just due to lack of funding" which was not what I was trying to say at all. What I was trying to convey was more like, "I didn't have the ability to run this organization, and knew this - people who I hoped would be able to run the organization, while I tried to produce in other areas (e.g. turning my back on everything else to get a year of FAI work done with Marcello or writing the Sequences) didn't succeed in doing so either - and the only reason we could hang on long enough to hire Luke was that the funding was available nonetheless and in sufficient quantity that we could afford to take risks like paying Luke to stay on for a while, well before we knew he would become Executive Director".
Does Luke disagree with this clarified point? I do not find a clear indicator in this conversation.
Update: I came out of a recent conversation with Eliezer with a higher opinion of Eliezer's general rationality, because several things that had previously looked to me like unforced, foreseeable mistakes by Eliezer now look to me more like non-mistakes or not-so-foreseeable mistakes.
You're allowed to say these things on the public Internet?
I just fell in love with SI.
Well, at our most recent board meeting I wasn't fired, reprimanded, or even questioned for making these comments, so I guess I am. :)
Well, all we really know is that he chose to. It may be that everyone he works with then privately berated him for it.
That said, I share your sentiment.
Actually, if SI generally endorses this sort of public "airing of dirty laundry," I encourage others involved in the organization to say so out loud.
It's Luke you should have fallen in love with, since he is the one turning things around.
On the other hand I can count with one hand the number of established organisations I know of that would be sociologically capable of ceding power, status and control to Luke the way SingInst did. They took an untrained intern with essentially zero external status from past achievements and affiliations and basically decided to let him run the show (at least in terms of publicly visible initiatives). It is clearly the right thing for SingInst to do and admittedly Luke is very tall and has good hair which generally gives a boost when it comes to such selections - but still, making the appointment goes fundamentally against normal human behavior.
(Where I say "count with one hand" I am not including the use of any digits thereupon. I mean one.)
It doesn't matter that I completely understand why this phrase was included, I still found it hilarious in a network sitcom sort of way.
Consider the implications in light of Holden Karnofsky's critique of SI's pretensions to high rationality.
Rationality is winning.
SI, at the same time as it was claiming extraordinary rationality, was behaving in ways that were blatantly irrational.
Although this is supposedly due to "the usual causes," rationality (winning) subsumes overcoming akrasia.
Holden Karnofsky is correct that SI made claims for its own extraordinary rationality at a time when its leaders weren't rational.
Further: why should anyone give SI credibility today—when it stands convicted of self-serving misrepresentation in the recent past?
You've misread the post - Luke is saying that he doesn't think the "usual defeaters" are the most likely explanation.
Correct.
As a minor note, observe that claims of extraordinary rationality do not necessarily contradict claims of irrationality. The sanity waterline is very low.
Do you mean to imply in context here that the organizational management of SIAI at the time under discussion was above average for a nonprofit organization? Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality? I certainly agree with the latter.
Just to let you know, you've just made it on my list of the very few LW regulars I no longer bother replying to, due to the proven futility of any communications. In your case it is because you have a very evident ax to grind, which is incompatible with rational thought.
This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn't he have an ax to grind? More of one really, since this ax chops trees of gold.
It would seem intellectual honesty would require that you say you reject discussions with people with an ax to grind, unless you grind a similar ax.
This makes me wonder... What "for dummies" books should I be using as checklists right now? Time to set a 5-minute timer and think about it.
What did you come up with?
I haven't actually found the right books yet, but these are the things for which I decided I should find some "for beginners" text. The important insight is that I'm allowed to use these books as skill/practice/task checklists or catalogues, rather than ever reading them all straight through.
General interest:
Career
Networking
Time management
Fitness
For my own particular professional situation, skills, and interests:
Risk management
Finance
Computer programming
SAS
Finance careers
Career change
Web programming
Research/science careers
Math careers
Appraising
Real Estate
UNIX
For fitness, I'd found Liam Rosen's FAQ (the 'sticky' from 4chan's /fit/ board) to be remarkably helpful and information-dense. (Mainly, 'toning' doesn't mean anything, and you should probably be lifting heavier weights in a linear progression, but it's short enough to be worth actually reading through.)
The For Dummies series is generally very good indeed. Yes.
The largest concern from reading this isn't really what it brings up in a management context, but what it says about SI in general. Here is an area where there's real expertise and basic books that discuss well-understood methods, and they didn't do any of that. Given that, how likely should I think it is that when SI and mainstream AI people disagree, part of the problem may be the SI people not paying attention to basics?
(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there's lots of evidence available as to how effective they are.
I doubt there's all that much of a correlation between these things to be honest.
Precisely. For an example of one existing base: the existing software that searches for solutions to engineering problems, such as 'self improvement' via design of better chips. It works within a narrowly defined field, to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone's contemporary paperclip maximizer? By how much?
Incredibly relevant to AI risk, but analysis can't be faked without really having technical expertise.
I remember that, when Anna was managing the fellows program, she was reading books of the "for dummies" genre and trying to apply them... it's just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were "what it takes to manage well" (i.e. "basic management") and "what it takes to be productive", rather than "what it takes to (help) operate a nonprofit according to best practices". So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization were trying to do, there wasn't really any cognitive space left over to effectively notice the possibility that those wouldn't be the skills that other people afterwards would complain nobody acquired and obviously should have known to acquire. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen's skill-learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)
I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI's current strategies with them and listened to their suggestions. But I don't know how much she went out of her way to find people she didn't already have reasonably reliable positive contact with, to get advice from them.
I don't know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the "everyone outside's psychological barriers" side of that, he was at least successful enough to keep SIAI's public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don't have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn't one of those things though.
But the proper approach to retrospective judgement is generally a confusing question.
The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn't be bigger than those of the other fires they were trying to put out.
There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn't on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options -- how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.
There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits) who would have been the one informally expected to have been monitoring for that sort of thing, and to have told someone about it if she saw something, out of the like three paid administrative slots at the time... well, yeah, that didn't happen.
I agree with a paraphrase of John Maxwell's characterization: "I'd rather hear Eliezer say 'thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be "]organizational best practices["]', because this seems like a better depiction of what actually happened." Note that this was most of the purpose of the Fellows program in the first place -- to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.
FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively given that that's what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was "too competent" and that I should go do something more useful with my talent, like start another business... not "waste my time working directly at SI."
Seems like a fair paraphrase.
This inspired me to make a blog post: You need to read Nonprofit Kit for Dummies.
... which Eliezer has read and responded to, noting he did indeed read just that book in 2000 when he was founding SIAI. This suggests having someone of Luke's remarkable drive was in fact the missing piece of the puzzle.
Fascinating! I want to ask "well, why didn't it take then?", but if I were in Eliezer's shoes I'd be finding this discussion almost unendurably painful right now, and it feels like what matters has already been established. And of course he's never been the person in charge of that sort of thing, so maybe he's not who we should be grilling anyway.
Obviously we need How to be Lukeprog for Dummies. Luke appears to have written many fragments for this, of course.
Beating oneself up with hindsight bias is IME quite normal in this sort of circumstance, but not actually productive. Grilling the people who failed makes it too easy to blame them personally, when it's a pattern I've seen lots and lots, suggesting the problem is not a personal failing.
Agreed entirely - it's definitely not a mark of a personal failing. What I'm curious about is how we can all learn to do better at the crucial rationalist skill of making use of the standard advice about prosaic tasks - which is manifestly a non-trivial skill.
That book looks like the basic solution to the pattern I outline here, and from your description, most people who have any public good they want to achieve should read it around the time they think of getting a second person involved.
Donald Rumsfeld
...this was actually a terrible policy in historical practice.
That only seems relevant if the war in question is optional.
Rumsfeld is speaking of the Iraq war. It was an optional war, the army turned out to be far understrength for establishing order, and they deliberately threw out the careful plans for preserving e.g. Iraqi museums from looting that had been drawn up by the State Department, due to interdepartmental rivalry.
This doesn't prove the advice is bad, but at the very least, Rumsfeld was just spouting off Deep Wisdom that he did not benefit from spouting; one would wish to see it spoken by someone who actually benefited from the advice, rather than someone who wilfully and wantonly underprepared for an actual war.
Indeed. The proper response, which is surely worth contemplation, would have been:
Sun Tzu
Given the several year lag between funding increases and the listed improvements, it appears that this was less a result of a prepared plan and more a process of underutilized resources attracting a mix of parasites (the theft) and talent (hopefully the more recent staff additions).
Which goes towards a critical question in terms of future funding: is SIAI primarily constrained in its mission by resources or competence?
Of course, the related question is: what is SIAI's mission? Someone donating primarily for AGI research might not count recent efforts (LW, rationality camps, etc) as improvements.
What should a potential donor expect from money invested into this organization going forward? Internally, what are your metrics for evaluation?
Edited to add: I think that the spin-off of the rationality efforts is a good step towards answering these questions.
This seems like a rather absolute statement. Knowing Luke, I'll bet he would've gotten some of it done even on a limited budget.
Luke and Louie Helm are both on paid staff.
I'm pretty sure their combined salaries are lower than the cost of the summer fellows program that SI was sponsoring four or five years ago. Also, if you accept my assertion that Luke could find a way to do it on a limited budget, why couldn't somebody else?
Givewell is interested in finding charities that translate good intentions into good results. This requires that the employees of the charity have low akrasia, desire to learn about and implement organizational best practices, not suffer from dysrationalia, etc. I imagine that from Givewell's perspective, it counts as a strike against the charity if some of the charity's employees have a history of failing at any of these.
I'd rather hear Eliezer say "thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and care about organizational best practices", because this seems like a better depiction of what actually happened. I don't get the impression SI was actively looking for folks like Louie and Luke.
Yes to this. Eliezer's claim about the need for funding may suffer many of Luke's criticisms above. But usually the most important thing you need is talent and that does require funding.