
Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky | 11 May 2012 04:31AM | 256 points


Comment author: lukeprog 11 May 2012 08:13:02AM *  54 points [-]

note that these improvements would not and could not have happened without more funding than the level of previous years

Really? That's not obvious to me. Of course you've been around for all this and I haven't, but here's what I'm seeing from my vantage point...

Recent changes that cost very little:

  • Donor database
  • Strategic plan
  • Monthly progress reports
  • A list of research problems SI is working on (it took me 16 hours to write)
  • IntelligenceExplosion.com, Friendly-AI.com, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk (each of these took me only 10-25 hours to create)
  • Detailed tracking of the expenses for major SI projects
  • Staff worklogs
  • Staff dinners (or something that brought staff together)
  • A few people keeping their eyes on SI's funds so theft would be caught sooner
  • Optimization of Google Adwords

Stuff that costs less than some other things SI had spent money on, such as funding Ben Goertzel's AGI research or renting downtown Berkeley apartments for the later visiting fellows:

  • Research papers
  • Management of staff and projects
  • Rachael Briggs' TDT write-up
  • Best-practices bookkeeping/accounting
  • New website
  • LaTeX template for SI publications; references checked and then organized with BibTeX
  • SEO

Do you disagree with these estimates, or have I misunderstood what you're claiming?

Comment author: Eliezer_Yudkowsky 12 May 2012 04:04:19AM 18 points [-]

Things that cost money:

  • Amy Willey
  • Luke Muehlhauser
  • Louie Helm
  • CfAR
  • trying things until something worked

Comment author: lukeprog 14 May 2012 10:07:06AM 65 points [-]

I don't think this response supports your claim that these improvements "would not and could not have happened without more funding than the level of previous years."

I know your comment is very brief because you're busy at minicamp, but I'll reply to what you wrote, anyway: Someone of decent rationality doesn't just "try things until something works." Moreover, many of the things on the list of recent improvements don't require an Amy, a Luke, or a Louie.

I don't even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.

When I was made Executive Director and phoned our Advisors, most of them said "Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!"

That is the kind of thing that makes me want to say that SingInst has "tested every method except the method of trying."

Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping... these are all literally from the Nonprofits for Dummies book.

Maybe these things weren't done for 11 years because SI's decision-makers did make good plans but failed to execute them due to the usual defeaters. But that's not the history I've heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I've heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.

Money wasn't the barrier to doing many of those things, it was a gap in general rationality.

I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.

At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. (And I'm not the only SIer who felt this way at the time.)

But now I do feel comfortable asking people to donate to SingInst. I'm excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.

Comment author: Eliezer_Yudkowsky 21 May 2012 04:29:45AM 32 points [-]

Luke has just told me (personal conversation) that what he got from my comment was, "SIAI's difficulties were just due to lack of funding" which was not what I was trying to say at all. What I was trying to convey was more like, "I didn't have the ability to run this organization, and knew this - people who I hoped would be able to run the organization, while I tried to produce in other areas (e.g. turning my back on everything else to get a year of FAI work done with Marcello or writing the Sequences) didn't succeed in doing so either - and the only reason we could hang on long enough to hire Luke was that the funding was available nonetheless and in sufficient quantity that we could afford to take risks like paying Luke to stay on for a while, well before we knew he would become Executive Director".

Comment author: Will_Sawin 12 June 2012 05:23:10AM 1 point [-]

Does Luke disagree with this clarified point? I do not find a clear indicator in this conversation.

Comment author: lukeprog 28 August 2013 07:40:42PM *  10 points [-]

Update: I came out of a recent conversation with Eliezer with a higher opinion of Eliezer's general rationality, because several things that had previously looked to me like unforced, foreseeable mistakes by Eliezer now look to me more like non-mistakes or not-so-foreseeable mistakes.

Comment author: MarkusRamikin 14 May 2012 03:41:32PM 28 points [-]

You're allowed to say these things on the public Internet?

I just fell in love with SI.

Comment author: lukeprog 26 May 2012 12:33:50AM *  21 points [-]

You're allowed to say these things on the public Internet?

Well, at our most recent board meeting I wasn't fired, reprimanded, or even questioned for making these comments, so I guess I am. :)

Comment author: TheOtherDave 14 May 2012 04:20:43PM 8 points [-]

Well, all we really know is that he chose to. It may be that everyone he works with then privately berated him for it.
That said, I share your sentiment.
Actually, if SI generally endorses this sort of public "airing of dirty laundry," I encourage others involved in the organization to say so out loud.

Comment author: shminux 14 May 2012 06:04:43PM 18 points [-]

I just fell in love with SI.

It's Luke you should have fallen in love with, since he is the one turning things around.

Comment author: wedrifid 26 May 2012 02:24:14AM 43 points [-]

It's Luke you should have fallen in love with, since he is the one turning things around.

On the other hand, I can count with one hand the number of established organisations I know of that would be sociologically capable of ceding power, status and control to Luke the way SingInst did. They took an untrained intern with essentially zero external status from past achievements and affiliations and basically decided to let him run the show (at least in terms of publicly visible initiatives). It is clearly the right thing for SingInst to do, and admittedly Luke is very tall and has good hair, which generally gives a boost when it comes to such selections, but still, making the appointment goes fundamentally against normal human behavior.

(Where I say "count with one hand" I am not including the use of any digits thereupon. I mean one.)

Comment author: Matt_Simpson 19 July 2012 07:05:00PM 7 points [-]

...and admittedly Luke is very tall and has good hair which generally gives a boost when it comes to such selections...

It doesn't matter that I completely understand why this phrase was included, I still found it hilarious in a network sitcom sort of way.

Comment author: [deleted] 14 May 2012 07:58:32PM *  0 points [-]

Consider the implications in light of HoldenKarnofsky's critique of SI's pretensions to high rationality.

  1. Rationality is winning.

  2. SI, at the same time as it was claiming extraordinary rationality, was behaving in ways that were blatantly irrational.

  3. Although this is supposedly due to "the usual defeaters," rationality (winning) subsumes overcoming akrasia.

  4. HoldenKarnofsky is correct that SI made claims for its own extraordinary rationality at a time when its leaders weren't rational.

  5. Further: why should anyone give SI credibility today—when it stands convicted of self-serving misrepresentation in the recent past?

Comment author: ciphergoth 15 May 2012 06:26:06AM 5 points [-]

You've misread the post - Luke is saying that he doesn't think the "usual defeaters" are the most likely explanation.

Comment author: lukeprog 25 May 2012 05:42:34PM 3 points [-]


Comment author: thomblake 14 May 2012 08:03:44PM 5 points [-]

As a minor note, observe that claims of extraordinary rationality do not necessarily contradict claims of irrationality. The sanity waterline is very low.

Comment author: TheOtherDave 14 May 2012 09:12:55PM 5 points [-]

Do you mean to imply in context here that the organizational management of SIAI at the time under discussion was above average for a nonprofit organization? Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality? I certainly agree with the latter.

Comment author: ciphergoth 15 May 2012 06:30:46AM 9 points [-]

Are you comparing it to the average among nonprofits started, or nonprofits extant? I would guess that it was well below average for extant nonprofits, but about or slightly above average for started nonprofits. I'd guess that most nonprofits are started by people who don't know what they're doing and don't know what they don't know, and that SI probably did slightly better because the people who were being a bit stupid were at least very smart, which can help. However, I'd guess that most such nonprofits don't live long because they don't find a Peter Thiel to keep them alive.

Comment author: David_Gerard 16 May 2012 11:07:48AM 6 points [-]

Your assessment looks about right to me. I have considerable experience of averagely-incompetent nonprofits, and SIAI looks normal to me. I am strongly tempted to grab that "For Dummies" book and, if it's good, start sending copies to people ...

Comment author: TheOtherDave 15 May 2012 12:44:48PM 0 points [-]

In the context of thomblake's comment, I suppose nonprofits started is the proper reference class.

Comment author: thomblake 15 May 2012 01:51:19PM 0 points [-]

Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality?

Yes, this.

On an arbitrary scale I just made up, below 100 degrees of rationality is "irrational", and 0 degrees of rationality is "ordinary". 50 is extraordinarily rational and yet irrational.

Comment author: shminux 14 May 2012 08:10:09PM *  0 points [-]

Just to let you know, you've just made it on my list of the very few LW regulars I no longer bother replying to, due to the proven futility of any communications. In your case it is because you have a very evident ax to grind, which is incompatible with rational thought.

Comment author: metaphysicist 14 May 2012 08:34:42PM 1 point [-]

This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn't he have an ax to grind? More of one really, since this ax chops trees of gold.

It would seem intellectual honesty would require that you say you reject discussions with people with an ax to grind, unless you grind a similar ax.

Comment author: shminux 14 May 2012 08:46:21PM *  1 point [-]

From http://www.usingenglish.com: "If you have an axe to grind with someone or about something, you have a grievance, a resentment and you want to get revenge or sort it out." One can hardly call the unacknowledged emotions of resentment and the need for revenge/retribution compatible with rationality. srdiamond piled up a bunch of (partially correct but irrelevant in the context of my comment) negative statements about SI, making these emotions quite clear.

Comment author: metaphysicist 14 May 2012 09:17:48PM 0 points [-]

That's a restrictive definition of "ax to grind," by the way—it's normally used to mean any special interest in the subject: "an ulterior often selfish underlying purpose <claims that he has no ax to grind in criticizing the proposed law>" (Merriam-Webster's Collegiate Dictionary)

But I might as well accept your meaning for discussion purposes. If you detect unacknowledged resentment in srdiamond, don't you detect unacknowledged ambition in Eliezer Yudkowsky?

There's actually good reason for the broader meaning of "ax to grind." Any special stake is a bias. I don't think you can say that someone who you think acts out of resentment, like srdiamond, is more intractably biased than someone who acts out of other forms of narrow self-interest, which almost invariably applies when someone defends something he gets money from.

I don't think it's a rational method to treat people differently, as inherently less rational, when they seem resentful. It is only one of many difficult biases. Financial interest is probably more biasing. If you think the arguments are crummy, that's something else. But the motive--resentment or finances--should probably have little bearing on how a message is treated in serious discussion.

Comment author: Benquo 14 May 2012 02:21:30PM *  17 points [-]

This makes me wonder... What "for dummies" books should I be using as checklists right now? Time to set a 5-minute timer and think about it.

Comment author: [deleted] 26 May 2012 11:38:50PM 5 points [-]

What did you come up with?

Comment author: Benquo 28 May 2012 09:02:01PM *  4 points [-]

I haven't actually found the right books yet, but these are the areas where I decided I should find some "for beginners" text. The important insight is that I'm allowed to use these books as skill/practice/task checklists or catalogues, rather than ever reading them all straight through.

General interest:

  • Career

  • Networking

  • Time management

  • Fitness

For my own particular professional situation, skills, and interests:

  • Risk management

  • Finance

  • Computer programming

  • SAS

  • Finance careers

  • Career change

  • Web programming

  • Research/science careers

  • Math careers

  • Appraising

  • Real Estate

  • UNIX

Comment author: grendelkhan 28 March 2013 02:43:27PM 0 points [-]

For fitness, I'd found Liam Rosen's FAQ (the 'sticky' from 4chan's /fit/ board) to be remarkably helpful and information-dense. (Mainly, 'toning' doesn't mean anything, and you should probably be lifting heavier weights in a linear progression, but it's short enough to be worth actually reading through.)

Comment author: David_Gerard 14 May 2012 03:32:38PM 0 points [-]

The For Dummies series is generally very good indeed. Yes.

Comment author: JoshuaZ 14 May 2012 03:44:03PM 15 points [-]

The largest concern from reading this isn't really what it brings up in the management context, but what it says about SI in general. Here is an area where there's real expertise, and basic books that discuss well-understood methods, and they didn't use any of that. Given that, how likely should I think it is that, when SI and mainstream AI people disagree, part of the problem is the SI people not paying attention to the basics?

Comment author: TheOtherDave 14 May 2012 04:17:42PM 4 points [-]

(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there's lots of evidence available as to how effective they are.

Comment author: ciphergoth 21 May 2012 06:06:19PM 1 point [-]

I doubt there's all that much of a correlation between these things to be honest.

Comment author: private_messaging 16 May 2012 01:43:25PM *  0 points [-]

Precisely. For one example of an existing baseline: the existing software that searches for solutions to engineering problems, such as 'self improvement' via design of better chips. It works within a narrowly defined field to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone's contemporary paperclip maximizer? By how much?

This is incredibly relevant to AI risk, but the analysis can't be faked without real technical expertise.

Comment author: Steve_Rayhawk 21 October 2012 10:10:58AM *  13 points [-]

these are all literally from the Nonprofits for Dummies book. [...] The history I've heard is that SI [...]


failed to read Nonprofits for Dummies,

I remember that, when Anna was managing the fellows program, she was reading books of the "for dummies" genre and trying to apply them... it's just that, as it happened, the conceptual labels she happened to give to the skill deficits she was aware of were "what it takes to manage well" (i.e. "basic management") and "what it takes to be productive", rather than "what it takes to (help) operate a nonprofit according to best practices". So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization were trying to do, there wasn't really any cognitive space left over to notice the possibility that those wouldn't be the skills that other people would afterwards complain that nobody had acquired and obviously should have known to acquire. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen's skill-learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)

failed to ask advisors for advice,

I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI's current strategies with them and listened to their suggestions. But I don't know how much she went out of her way to find people she didn't already have reasonably reliable positive contact with, to get advice from them.

I don't know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the "everyone outside's psychological barriers" side of that, he was at least successful enough to keep SIAI's public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don't have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn't one of those things though.

But the proper approach to retrospective judgement is generally a confusing question.

the kind of thing that makes me want to say [. . .]

The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn't be bigger than those of the other fires they were trying to put out.

strategic plan [...] SI failed to make these kinds of plans in the first place,

There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn't on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options -- how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.

expenses tracking, funds monitoring [...] some funds monitoring was insisted upon after the large theft

There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits), filling one of the roughly three paid administrative slots at the time, who would have been the one informally expected to monitor for that sort of thing, and to tell someone about it if she saw something... well, yeah, that didn't happen.

I agree with a paraphrase of John Maxwell's characterization: "I'd rather hear Eliezer say 'thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be "]organizational best practices["]', because this seems like a better depiction of what actually happened." Note that this was most of the purpose of the Fellows program in the first place -- to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.

Comment author: Louie 18 November 2012 10:04:40AM 8 points [-]

Note that this was most of the purpose of the Fellows program in the first place -- [...] to help sort/develop those people into useful roles, including replacing existing management

FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively, given that that's what ended up happening. For instance, the internal point system used to score people in the VF program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was "too competent" and that I should go do something more useful with my talent, like start another business... not "waste my time working directly at SI."

Comment author: John_Maxwell_IV 19 December 2012 01:31:42PM 1 point [-]

"I'd rather hear Eliezer say 'thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be "]organizational best practices["]', because this seems like a better depiction of what actually happened."

Seems like a fair paraphrase.

Comment author: David_Gerard 26 May 2012 11:32:43PM 6 points [-]

This inspired me to make a blog post: You need to read Nonprofit Kit for Dummies.

Comment author: David_Gerard 27 May 2012 08:02:08AM 5 points [-]

... which Eliezer has read and responded to, noting he did indeed read just that book in 2000 when he was founding SIAI. This suggests having someone of Luke's remarkable drive was in fact the missing piece of the puzzle.

Comment author: ciphergoth 27 May 2012 09:26:28AM 4 points [-]

Fascinating! I want to ask "well, why didn't it take then?", but if I were in Eliezer's shoes I'd be finding this discussion almost unendurably painful right now, and it feels like what matters has already been established. And of course he's never been the person in charge of that sort of thing, so maybe he's not who we should be grilling anyway.

Comment author: David_Gerard 27 May 2012 10:22:17AM *  8 points [-]

Obviously we need How to be Lukeprog for Dummies. Luke appears to have written many fragments for this, of course.

Beating oneself up with hindsight bias is IME quite normal in this sort of circumstance, but not actually productive. Grilling the people who failed makes it too easy to blame them personally, when it's a pattern I've seen lots and lots, suggesting the problem is not a personal failing.

Comment author: ciphergoth 27 May 2012 11:23:11AM 4 points [-]

Agreed entirely - it's definitely not a mark of a personal failing. What I'm curious about is how we can all learn to do better at the crucial rationalist skill of making use of the standard advice about prosaic tasks - which is manifestly a non-trivial skill.

Comment author: David_Gerard 27 May 2012 01:52:32PM *  2 points [-]

The Bloody Obvious For Dummies. If only common sense were!

From the inside (of a subcompetent charity - and I must note, subcompetent charities know they're subcompetent), it feels like there's all this stuff you're supposed to magically know about, and lots of "shut up and do the impossible" moments. And you do the small very hard things, in a sheer tour de force of remarkable effort. But it leads to burnout. Until the organisation makes it to competence and the correct paths are retrospectively obvious.

That actually reads to me like descriptions I've seen of the startup process.

Comment author: private_messaging 27 May 2012 02:39:58PM *  -1 points [-]

The problem is that there are two efficiencies/competences here: efficiency at doing the accounting correctly, which is comparatively easy, and efficiency at actually doing relevant novel technical work that matters. For the former you can get advice from books; for the latter you won't get any advice, it's a harder problem, and the typical level of performance is exactly zero (even for those who get the first part right). The difference in difficulty is larger than that between building a robot kit by following instructions and designing a ground-breaking new robot and making a billion dollars off it.

The best advice for the vast majority of startups is: dissolve the startup and get normal jobs, starting tomorrow. The best advice for all of them is to take a very good look at themselves, knowing that the most likely conclusion should be "dissolve and get normal jobs". The failed startups I've seen so far were propelled by pure, unfounded belief in themselves (like in a movie where someone doesn't want to jump, another says "yes, you can do it!", and the person jumps -- but rather than sending a positive message by clearing the gap and surviving, falls to instant death, while the fire the person was running away from just goes out). The successful startups, on the other hand, had very well-founded belief in themselves (good track record, attainable goals), or started from a hobby project that took off.

Comment author: David_Gerard 14 May 2012 03:30:18PM 2 points [-]

That book looks like the basic solution to the pattern I outline here, and from your description, most people who have any public good they want to achieve should read it around the time they think of getting a second person involved.

Comment author: lukeprog 15 July 2012 10:57:25PM *  1 point [-]

You go to war with the army you have, not the army you might want.

Donald Rumsfeld

Comment author: Eliezer_Yudkowsky 15 July 2012 11:38:21PM 7 points [-]

...this was actually a terrible policy in historical practice.

Comment author: Vaniver 16 July 2012 12:16:19AM 2 points [-]

That only seems relevant if the war in question is optional.

Comment author: Eliezer_Yudkowsky 16 July 2012 02:09:44AM 5 points [-]

Rumsfeld is speaking of the Iraq war. It was an optional war, the army turned out to be far understrength for establishing order, and they deliberately threw out the careful plans for preserving e.g. Iraqi museums from looting that had been drawn up by the State Department, due to interdepartmental rivalry.

This doesn't prove the advice is bad, but at the very least, Rumsfeld was just spouting off Deep Wisdom that he did not benefit from spouting; one would wish to see it spoken by someone who actually benefited from the advice, rather than someone who wilfully and wantonly underprepared for an actual war.

Comment author: Vaniver 16 July 2012 02:27:10AM 8 points [-]

just spouting off Deep Wisdom that he did not benefit from spouting

Indeed. The proper response, which is surely worth contemplation, would have been:

Victorious warriors win first and then go to war, while defeated warriors go to war first and then seek to win.

Sun Tzu