lukeprog comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comments (1270)

You are viewing a single comment's thread.

Comment author: lukeprog 10 May 2012 09:24:19PM *  62 points [-]

Update: My full response to Holden is now here.

As Holden said, I generally think that Holden's objections for SI "are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI)," and we are working hard to fix both categories of issues.

In this comment I would merely like to argue for one small point: that the Singularity Institute is undergoing comprehensive changes — changes which I believe to be improvements that will help us to achieve our mission more efficiently and effectively.

Holden wrote:

I'm aware that SI has relatively new leadership that is attempting to address the issues behind some of my complaints. I have a generally positive impression of the new leadership; I believe the Executive Director and Development Director, in particular, to represent a step forward in terms of being interested in transparency and in testing their own general rationality. So I will not be surprised if there is some improvement in the coming years...

Louie Helm was hired as Director of Development in September 2011. I was hired as a Research Fellow that same month, and made Executive Director in November 2011. Below are some changes made since September. (Pardon the messy presentation: LW cannot correctly render tables in comments.)

SI before Sep. 2011: Very few peer-reviewed research publications.
SI today: More peer-reviewed publications coming in 2012 than in all past years combined. Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.

SI before Sep. 2011: No donor database / a very broken one.
SI today: A comprehensive donor database.

SI before Sep. 2011: Nearly all work performed directly by SI staff.
SI today: Most work outsourced to remote collaborators so that SI staff can focus on the things that only they can do.

SI before Sep. 2011: No strategic plan.
SI today: A strategic plan developed with input from all SI staff, and approved by the Board.

SI before Sep. 2011: Very little communication about what SI is doing.
SI today: Monthly progress reports, plus three Q&As with Luke about SI research and organizational development.

SI before Sep. 2011: No list of the research problems SI is working on.
SI today: A long, fully-referenced list of research problems SI is working on.

SI before Sep. 2011: Very little direct management of staff and projects.
SI today: Luke monitors all projects and staff work, and meets regularly with each staff member.

SI before Sep. 2011: Almost no detailed tracking of the expense of major SI projects (e.g. Summit, papers, etc.). The sole exception seems to be that Amy was tracking the costs of the 2011 Summit in NYC.
SI today: Detailed tracking of the expense of major SI projects for which this is possible (Luke has a folder in Google docs for these spreadsheets, and the summary spreadsheet is shared with the Board).

SI before Sep. 2011: No staff worklogs.
SI today: All staff members share their worklogs with Luke, Luke shares his worklog with all staff plus the Board.

SI before Sep. 2011: Best practices not followed for bookkeeping/accounting; accountant's recommendations ignored.
SI today: Meetings with consultants about bookkeeping/accounting; currently working with our accountant to implement best practices and find a good bookkeeper.

SI before Sep. 2011: Staff largely separated, many of them not well-connected to the others.
SI today: After a dozen or so staff dinners, staff much better connected, more of a team.

SI before Sep. 2011: Want to see the basics of AI Risk explained in plain language? Read The Sequences (more than a million words) or this academic book chapter by Yudkowsky.
SI today: Want to see the basics of AI Risk explained in plain language? Read Facing the Singularity (now in several languages, with more being added) or listen to the podcast version.

SI before Sep. 2011: Very few resources created to support others' research in AI risk.
SI today: IntelligenceExplosion.com, Friendly-AI.com, list of open problems in the field, with references, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk.

SI before Sep. 2011: A hard-to-navigate website with much outdated content.
SI today: An entirely new website that is easier to navigate and has much new content (nearly complete; should launch in May or June).

SI before Sep. 2011: So little monitoring of funds that $118k was stolen in 2010 before SI noticed. (Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.)
SI today: Our bank accounts have been consolidated, with 3-4 people regularly checking over them.

SI before Sep. 2011: SI publications exported straight to PDF from Word or Google Docs, sometimes without even author names appearing.
SI today: All publications being converted into slick, useable LaTeX template (example), with all references checked and put into a central BibTeX file.

SI before Sep. 2011: No write-up of our major public technical breakthrough (TDT) using the mainstream format and vocabulary comprehensible to most researchers in the field (this is what we have at the moment).
SI today: Philosopher Rachael Briggs, whose papers on decision theory have been twice selected for the Philosopher's Annual, has been contracted to write an explanation of TDT and publish it in one of a select few leading philosophy journals.

SI before Sep. 2011: No explicit effort made toward efficient use of SEO or our (free) Google Adwords.
SI today: Highly optimized use of Google Adwords to direct traffic to our sites; currently working with SEO consultants to improve our SEO (of course, the new website will help).

(Just to be clear, I think this list shows not that "SI is looking really great!" but instead that "SI is rapidly improving and finally reaching a 'basic' level of organizational function.")

Comment author: lukeprog 11 May 2012 02:54:28AM *  22 points [-]

...which is not to say, of course, that things were not improving before September 2011. It's just that the improvements have accelerated quite a bit since then.

For example, Amy was hired in December 2009 and is largely responsible for these improvements:

  • Built a "real" Board and officers; launched monthly Board meetings in February 2010.
  • Began compiling monthly financial reports in December 2010.
  • Began tracking Summit expenses and seeking Summit sponsors.
  • Played a major role in canceling many programs and expenses that were deemed low ROI.
Comment author: [deleted] 11 May 2012 04:25:54AM *  9 points [-]

Our bank accounts have been consolidated, with 3-4 people regularly checking over them.

In addition to reviews, should SI implement a two-man rule for manipulating large quantities of money? (For example, over 5k, over 10k, etc.)

Comment author: JoshuaFox 17 May 2012 03:12:28PM 4 points [-]

As a supporter and donor to SI since 2006, I can say that I had a lot of specific criticisms of the way that the organization was managed. The points Luke lists above were among them. I was surprised that on many occasions management did not realize the obvious problems and fix them.

But the current management is now recognizing many of these points and resolving them one by one, as Luke says. If this continues, SI's future looks good.

Comment author: [deleted] 11 May 2012 08:18:32AM *  5 points [-]

I was hired as a Research Fellow that same month

Luke alone has a dozen papers in development

Why did you start referring to yourself in the first person and then change your mind? (Or am I missing something?)

Comment author: lukeprog 11 May 2012 08:20:33AM *  9 points [-]

Brain fart: now fixed.

Comment author: [deleted] 11 May 2012 08:27:14AM *  18 points [-]

(Why was this downvoted? If it's because the downvoter wants to see fewer brain farts, they're doing it wrong, because the message such a downvote actually conveys is that they want to see fewer acknowledgements of brain farts. Upvoted back to 0, anyway.)

Comment author: siodine 11 May 2012 01:35:22PM 4 points [-]

Isn't this very strong evidence in support for Holden's point about "Apparent poorly grounded belief in SI's superior general rationality" (excluding Luke, at least)? And especially this?

Comment author: lukeprog 11 May 2012 08:13:20PM *  18 points [-]

This topic is something I've been thinking about lately. Do SIers tend to have superior general rationality, or do we merely escape a few particular biases? Are we good at rationality, or just good at "far mode" rationality (aka philosophy)? Are we good at epistemic but not instrumental rationality? (Keep in mind, though, that rationality is only a ceteris paribus predictor of success.)

Or, pick a more specific comparison. Do SIers tend to be better at general rationality than someone who can keep a small business running for 5 years? Maybe the tight feedback loops of running a small business are better rationality training than "debiasing interventions" can hope to be.

Of course, different people are more or less rational in different domains, at different times, in different environments.

This isn't an idle question about labels. My estimate of the scope and level of people's rationality in part determines how much I update from their stated opinion on something. How much evidence for Hypothesis X (about organizational development) is it when Eliezer gives me his opinion on the matter, as opposed to when Louie gives me his opinion on the matter? When Person B proposes to take on a totally new kind of project, I think their general rationality is a predictor of success — so, what is their level of general rationality?

Comment author: Bugmaster 11 May 2012 10:49:28PM 2 points [-]

Are we good at epistemic but not instrumental rationality?

Holden implies (and I agree with him) that there's very little evidence at the moment to suggest that SI is good at instrumental rationality. As for epistemic rationality, how would we know ? Is there some objective way to measure it ? I personally happen to believe that if a person seems to take it as a given that he's great at epistemic rationality, this fact should count as evidence (however circumstantial) against him being great at epistemic rationality... but that's just me.

Comment author: TheOtherDave 11 May 2012 09:10:55PM 1 point [-]

If you accept that your estimate of someone's "rationality" should depend on the domain, the environment, the time, the context, etc... and what you want to do is make reliable estimates of the reliability of their opinion, their chances of success. etc... it seems to follow that you should be looking for comparisons within a relevant domain, environment, etc.

That is, if you want to get opinions about hypothesis X about organizational development that serve as significant evidence, it seems the thing to do is to find someone who knows a lot about organizational development -- ideally, someone who has been successful at developing organizations -- and consult their opinions. How generally rational they are might be very relevant causally, or it might not, but is in either case screened off by their domain competence... and their domain competence is easier to measure than their general rationality.

So is their general rationality worth devoting resources to determining?

It seems this only makes sense if you have already (e.g.) decided to ask Eliezer and Louie for their advice, whether it's good evidence or not, and now you need to know how much evidence it is, and you expect the correct answer is different from the answer you'd get by applying the metrics you know about (e.g., domain familiarity and previously demonstrated relevant expertise).

Comment author: lukeprog 11 May 2012 09:55:52PM 3 points [-]

I do spend a fair amount of time talking to domain experts outside of SI. The trouble is that the question of what we should do about thing X doesn't just depend on domain competence but also on thousands of details about the inner workings of SI and our mission that I cannot communicate to domain experts outside SI, but which Eliezer and Louie already possess.

Comment author: TheOtherDave 11 May 2012 10:14:49PM 4 points [-]

So it seems you have a problem in two domains (organizational development + SI internals) and different domain experts in both domains (outside domain experts + Eliezer/Louie), and need some way of cross-linking the two groups' expertise to get a coherent recommendation, and the brute-force solutions (e.g. get them all in a room together, or bring one group up to speed on the other's domain) are too expensive to be worth it. (Well, assuming the obstacle isn't that the details need to be kept secret, but simply that expecting an outsider to come up to speed on all of SI's local potentially relevant trivia simply isn't practical.)

Yes?

Yeah, that can be a problem.

In that position, for serious questions I would probably ask E/L for their recommendations and a list of the most relevant details that informed that decision, then go to outside experts with a summary of the competing recommendations and an expanded version of that list and ask for their input. If there's convergence, great. If there's divergence, iterate.

This is still a expensive approach, though, so I can see where a cheaper approximation for less important questions is worth having.

Comment author: lukeprog 11 May 2012 10:18:53PM 2 points [-]

Yes to all this.

Comment author: siodine 11 May 2012 11:08:47PM -1 points [-]

In the world in which a varied group of intelligent and especially rational people are organizing to literally save humanity, I don't see the relatively trivial, but important, improvements you've made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you've made.

I mean, the question this group should be asking themselves is "how can we best alter the future so as to navigate towards FAI?" So, how did they apparently miss something like opportunity cost? Why, for instance, has their salaries increased when they could've been using it to improve the foundation of their cause from which everything else follows?

(Granted, I don't know the history and inner workings of the SI, and so I could be missing some very significant and immovable hurdles, but I don't see that as very likely; at least, not as likely as Holden's scenario.)

Comment author: lukeprog 11 May 2012 11:18:25PM 4 points [-]

I don't see the relatively trivial, but important, improvements you've made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you've made.

I don't know what these sentences mean.

So, how did they apparently miss something like opportunity cost? Why, for instance, has their salaries increased when they could've been using it to improve the foundation of their cause from which everything else follows?

Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts. Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.

Comment author: siodine 12 May 2012 12:01:50AM *  3 points [-]

I don't know what these sentences mean.

  • People are more rational in different domains, environments, and so on.
  • The people at SI may have poor instrumental rationality while being adept at epistemic rationality.
  • Being rational doesn't necessarily mean being successful.

I accept all those points, and yet I still see the Singularity Institute having made the improvements that you've made since being hired before you were hired if they have superior general rationality. That is, you wouldn't have that list of relatively trivial things to brag about because someone else would have recognized the items on that list as important and got them done somehow (ignore any negative connotations--they're not intended).

For instance, I don't see a varied group of people with superior general rationality not discovering or just not outsourcing work they don't have a comparative advantage in (i.e., what you've done). That doesn't look like just a failure in instrumental rationality, or just rationality operating on a different kind of utility function, or just a lack of domain specific knowledge.

The excuses available to a person acting in a way that's non-traditionally rational are less convincing when you apply them to a group.

Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts.

No, I get that. But that still doesn't explain away the higher salaries like EY's 80k/year and its past upwards trend. I mean, these higher paid people are the most committed to the cause, right? I don't see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being superior in general rationality. It's like a homeless person desperately in want of shelter trying save enough for an apartment and yet buying meals at some restaurant.

Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.

That's the point I was making, why wasn't that done earlier? How did these people apparently miss out on opportunity cost? (And I'm just using outsourcing as an example because it was one of the most glaring changes you made that I think should have probably been made much earlier.)

Comment author: lukeprog 12 May 2012 12:20:39AM 4 points [-]

Right, I think we're saying the same thing, here: the availability of so much low-hanging fruit in organizational development as late as Sept. 2011 is some evidence against the general rationality of SIers. Eliezer seems to want to say it was all a matter of funding, but that doesn't make sense to me.

Now, on this:

I don't see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being super in general rationality.

For some reason I'm having a hard time parsing your sentences for unambiguous meaning, but if I may attempt to rephrase: "SIers wouldn't take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things." Is that what you're saying?

Comment author: Rain 12 May 2012 12:29:53AM *  3 points [-]

I've heard the Bay Area is expensive, and previously pointed out that Eliezer earns more than I do, despite me being in the top 10 SI donors.

I don't mind, though, <joke> as has been pointed out, even thinking about muffins might be a question invoking existential risk calculations. </joke>

Comment author: lukeprog 12 May 2012 12:39:54AM *  4 points [-]

despite me being in the top 10 SI donors

...and much beloved for it.

Yes, the Bay Area is expensive. We've considered relocating, but on the other hand the (by far) best two places for meeting our needs in HR and in physically meeting with VIPs are SF and NYC, and if anything NYC is more expensive than the Bay Area. We cut living expenses where we can: most of us are just renting individual rooms.

Also, of course, it's not like the Board could decide we should relocate to a charter city in Honduras and then all our staff would be able to just up and relocate. :)

(Rain may know all this; I'm posting it for others' benefit.)

Comment author: komponisto 12 May 2012 06:58:03PM 12 points [-]

I think it's crucial that SI stay in the Bay Area. Being in a high-status place signals that the cause is important. If you think you're not taken seriously enough now, imagine if you were in Honduras...

Not to mention that HR is without doubt the single most important asset for SI. (Which is why it would probably be a good idea to pay more than the minimum cost of living.)

Comment author: TheOtherDave 12 May 2012 01:31:59AM 2 points [-]

Out of curiosity only: what were the most significant factors that led you to reject telepresence options?

Comment author: siodine 12 May 2012 12:34:35AM 0 points [-]

some evidence

Enough for you to agree with Holden on that point?

"SIers wouldn't take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things." Is that what you're saying?

Yes, but I wouldn't set a limit at a specific salary range; I'd expect them to give as much as they optimally could, because I assume they're more concerned with the cause than the money. (re the 70k/yr mention: I'd be surprised if that was anywhere near optimal)

Comment author: lukeprog 12 May 2012 12:46:18AM 2 points [-]

Enough for you to agree with Holden on that point?

Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.

Yes, but I wouldn't set a limit at a specific salary range; I'd expect them to give as much as they optimally could, because I assume they're more concerned with the cause than the money. (re the 70k/yr mention: I'd be surprised if that was anywhere near optimal)

I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.

Comment author: siodine 12 May 2012 01:37:39AM 4 points [-]

Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.

So, if you disagree with Holden, I assume you think SIers have superior general rationality: why?

And I'm confident SIers will score well on rationality tests, but that looks like specialized rationality. I.e., you can avoid a bias but you can't avoid a failure in your achieving your goals. To me, the SI approach seems poorly leveraged. I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? YOU WANT TO WIN?! Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.

I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.

That isn't as bad as I thinking it was; I don't know if that's optimal, but it seems at least reasonable.

Comment author: komponisto 12 May 2012 02:04:06AM *  2 points [-]

(Disclaimer: the following comment should not be taken to imply that I myself have concluded that SI staff salaries should be reduced.)

I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.

I'll grant you that it's pretty low relative to other Bay Area salaries. But as for the actual cost of living, I'm less sure.

I'm not fortunate enough to be a Bay Area resident myself, but here is what the internet tells me:

  • After taxes, a $48,000/yr gross salary in California equates to a net of around $3000/month.

  • A 1-bedroom apartment in Berkeley and nearby places can be rented for around $1500/month. (Presumably, this is the category of expense where most of the geography-dependent high cost of living is contained.)

  • If one assumes an average spending of $20/day on food (typically enough to have at least one of one's daily meals at a restaurant), that comes out to about $600/month.

  • That leaves around $900/month for miscellaneous expenses, which seems pretty comfortable for a young person with no dependents.

So, if these numbers are right, it seems that this salary range is actually right about what the cost of living is. Of course, this calculation specifically does not include costs relating to signaling (via things such as choices of housing, clothing, transportation, etc.) that one has more money than necessary to live (and therefore isn't low-status). Depending on the nature of their job, certain SI employees may need, or at least find it distinctly advantageous for their particular duties, to engage in such signaling.

Comment author: Rain 12 May 2012 12:10:54AM *  2 points [-]

To summarize and rephrase: in a "counterfactual" world where SI was actually rational, they would have found all these solutions and done all these things long ago.

Comment author: komponisto 12 May 2012 12:47:08AM *  2 points [-]

Many of your sentences are confusing because you repeatedly use the locution "I see X"/ "I don't see X" in a nonstandard way, apparently to mean "X would have happened" /"X would not have happened".

This is not the way that phrase is usually understood. Normally, "I see X" is taken to mean either "I observe X" or "I predict X". For example I might say (if I were so inclined):

Unlike you, I see a lot of rationality being demonstrated by SI employees.

meaning that I believe (from my observation) they are in fact being rational. Or, I might say:

I don't see Luke quitting his job at SI tomorrow to become a punk rocker.

meaning that I don't predict that will happen. But I would not generally say:

* I don't see these people taking a higher salary.

if what I mean is "these people should/would not have taken a higher salary [if such-and-such were true]".

Comment author: siodine 12 May 2012 01:04:35AM *  2 points [-]

Oh, I see ;) Thanks. I'll definitely act on your comment, but I was using "I see X" as "I predict X"--just in the context of a possible world. E.g., I predict in the possible world in which SIers are superior in general rationality and committed to their cause, Luke wouldn't have that list of accomplishments. Or, "yet I still see the Singularity Institute having made the improvements..."

I now see that I've been using 'see' as syntactic sugar for counterfactual talk... but no more!

Comment author: komponisto 12 May 2012 01:21:01AM *  2 points [-]

I was using "I see X" as "I predict X"--just in the context of a possible world.

To get away with this, you really need, at minimum, an explicit counterfactual clause ("if", "unless", etc.) to introduce it: "In a world where SIers are superior in general rationality, I don't see Luke having that list of accomplishments."

The problem was not so much that your usage itself was logically inconceivable, but rather that it collided with the other interpretations of "I see X" in the particular contexts in which it occurred. E.g. "I don't see them taking higher salaries" sounded like you were saying that they weren't taking higher salaries. (There was an "if" clause, but it came way too late!)

Comment author: [deleted] 12 May 2012 07:19:52AM *  -2 points [-]

And our salaries are generally still pretty low.

By what measure do you figure that?

I have less than $6k in my bank accounts.

That might be informative if we knew anything about your budget, but without any sort of context it sounds purely obfuscatory. (Also, your bank account is pretty close to my annual salary, so you might want to consider what you're actually signalling here and to whom.)

Comment author: [deleted] 16 May 2012 07:23:21PM 1 point [-]

Have you considered the possibility that even higher salaries might raise productivity further?

I think we should search systematically for ways to convert money into increased productivity.

Comment author: lessdazed 31 May 2012 05:54:35AM 0 points [-]

Apparent poorly grounded belief in SI's superior general rationality

I found this complaint insufficiently detailed and not well worded.

Average people think their rationality is moderately good. Average people are not very rational. SI affiliated people think they are adept or at least adequate at rationality. SI affiliated people are not complete disasters at rationality.

SI affiliated people are vastly superior to others in generally rationality. So the original complaint literally interpreted is false.

An interesting question might be on the level of: "Do SI affiliates have rationality superior to what the average person falsely believes his or her rationality is?"

Holden's complaints each have their apparent legitimacy change differently under his and my beliefs. Some have to do with overconfidence or incorrect self-assessment, others with other-assessment, others with comparing SI people to others. Some of them:

Insufficient self-skepticism given how strong its claims are

Largely agree, as this relates to overconfidence.

...and how little support its claims have won.

Moderately disagree, as this relies on the rationality of others.

Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.

Largely disagree, as this relies significantly on the competence of others.

Paying insufficient attention to the limitations of the confidence one can have in one's untested theories, in line with my Objection 1.

Largely agree, as this depends more on accurate assessment of one's on rationality.

Rather than endorsing "Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments," SI seems often to endorse something more like "Others have not accepted their arguments because they have inferior general rationality," a stance less likely to lead to improvement on SI's part.

There is instrumental value in falsely believing others to have a good basis for disagreement so one's search for reasons one might be wrong is enhanced. This is aside from the actual reasons of others.

It is easy to imagine an expert in a relevant field objecting to SI based on something SI does or says seeming wrong, only to have the expert couch the objection in literally false terms, perhaps ones that flow from motivated cognition and bear no trace of the real, relevant reason for the objection. This could be followed by SI's evaluation and dismissal of it and failure of a type not actually predicted by the expert...all such nuances are lost in the literally false "Apparent poorly grounded belief in SI's superior general rationality."

Such a failure comes to mind and is easy for me to imagine as I think this is a major reason why "Lack of impressive endorsements" is a problem. The reasons provided by experts for disagreeing with SI on particular issues are often terrible, but such expressions are merely what they believe their objections to be, and their expertise is in math or some such, not in knowing why they think what they think.

Comment author: ghf 11 May 2012 10:38:10PM *  5 points [-]

My hope is that the upcoming deluge of publications will answer this objection, but for the moment, I am unclear as to the justification for the level of resources being given to SIAI researchers.

Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.

This level of freedom is the dream of every researcher on the planet. Yet it's unclear why these resources should be devoted to your projects. While I strongly believe that the current academic system is broken, you are asking for a level of support granted to top researchers, prior to having made any original breakthroughs yourself.

If you can convince people to give you that money, wonderful. But until you have made at least some serious advancement to demonstrate your case, donating seems like an act of faith.

It's impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system and I will be delighted to see that development bear fruit over the coming years. But, at present, I don't see evidence that the work being done justifies or requires that support.

Comment author: lukeprog 11 May 2012 10:48:13PM 8 points [-]

This level of freedom is the dream of every researcher on the planet. Yet, it's unclear why these resources should be devoted to your projects.

Because some people like my earlier papers and think I'm writing papers on the most important topic in the world?

It's impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system...

Note that this isn't uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.

Comment author: ghf 11 May 2012 11:15:03PM *  13 points [-]

First, let me say that, after re-reading, I think my previous post came off as condescending/confrontational, which was not my intent. I apologize.

Second, after thinking about this for a few minutes, I realized that some of the reason your papers seem so fluffy to me is that they argue what I consider to be obvious points. In my mind, of course we are likely "to develop human-level AI before 2100." Because of that, I may have tended to classify your work as outreach more than research.

But outreach is valuable. And, so that we can factor out the question of the independent contribution of your research, having people associated with SIAI with the publications/credibility to be treated as experts has gigantic benefits in terms of media multipliers (being the people who get called on for interviews, panels, etc). So, given that, I can see a strong argument for publication support being valuable to the overall organization goals regardless of any assessment of the value of the research.

Note that this isn't uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.

My only point was that, in those situations, usually researchers are brought in with prior recognized achievements (or, unfortunately all too often, simply paper credentials). SIAI is bringing in people who are intelligent but unproven and giving them the resources reserved for top talent in academia or industry. As you've pointed out, one of the differences with SIAI is the lack of hoops to jump through.

Edit: I see you commented below that you view your own work as summarization of existing research and we agree on the value of that. Sorry that my slow typing speed left me behind the flow of the thread.

Comment author: Bugmaster 11 May 2012 10:53:44PM 2 points [-]

Researchers at private companies do the same.

It's true at my company, at least. There are quite a few papers out there authored by the researchers at the company where I work. There are several good business reasons for a company to invest time into publishing a paper; positive PR is one of them.

Comment author: Eliezer_Yudkowsky 11 May 2012 05:00:20AM 4 points [-]

And note that these improvements would not and could not have happened without more funding than the level of previous years - if, say, everyone had been waiting to see these kinds of improvements before funding.

Comment author: lukeprog 11 May 2012 08:13:02AM *  54 points [-]

note that these improvements would not and could not have happened without more funding than the level of previous years

Really? That's not obvious to me. Of course you've been around for all this and I haven't, but here's what I'm seeing from my vantage point...

Recent changes that cost very little:

  • Donor database
  • Strategic plan
  • Monthly progress reports
  • A list of research problems SI is working on (it took me 16 hours to write)
  • IntelligenceExplosion.com, Friendly-AI.com, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk (each of these took me only 10-25 hours to create)
  • Detailed tracking of the expenses for major SI projects
  • Staff worklogs
  • Staff dinners (or something that brought staff together)
  • A few people keeping their eyes on SI's funds so theft would be caught sooner
  • Optimization of Google Adwords

Stuff that costs less than some other things SI had spent money on, such as funding Ben Goertzel's AGI research or renting downtown Berkeley apartments for the later visiting fellows:

  • Research papers
  • Management of staff and projects
  • Rachael Briggs' TDT write-up
  • Best-practices bookkeeping/accounting
  • New website
  • LaTeX template for SI publications; references checked and then organized with BibTeX
  • SEO

Do you disagree with these estimates, or have I misunderstood what you're claiming?

Comment author: David_Gerard 12 May 2012 06:37:08PM *  19 points [-]

A lot of charities go through this pattern before they finally work out how to transition from a board-run/individual-run tax-deductible band of conspirators to being a professional staff-run organisation tuned to doing the particular thing they do. The changes required seem simple and obvious in hindsight, but it's a common pattern for it to take years, so SIAI has been quite normal, or at the very least not been unusually dumb.

(My evidence is seeing this pattern close-up in the Wikimedia Foundation, Wikimedia UK (the first attempt at which died before managing it, the second making it through barely) and the West Australian Music Industry Association, and anecdotal evidence from others. Everyone involved always feels stupid at having taken years to achieve the retrospectively obvious. I would be surprised if this aspect of the dynamics of nonprofits had not been studied.)

edit: Luke's recommendation of The Nonprofit Kit For Dummies looks like precisely the book all the examples I know of needed to have someone throw at them before they even thought of forming an organisation to do whatever it is they wanted to achieve.

Comment author: Eliezer_Yudkowsky 12 May 2012 04:04:19AM 18 points [-]

Things that cost money:

  • Amy Willey
  • Luke Muehlhauser
  • Louie Helm
  • CfAR
  • trying things until something worked
Comment author: lukeprog 14 May 2012 10:07:06AM 65 points [-]

I don't think this response supports your claim that these improvements "would not and could not have happened without more funding than the level of previous years."

I know your comment is very brief because you're busy at minicamp, but I'll reply to what you wrote, anyway: Someone of decent rationality doesn't just "try things until something works." Moreover, many of the things on the list of recent improvements don't require an Amy, a Luke, or a Louie.

I don't even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.

When I was made Executive Director and phoned our Advisors, most of them said "Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!"

That is the kind of thing that makes me want to say that SingInst has "tested every method except the method of trying."

Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping... these are all literally from the Nonprofits for Dummies book.

Maybe these things weren't done for 11 years because SI's decision-makers did make good plans but failed to execute them due to the usual defeaters. But that's not the history I've heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I've heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.

Money wasn't the barrier to doing many of those things, it was a gap in general rationality.

I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.

At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. (And I'm not the only SIer who felt this way at the time.)

But now I do feel comfortable asking people to donate to SingInst. I'm excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.

Comment author: Eliezer_Yudkowsky 21 May 2012 04:29:45AM 32 points [-]

Luke has just told me (personal conversation) that what he got from my comment was, "SIAI's difficulties were just due to lack of funding" which was not what I was trying to say at all. What I was trying to convey was more like, "I didn't have the ability to run this organization, and knew this - people who I hoped would be able to run the organization, while I tried to produce in other areas (e.g. turning my back on everything else to get a year of FAI work done with Marcello or writing the Sequences) didn't succeed in doing so either - and the only reason we could hang on long enough to hire Luke was that the funding was available nonetheless and in sufficient quantity that we could afford to take risks like paying Luke to stay on for a while, well before we knew he would become Executive Director".

Comment author: Will_Sawin 12 June 2012 05:23:10AM 1 point [-]

Does Luke disagree with this clarified point? I do not find a clear indicator in this conversation.

Comment author: lukeprog 28 August 2013 07:40:42PM *  10 points [-]

Update: I came out of a recent conversation with Eliezer with a higher opinion of Eliezer's general rationality, because several things that had previously looked to me like unforced, foreseeable mistakes by Eliezer now look to me more like non-mistakes or not-so-foreseeable mistakes.

Comment author: MarkusRamikin 14 May 2012 03:41:32PM 28 points [-]

You're allowed to say these things on the public Internet?

I just fell in love with SI.

Comment author: lukeprog 26 May 2012 12:33:50AM *  21 points [-]

You're allowed to say these things on the public Internet?

Well, at our most recent board meeting I wasn't fired, reprimanded, or even questioned for making these comments, so I guess I am. :)

Comment author: TheOtherDave 14 May 2012 04:20:43PM 8 points [-]

Well, all we really know is that he chose to. It may be that everyone he works with then privately berated him for it.
That said, I share your sentiment.
Actually, if SI generally endorses this sort of public "airing of dirty laundry," I encourage others involved in the organization to say so out loud.

Comment author: shminux 14 May 2012 06:04:43PM 18 points [-]

I just fell in love with SI.

It's Luke you should have fallen in love with, since he is the one turning things around.

Comment author: wedrifid 26 May 2012 02:24:14AM 43 points [-]

It's Luke you should have fallen in love with, since he is the one turning things around.

On the other hand I can count with one hand the number of established organisations I know of that would be sociologically capable of ceding power, status and control to Luke the way SingInst did. They took an untrained intern with essentially zero external status from past achievements and affiliations and basically decided to let him run the show (at least in terms of publicly visible initiatives). It is clearly the right thing for SingInst to do and admittedly Luke is very tall and has good hair which generally gives a boost when it comes to such selections - but still, making the appointment goes fundamentally against normal human behavior.

(Where I say "count with one hand" I am not including the use of any digits thereupon. I mean one.)

Comment author: Matt_Simpson 19 July 2012 07:05:00PM 7 points [-]

...and admittedly Luke is very tall and has good hair which generally gives a boost when it comes to such selections...

It doesn't matter that I completely understand why this phrase was included, I still found it hilarious in a network sitcom sort of way.

Comment author: [deleted] 14 May 2012 07:58:32PM *  0 points [-]

Consider the implications in light of HoldenKarnofsky's critique of SI's pretensions to high rationality.

  1. Rationality is winning.

  2. SI, at the same time as it was claiming extraordinary rationality, was behaving in ways that were blatantly irrational.

  3. Although this is supposedly due to "the usual causes," rationality (winning) subsumes overcoming akrasia.

  4. HoldenKarnofsky is correct that SI made claims for its own extraordinary rationality at a time when its leaders weren't rational.

  5. Further: why should anyone give SI credibility today—when it stands convicted of self-serving misrepresentation in the recent past?

Comment author: ciphergoth 15 May 2012 06:26:06AM 5 points [-]

You've misread the post - Luke is saying that he doesn't think the "usual defeaters" are the most likely explanation.

Comment author: lukeprog 25 May 2012 05:42:34PM 3 points [-]

Correct.

Comment author: thomblake 14 May 2012 08:03:44PM 5 points [-]

As a minor note, observe that claims of extraordinary rationality do not necessarily contradict claims of irrationality. The sanity waterline is very low.

Comment author: TheOtherDave 14 May 2012 09:12:55PM 5 points [-]

Do you mean to imply in context here that the organizational management of SIAI at the time under discussion was above average for a nonprofit organization? Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality? I certainly agree with the latter.

Comment author: shminux 14 May 2012 08:10:09PM *  0 points [-]

Just to let you know, you've just made it on my list of the very few LW regulars I no longer bother replying to, due to the proven futility of any communications. In your case it is because you have a very evident ax to grind, which is incompatible with rational thought.

Comment author: metaphysicist 14 May 2012 08:34:42PM 1 point [-]

This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn't he have an ax to grind? More of one really, since this ax chops trees of gold.

It would seem intellectual honesty would require that you say you reject discussions with people with an ax to grind, unless you grind a similar ax.

Comment author: Benquo 14 May 2012 02:21:30PM *  17 points [-]

This makes me wonder... What "for dummies" books should I be using as checklists right now? Time to set a 5-minute timer and think about it.

Comment author: [deleted] 26 May 2012 11:38:50PM 5 points [-]

What did you come up with?

Comment author: Benquo 28 May 2012 09:02:01PM *  4 points [-]

I haven't actually found the right books yet, but these are the areas where I decided I should find some "for beginners" text. The important insight is that I'm allowed to use these books as skill/practice/task checklists or catalogues, rather than ever reading them all straight through.

General interest:

  • Career

  • Networking

  • Time management

  • Fitness

For my own particular professional situation, skills, and interests:

  • Risk management

  • Finance

  • Computer programming

  • SAS

  • Finance careers

  • Career change

  • Web programming

  • Research/science careers

  • Math careers

  • Appraising

  • Real Estate

  • UNIX

Comment author: grendelkhan 28 March 2013 02:43:27PM 0 points [-]

For fitness, I'd found Liam Rosen's FAQ (the 'sticky' from 4chan's /fit/ board) to be remarkably helpful and information-dense. (Mainly, 'toning' doesn't mean anything, and you should probably be lifting heavier weights in a linear progression, but it's short enough to be worth actually reading through.)

Comment author: David_Gerard 14 May 2012 03:32:38PM 0 points [-]

The For Dummies series is generally very good indeed. Yes.

Comment author: JoshuaZ 14 May 2012 03:44:03PM 15 points [-]

The largest concern from reading this isn't really what it brings up in a management context, but what it says about SI in general. Here is an area where there's real expertise and basic books that discuss well-understood methods, and they didn't do any of that. Given that, how likely should I think it is that when SI and mainstream AI people disagree, part of the problem may be the SI people not paying attention to basics?

Comment author: TheOtherDave 14 May 2012 04:17:42PM 4 points [-]

(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there's lots of evidence available as to how effective they are.

Comment author: ciphergoth 21 May 2012 06:06:19PM 1 point [-]

I doubt there's all that much of a correlation between these things to be honest.

Comment author: private_messaging 16 May 2012 01:43:25PM *  0 points [-]

Precisely. For an example of one existing baseline: the existing software that searches for solutions to engineering problems, such as 'self improvement' via the design of better chips. It works within a narrowly defined field to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone's contemporary paperclip maximizer? By how much?

This is incredibly relevant to AI risk, but the analysis can't be faked without really having technical expertise.

Comment author: Steve_Rayhawk 21 October 2012 10:10:58AM *  13 points [-]

these are all literally from the Nonprofits for Dummies book. [...] The history I've heard is that SI [...]

failed to read Nonprofits for Dummies,

I remember that, when Anna was managing the fellows program, she was reading books of the "for dummies" genre and trying to apply them... it's just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were "what it takes to manage well" (i.e. "basic management") and "what it takes to be productive", rather than "what it takes to (help) operate a nonprofit according to best practices". So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization was trying to do, there wasn't really any cognitive space left over to effectively notice the possibility that those wouldn't be the skills that other people afterwards would complain that nobody acquired and obviously should have known to. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen's skill learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)

failed to ask advisors for advice,

I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI's current strategies with them and listened to their suggestions. But I don't know how much she went out of her way to find people she didn't already have reasonably reliable positive contact with, to get advice from them.

I don't know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the "everyone outside's psychological barriers" side of that, he was at least successful enough to keep SIAI's public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don't have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn't one of those things though.

But the proper approach to retrospective judgement is generally a confusing question.

the kind of thing that makes me want to say [. . .]

The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn't be bigger than those of the other fires they were trying to put out.

strategic plan [...] SI failed to make these kinds of plans in the first place,

There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn't on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options -- how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.

expenses tracking, funds monitoring [...] some funds monitoring was insisted upon after the large theft

There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits) who would have been the one informally expected to have been monitoring for that sort of thing, and to have told someone about it if she saw something, out of the like three paid administrative slots at the time... well, yeah, that didn't happen.

I agree with a paraphrase of John Maxwell's characterization: "I'd rather hear Eliezer say 'thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be "]organizational best practices["]', because this seems like a better depiction of what actually happened." Note that this was most of the purpose of the Fellows program in the first place -- to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.

Comment author: Louie 18 November 2012 10:04:40AM 8 points [-]

Note that this was most of the purpose of the Fellows program in the first place -- [was] to help sort/develop those people into useful roles, including replacing existing management

FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively given that that's what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was "too competent" and that I should go do something more useful with my talent, like start another business... not "waste my time working directly at SI."

Comment author: John_Maxwell_IV 19 December 2012 01:31:42PM 1 point [-]

"I'd rather hear Eliezer say 'thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be "]organizational best practices["]', because this seems like a better depiction of what actually happened."

Seems like a fair paraphrase.

Comment author: David_Gerard 26 May 2012 11:32:43PM 6 points [-]

This inspired me to make a blog post: You need to read Nonprofit Kit for Dummies.

Comment author: David_Gerard 27 May 2012 08:02:08AM 5 points [-]

... which Eliezer has read and responded to, noting he did indeed read just that book in 2000 when he was founding SIAI. This suggests having someone of Luke's remarkable drive was in fact the missing piece of the puzzle.

Comment author: ciphergoth 27 May 2012 09:26:28AM 4 points [-]

Fascinating! I want to ask "well, why didn't it take then?", but if I were in Eliezer's shoes I'd be finding this discussion almost unendurably painful right now, and it feels like what matters has already been established. And of course he's never been the person in charge of that sort of thing, so maybe he's not who we should be grilling anyway.

Comment author: David_Gerard 27 May 2012 10:22:17AM *  8 points [-]

Obviously we need How to be Lukeprog for Dummies. Luke appears to have written many fragments for this, of course.

Beating oneself up with hindsight bias is IME quite normal in this sort of circumstance, but not actually productive. Grilling the people who failed makes it too easy to blame them personally, when this is a pattern I've seen lots and lots of times, suggesting the problem is not a personal failing.

Comment author: ciphergoth 27 May 2012 11:23:11AM 4 points [-]

Agreed entirely - it's definitely not a mark of a personal failing. What I'm curious about is how we can all learn to do better at the crucial rationalist skill of making use of the standard advice about prosaic tasks - which is manifestly a non-trivial skill.

Comment author: David_Gerard 14 May 2012 03:30:18PM 2 points [-]

That book looks like the basic solution to the pattern I outline here, and from your description, most people who have any public good they want to achieve should read it around the time they think of getting a second person involved.

Comment author: lukeprog 15 July 2012 10:57:25PM *  1 point [-]

You go to war with the army you have, not the army you might want.

Donald Rumsfeld

Comment author: Eliezer_Yudkowsky 15 July 2012 11:38:21PM 7 points [-]

...this was actually a terrible policy in historical practice.

Comment author: Vaniver 16 July 2012 12:16:19AM 2 points [-]

That only seems relevant if the war in question is optional.

Comment author: Eliezer_Yudkowsky 16 July 2012 02:09:44AM 5 points [-]

Rumsfeld is speaking of the Iraq war. It was an optional war, the army turned out to be far understrength for establishing order, and they deliberately threw out the careful plans for preserving e.g. Iraqi museums from looting that had been drawn up by the State Department, due to interdepartmental rivalry.

This doesn't prove the advice is bad, but at the very least, Rumsfeld was just spouting off Deep Wisdom that he did not benefit from spouting; one would wish to see it spoken by someone who actually benefited from the advice, rather than someone who wilfully and wantonly underprepared for an actual war.

Comment author: Vaniver 16 July 2012 02:27:10AM 8 points [-]

just spouting off Deep Wisdom that he did not benefit from spouting

Indeed. The proper response, which is surely worth contemplation, would have been:

Victorious warriors win first and then go to war, while defeated warriors go to war first and then seek to win.

Sun Tzu

Comment author: ghf 11 May 2012 10:06:54PM *  7 points [-]

And note that these improvements would not and could not have happened without more funding than the level of previous years

Given the several year lag between funding increases and the listed improvements, it appears that this was less a result of a prepared plan and more a process of underutilized resources attracting a mix of parasites (the theft) and talent (hopefully the more recent staff additions).

Which goes towards a critical question in terms of future funding: is SIAI primarily constrained in its mission by resources or competence?

Of course, the related question is: what is SIAI's mission? Someone donating primarily for AGI research might not count recent efforts (LW, rationality camps, etc) as improvements.

What should a potential donor expect from money invested into this organization going forward? Internally, what are your metrics for evaluation?

Edited to add: I think that the spin-off of the rationality efforts is a good step towards answering these questions.

Comment author: John_Maxwell_IV 11 May 2012 05:07:40AM 1 point [-]

This seems like a rather absolute statement. Knowing Luke, I'll bet he would've gotten some of it done even on a limited budget.

Comment author: ciphergoth 11 May 2012 06:08:58AM 7 points [-]

Luke and Louie Helm are both on paid staff.

Comment author: John_Maxwell_IV 12 May 2012 12:29:55AM *  10 points [-]

I'm pretty sure their combined salaries are lower than the cost of the summer fellows program that SI was sponsoring four or five years ago. Also, if you accept my assertion that Luke could find a way to do it on a limited budget, why couldn't somebody else?

Givewell is interested in finding charities that translate good intentions into good results. This requires that the employees of the charity have low akrasia, desire to learn about and implement organizational best practices, not suffer from dysrationalia, etc. I imagine that from Givewell's perspective, it counts as a strike against the charity if some of the charity's employees have a history of failing at any of these.

I'd rather hear Eliezer say "thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and care about organizational best practices", because this seems like a better depiction of what actually happened. I don't get the impression SI was actively looking for folks like Louie and Luke.

Comment author: p4wnc6 12 May 2012 01:48:28AM 3 points [-]

Yes to this. Eliezer's claim about the need for funding may be vulnerable to many of Luke's criticisms above. But usually the most important thing you need is talent, and attracting talent does require funding.

Comment author: Pablo_Stafforini 24 March 2013 06:40:48PM 0 points [-]

All publications being converted into slick, useable LaTeX template (example)

The 'example' link is dead.

Comment author: lukeprog 24 March 2013 08:50:55PM 0 points [-]

Fixed.

Comment author: aceofspades 05 July 2012 06:53:37PM *  -1 points [-]

The things posted here are not impressive enough to make me more likely to donate to SIAI, and I doubt they appear so to others on this site, especially the many lurkers/infrequent posters here.