When I showed up at the Singularity Institute, I was surprised to find that 30-60 papers' worth of material was lying around in blog posts, mailing list discussions, and people's heads — but it had never been written up in clear, well-referenced academic articles.

Why is this so? Writing such articles has many clear benefits:

  • Clearly stated and well-defended arguments can persuade smart people to take AI risk seriously, creating additional supporters and collaborators for the Singularity Institute.
  • Such articles can also improve the credibility of the organization as a whole, which is especially important for attracting funds from top-level social entrepreneurs and institutions like the Gates Foundation and GiveWell.
  • Laying out the arguments clearly and analyzing each premise can lead to new strategic insights that will help us understand how to purchase x-risk reduction most efficiently.
  • Clear explanations can provide a platform on which researchers can build to produce new strategic and technical research results.
  • Communicating clearly is what lets other people find errors in your reasoning.
  • Communities can use articles to cut down on communication costs. When something is written up clearly, 1000 people can read a single article instead of needing to transmit the information by having several hundred personal conversations between 2-5 people.

Of course, there are costs to writing articles, too. The single biggest cost is staff time / opportunity cost. An article like "Intelligence Explosion: Evidence and Import" can require anywhere from 150-800 person-hours. That is 150-800 paid hours during which our staff is not doing other critically important things that collectively have a bigger positive impact than a single academic article is likely to have.

So Louie Helm and Nick Beckstead and I sat down and asked, "Is there a way we can buy these articles without such an egregious cost?"

We think there might be. Basically, we suspect that most of the work involved in writing these articles can be outsourced. Here's the process we have in mind:

  1. An SI staff member chooses a paper idea we need written up, then writes an abstract and some notes on the desired final content.
  2. SI pays Gwern or another remote researcher to do a literature search-and-summary of relevant material, with pointers to other resources.
  3. SI posts a contest to LessWrong, inviting submissions of near-conference-level-quality articles that follow the provided abstract and notes on desired final content. Contestants benefit by starting with the results of Gwern's literature summary, and by knowing that they don't need to produce something as good as "Intelligence Explosion: Evidence and Import" to win the prize. First place wins $1200, 2nd place wins $500, and 3rd place wins $200.
  4. Submissions are due 1 month later. Submissions are reviewed, and the authors of the best submissions are sent comments on what could be improved to maximize their chances of coming in first place.
  5. Revised articles are due 3 weeks after comments are received. Prizes are awarded.
  6. SI pays an experienced writer like Yvain or Kaj_Sotala or someone similar to build up and improve the 1st place submission, borrowing the best parts from the other submissions, too.
  7. An SI staff member does a final pass, adding some content, making it more clearly organized and polished, etc. One of SI's remote editors does another pass to make the sentences more perfect.
  8. The paper is submitted to a journal or an edited volume, and is marked as being co-authored by (1) the key SI staff member who provided the seed ideas and guided each stage of the revisions and polishing, (2) the author of the winning submission, and (3) Gwern. (With thanks to contributions from the other contest participants whose submissions were borrowed from — unless huge pieces were borrowed, in which case they may be counted as an additional co-author.)

If this method works, each paper may require only 50-150 hours of SI staff time — a dramatic improvement! And this method has additional benefits:

  • Members of the community who are capable of doing one piece of the process but not the other pieces get to contribute where they shine. (Many people can write okay-level articles but can't do efficient literature searches or produce polished prose, etc.)
  • SI gets to learn more about talent in its community that hasn't yet had the opportunity to flower. (We might be able to directly outsource future work to contest participants, and if one person wins three such contests, that's an indicator that we should consider hiring them.)
  • Additional paid "jobs" (by way of contest money) are created for LW rationalists who have some domain expertise in singularity-related subjects.
  • Many Less Wrongers are students in fields relevant to the subject matter of the papers that will be produced by this process, and this will give them an opportunity to co-author papers that can go on their CV.
  • The community in general gets better at collaborating.

This is, after all, quite similar to how many papers are produced in university departments, where a senior researcher works with a team of students.

Feedback? Interest?

(Not exactly the same, but see also the Polymath Project.)


The process you describe seems feasible, but I don't know how much I'm affected by the fact that I really, really want it to work.

Maybe just run a pilot?

Maybe just run a pilot?

That's the plan.

How did this go? Did you get around to piloting this? 

This is all great except for the contest part, which I might currently have moderate ethical objections to. In general I'm concerned by contests which are held as an alternative to just paying someone to do the work for you; I objected to the contest that SI used to select their new logo (which is great) for the same reasons.

Essentially what you're doing is asking some unknown number of people to work for highly unpredictable pay, which is most likely (assuming at least a half-dozen entries) to be no pay at all. This tactic makes lots of financial sense, and I understand why it would appeal to a cash-strapped non-profit, but it seems to me that if you're going to ask someone to do work for your benefit, you should pay them for it. This ideal is slightly muddier when it comes to non-profits, because I certainly don't think there's anything wrong with asking people to volunteer their time. Perhaps it's the uncertainty that's bothering me; it's as though you're asking people to gamble with their time.

So perhaps it's ethically equivalent to a charity-sponsored raffle, which I don't object to. Is my reasoning wrong, or am I just inconsistent? I'm not sure.

Raemon:

I have a similar problem with contest-labor. I have less of a problem with it for non-profits. But my reasoning is actually particularly relevant to an organization that is (among other things), promoting rationality. (You could argue that it is either more or less concerning, given your pool of volunteers' propensity for rationality)

My problem with contest labor is that it exploits people's probability biases. They see "I could get $1000!". They don't see "the expected value for this labor is about $1.00/hour" (or less), which is usually the case (especially for stuff like logo design). I don't know what the expected value is for a contest like this - the prizes are high enough, and the number of people contributing will probably be low enough, that it may be a pretty good deal.

I don't think this is wrong per se, but it's Dark Arts-ish. (Approximately as Dark Arts as using anchoring in your advertising, but I'm not sure how bad I consider that in the first place)

(Bonus points to anyone who (for some reason?) has been following my posts closely and can point out inconsistencies in my previous comments on similar issues. I have no justification for the inconsistency)

I trust LWers to do expected utility calculations, but it's actually much worse than this.

We may decide whether or not to enter based on our probabilities about how many other people will enter: if I think many people will enter, I shouldn't waste my time, but if I think few people will enter, I have a good chance and should enter. But we also know all of our potential competitors will be thinking the same, and possibly making predictions with a similar algorithm to ourselves.

That makes this an anticoordination problem similar to the El Farol Bar, which is an especially nasty class of game because it means the majority of people inevitably regret their choice. If we predict few people will enter, then that prediction will make many people enter, and we will regret our prediction. If we predict many people will enter, that prediction will make few people enter, and we will again regret our prediction. As long as our choices are correlated, there's no good option!

The proper response would be to pursue a mixed strategy in which we randomly enter or do not enter the contest based on some calculations and a coin flip, but this would unfairly privilege defectors and be a bit mean to the Singularity Institute, especially if people were to settle on a solution like only one person entering each contest - which might end up optimal since more people entering not only linearly decreases chance of winning, but also increases effort you have to put into your entry, eg if you were the only entrant you could just write a single sentence and win by default.

And you might think: then just let everyone know exactly how many people have entered at any one time. But that turns it into a Malthusianism: people will gain no utility by entering the contest, because utility of entering the contest is a function of how many other people are in the contest, and if there were still utility to be gained, more people would enter the contest until that stopped being true.

(Although this comment isn't entirely serious, I honestly worried about some of these issues before I entered the efficient charity contest and the nutrition contest. And, uh, won both of them, which I guess makes me a dirty rotten defector and totally ruins my point.)
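For concreteness, the symmetric mixed strategy described above can be sketched as a toy simulation. The pool size and entry probability here are invented assumptions for illustration, not estimates of any real contest:

```python
import random

# Toy model of the anticoordination game described above: each of
# n_potential would-be entrants independently enters with probability
# p_enter (a symmetric mixed strategy). All numbers are hypothetical.
def average_field_size(n_potential=20, p_enter=0.3, trials=10_000, seed=0):
    rng = random.Random(seed)
    total_entrants = 0
    for _ in range(trials):
        # Each potential entrant flips a biased coin to decide whether to enter.
        total_entrants += sum(rng.random() < p_enter for _ in range(n_potential))
    return total_entrants / trials

# Analytically the expected field size is n_potential * p_enter = 6,
# and the simulation should agree closely.
print(average_field_size())
```

In a full equilibrium analysis, p_enter would be chosen so that the expected payoff of entering (prize value divided by expected field size, minus the cost of writing a submission) is zero; this sketch only shows the field-size half of that calculation.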

And you might think: then just let everyone know exactly how many people have entered at any one time. But that turns it into a Malthusianism: people will gain no utility by entering the contest, because utility of entering the contest is a function of how many other people are in the contest, and if there were still utility to be gained, more people would enter the contest until that stopped being true.

In fairness, this is only true if expected utility is purely a function of the number of participants, as in the El Farol Bar game. Here you also need to consider your strength relative to the field: if you and I both see that 10 people have entered then you might see opportunity where I would not, because you've won two of these and I haven't.

This is more helpful than it sounds at first, because this is really a two-stage game: first you sign up to write the paper, and then you actually write one. Entrants will decide whether to advance to the second stage by an assessment of their own strength relative to the field, which should tend to decrease as the field of entrants grows larger. People with low assessed EVs are thus discouraged from investing - exactly the result we want, so long as their assessments are accurate.

So what other ways could the Game be constructed to avoid this problem?

My problem with contest labor is that it exploits people's probability biases.

On the other hand, there could plausibly be many people who want to help SI but suffer from akrasia issues, partly due to the lack of a concrete reward. Offering a reward, even one that people knew was illusory, might play two biases against each other and get people to do what they'd endorse doing for free anyway.

I don't know how many people fall into this category, but it would at least somewhat describe me. (Or at least would describe me if I weren't currently getting paid to do writing for SI anyway.)

When I entered the Quantified Health contest, I calculated my expected return. I thought it would take me maybe 20-30 hours. I was right. I thought I had a 10% chance of winning $5000, a 10% chance of winning $1000, and a 50% chance of winning $500. That's an $850 expected return, or about $34 an hour to do something that I enjoyed doing, thought was a valuable use of my time, and that taught me research skills and nutrition. I had just graduated high school, so that was way more than the wage I would have gotten in any mind-numbing part-time job in the small town where I was living. So entering was totally worthwhile.

I only won $500, which was an actual return of $20 an hour, but that's still more than you get flipping burgers.

So I think that there's nothing wrong with running these contests. People enter them if they think they should, and they're relatively cheap ways of getting stuff done.
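That back-of-the-envelope calculation can be checked in a few lines. The prize amounts and probabilities below are the commenter's own subjective estimates, not known quantities:

```python
# Expected-value check for entering the contest, using the commenter's
# subjective probability estimates from the comment above.
outcomes = {5000: 0.10, 1000: 0.10, 500: 0.50}  # prize in dollars -> estimated win probability

expected_return = sum(prize * p for prize, p in outcomes.items())
hours = 25  # midpoint of the 20-30 hour time estimate

print(expected_return)          # about 850 dollars
print(expected_return / hours)  # about 34 dollars/hour
```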

I do think those numbers make it a fairly reasonable decision to enter, in that instance. A lot of my concern about contest-labor stems from how it affects the art industry, in which returns end up being less than minimum wage.

I don't know how to expect this to play out over multiple iterations, either.

Thank you for explaining to me what I was thinking. This is exactly my concern.

[anonymous]:

My problem with contest labor is that it exploits people's probability biases.

Exactly -- this is what I understood to be the point of running contests. So presenting such a contest to LessWrong is odd (to put it politely).

[This comment is no longer endorsed by its author]

As long as the process is clear and people know what they're getting into, I don't think there's an issue with this.

Definitely not an ethical issue anyway. Dark Artists, though, present all sorts of stuff they disagree with as "ethical" issues to claim the "high ground" and try to head off debate.

An alternative that you might consider more ethical is to limit the number of contestants and determine payment (or lack thereof) based on an absolute measure of quality rather than through competition.

Yeah, that would definitely allay my concerns. I think it's the uncertainty surrounding the number and quality of other entrants that bothers me.

Is it still unethical if competitors know the field?

I have a hard time seeing how offering someone any deal they aren't being deceived about can be unethical...

See Raemon's comment. The Dark Arts are involved, mere honesty is no defense.

Yes, but those who read this thread know the Dark Arts are involved and can adjust their beliefs accordingly.

Knowing about your biases does not automatically make you immune to them, and saying "but I told them the bias I was exploiting" doesn't excuse you from responsibility for knowingly exploiting a bias.

I didn't claim automatic immunity, I said "can". While deontologists might object to "knowingly exploiting a bias" full stop and virtue ethicists might claim that a person who does such things is probably vicious, a consequentialist must determine whether, in this case, using the Dark Arts might lead to better or worse outcomes (which seems non-obvious to me).

When I tried to write up some decision theory results as a full-length article, it felt really pointless and unpleasant. I couldn't get through even a single paragraph without thinking how much I hate it.

One problem is that even though I enjoy coming up with short and sweet proofs, I don't know in advance which parts will trip up readers and require clarification, and feel very averse to guessing. Here's a recent example. Maybe the right way is to write discussion posts first, then debug the presentation based on reader comments?

But the bigger problem is that academic articles seem to require a lot of fluff that doesn't add value. Moldbug called it "grant-related propaganda", but I'm not sure grants are the main reason why people add fluff. Contrast a typical paper today with John Nash's 1950 paper which doesn't waste even a single word on explaining why the subject matter is relevant to anything. I'd be happy to write things up in the same style, but then journals will just reject my writings, won't they?

I'd be happy to write things up in the same style, but then journals will just reject my writings, won't they?

No, because Yvain and Kaj and I will polish it and add the stuff you call "fluff" but I call "explanatory, clarifying, context-setting stuff."

The only thing is that you and I would have to have enough conversations that I understand what you're talking about, so I can fill in the inferential gaps and hold the reader's hand through the explanation.

The only thing is that you and I would have to have enough conversations that I understand what you're talking about, so I can fill in the inferential gaps and hold the reader's hand through the explanation.

(This might be premature optimization, but:) I suspect this process would go a lot smoother if you could find someone in the Bay Area to act as a sort of on-site translator, 'cuz long distance back-and-forth is sometimes a hassle. Are there any active decision theory hotshots in the Bay Area?

When I tried to write up some decision theory results as a full-length article, it felt really pointless and unpleasant. I couldn't get through even a single paragraph without thinking how much I hate it.

Are you still considering it, since creating the 'writeup' thread on the list, or are you describing what preceded that?

Well, sometime ago I moved past the "considering" stage and started writing, but then gave up. After what Luke said just now, I think I'll give it another try. I gave a lot of weight to Wei's opinion that publishing UDT might be dangerous, but now he seems to think that the really dangerous topic is logical uncertainty, and my mathy ideas serve as a nice distraction from that :-)

Wei? Your response?

I'm not sure that my thoughts on this topic should be taken that seriously, since I'm quite confused, uncertain, and conflicted (a part of me just wants to see these intellectual puzzles solved ASAP), but since you ask... My last thoughts on this topic were:

It's the social consequences that I'm most unsure about. It seems like if SIAI can keep "ownership" over the decision theory ideas and use it to preach AI risk, then that would be beneficial, but it could also be the case that the ideas take on a life of their own and we just end up having more people go into decision theory because they see it as a fruitful place to get interesting technical results.

It seems to me at this point that the most likely results of publishing UDT are 1) it gets ignored by the academic mainstream, or 2) it becomes part of the academic mainstream but detached from AI risk idea. "Sure, those SIAI guys once came up with the (incredibly obvious) idea that decisions are logical facts, but what have they done lately? Hey, let's try this trick to see if we can make our (UDT-inspired) AI do something interesting." But I don't claim to have particularly good intuitions about how academia works so if others think that SIAI can get a lot of traction from publishing UDT, they might well be right.

Also, to clarify, my private comment to cousin_it was meant to be a joke. I don't think the fact that publishing papers about ADT (i.e., proof-based versions of UDT) will distract some people away from UDT (and its emphasis on logical uncertainty) is a very important consideration.

Wei's well-known outside LW, so if he publicly confirmed that logical uncertainty is dangerous, that might itself be dangerous. I'm not sure what the dangers could be of knowing that I don't know everything I know, although thinking about that for too long would probably make me more Will Newsome-ish.

There should be a step 9, where every potential author is sent the final article and has the option of refusing formal authorship (if she doesn't agree with the final article). Convention in academic literature is that each author individually endorses all claims made in an article, hence this final check.

There's a wiki here. Would that be a good place to collaborate on a paper?

As a jobless student, I am very interested. I might as well ask here: if I haven't been contacted again about the research thing that was up on here, does that mean I won't be?

[anonymous]:

I haven't heard back about the remote research position, either. I'm not sure why; it's been almost a month since our submissions were due. Whatever the reason, not hearing back is discouraging. I can deal with a "Thanks for participating, but we've hired another candidate" email, or a "We're still reviewing applications and submissions, thank you for your patience. Expect a decision on " email. But not hearing anything? Gives me a bad pang in my stomach.

While I am also interested in the contest, I am very reluctant to enter because of this. Should I sink another 20-40 hours into researching a technical field and writing a scholarly article when I haven't heard back about the first one? I'm not well versed in expected utility calculations, but my gut reaction is "no." As a jobless student myself, it may be optimal to instead focus on getting a conventional job. Which is disappointing, because I sincerely want to do something meaningful to support myself. Working an entry level job, while better than being broke, doesn't quite give me job satisfaction.

Edit: I also just want to clarify that if it's a matter of Luke not yet having had the time to choose a candidate, I completely understand. Out of the possibilities, I find that scenario the most likely. But it's the lack of communication at all that miffs me.

Hey. Just to clarify, for whatever reason I haven't been contacted with an assignment to try at all so I didn't end up sinking any time into it.

Were you one of the habit-formation applicants?

[anonymous]:

Yes, I was. My article is "How to Change a Habit, and the Implications for Rationality Teaching."

Ah, that one. OK, the way we were doing it was the assistant contacted you guys, you sent in your submissions, I was assigned to read & review them, and based on that and his own reading, Luke picked people to hire.

Of the 41 people to express interest, 16 have actually sent in submissions, and I've read the first 15 including yours. As far as I know, 2 of the 16 were hired in some capacity. I guess you weren't one of them. I didn't know none of you had heard back.

(If you're curious, my assessment of yours was basically that you're right that focusing on environmental changes is a powerful boost to habit-formation, but you missed a lot of important information on what could be done short of moving across the country and the requirements for habit-formation. Middle of the pack.)

Hopefully this is helpful information.

[anonymous]:

Very helpful! Thank you very much.

This seems like a great way of moving forward. I would certainly enter.

What do you estimate a paper written in this way would cost, in total?

I would definitely enter such a competition, even for far less valuable prizes: I enjoy that type of writing, and would love to have some papers on my CV.

would love to have some papers on my CV.

Luke, you might consider emphasizing this as an incentive, rather than the contest money.

Maybe make people who didn't win but still did a vaguely decent job get mentioned as co-authors (maybe not) or acknowledged somehow?

It is important to maintain integrity in assigning credit. But it would be good to encourage everyone to publish, as long as basic quality standards are maintained. There are more and less prestigious academic journals. Some are easy to get accepted to.

The main challenge is to match up people with the relevant skills -- domain knowledge, writing ability, skill in academic-style writing -- and to match up people of the relevant skill levels. (You might want to match higher-ability people with higher-ability people to get ideal results, but if a lower-ability person shows promise you might want to give them a higher-ability mentor.)

[anonymous]:

benefits... ...improve the credibility of the organization as a whole,

Another benefit: Having a body of academic publications is one part of creating "permission" for smart academics, e.g., PhD students, to get involved in FAI without ending their careers.

[This comment is no longer endorsed by its author]

Instead of a contest, we may want to try the Open Source way. More specifically, set up a wiki. Off the top of my head, we could:

  • Propose different versions for the same paper (on different pages)
  • Manage the different sections of the paper separately (though it may be difficult to keep them sufficiently self contained).
  • Borrow ideas from each other during the elaboration of the paper.
  • Lower the barrier to entry. People can make smaller and more targeted contributions. I for one would be happy to make suggestions, or hunt for errors. I'm not likely however to pull out a full paper (even if it's just a draft).

Some disadvantages can be:

  • Edit wars (can be mitigated by giving each contributor their own page per paper).
  • Not holding off on proposing solutions.
  • Other mind-killing effects due to identifying with one's own version of the paper.
  • While we can keep the contest format, there is a chance that contributions are diluted to the point where it becomes difficult to give proper thanks. (Real World Haskell, for instance, has my name in its thanks section for 2 or 3 not-so-useful comments I made.)
  • A lower barrier to entry may also produce noise that hinders more useful contributors.