Over the past few months, the Singularity Institute has published many papers on topics related to Friendly AI. It's wonderful that these ideas are getting written up, and it's virtually always better to do something suboptimal than to do nothing. However, I will make the case below that academic papers are a terrible way to discuss Friendly AI, and other ideas in that region of thought space. We need something better.
I won't try to argue that papers aren't worth publishing. There are many reasons to publish papers - prestige in certain communities and promises to grant agencies, for instance - and I haven't looked at them all in detail. However, I think there is a conclusive case that as a discussion forum - a way for ideas to be read by other people, evaluated, spread, criticized, and built on - academic papers fail. Why?
1. The time lag is huge; it's measured in months, or even years.
Ideas structured like the Less Wrong Sequences, with large inferential distances between beginning and ending, have huge webs of interdependencies: to read A you have to read B, which means you need to read C, which requires D and E, and on and on and on. Ideas build on each other. Einstein built on Maxwell, who built on Faraday, who built on Newton, who built on Kepler, who built on Galileo and Copernicus.
For this to happen, ideas need to get out there - whether orally or in writing - so others can build on them. The publication cycle for ideas is like the release cycle for software. It determines how quickly you can get feedback, fix mistakes, and then use whatever you've already built to help make the next thing. Most academic papers take months to write up, and then once written up, take more months to publish. Compare that to Less Wrong articles or blog posts, where you can write an essay, get comments within a few hours, and then write up a reply or follow-up the next day.
Of course, some of that extra time lag is that big formal documents are sometimes needed for discussion, and big formal documents take a while. But academic papers aren't just limited by writing and reviewing time - they still fundamentally operate on the schedule of the seventeenth-century Transactions of the Royal Society. When Holden published his critique of the Singularity Institute on Less Wrong, a big formal document, Eliezer could reply with another big formal document in about three weeks.
2. Most academic publications are inaccessible outside universities.
This problem is familiar to anyone who's done research outside a university: the ubiquitous journal paywall. People complain about how the New York Times and Wall Street Journal have paywalls, but at least you can pay for those if you really want to. It isn't practical for almost anyone doing research to pay for the articles they need out-of-pocket, since journals commonly charge $30 or more per article, and any serious research project involves dozens or even hundreds of articles. Sure, there are ways to get around the system, and you can try to publish (and get everyone else in your field to publish) in open-access journals, but why introduce a trivial inconvenience?
3. Virtually no one reads most academic publications.
This obviously goes together with point #2, but even within universities, it's rare for papers, dissertations or even books to be read outside a very narrow community. Most people don't regularly read journals outside their field, let alone outside their department. Readership statistics for academic papers are hard to come by, but as one data point: I was a math major in undergrad, and I can't even understand the titles of most new math papers. More broadly, the print run of most academic books is very small, only a few hundred or so. The average Less Wrong post gets more views than that.
4. It's very unusual to make successful philosophical arguments in paper form.
When doing research for Personalized Medicine, I often read papers to discover the results of some experiment. Someone gave drug X to people with disease Y. What were the results? How many were cured? How many had side effects? What were the costs and benefits? All useful information.
However, most recent Singularity Institute papers are neither empirical ("we did experiment X, these are the results") nor mathematical ("if you assume A, B, and C, then D and E follow"). Rather, they are philosophical, like Paul Graham's essays. I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper. Books, magazines, blog posts - sure, but papers just don't seem to be a thing.
5. Papers don't have prestige outside a narrow subset of society.
Several other arguments here - the time lag, for instance - also apply to books. However, society in general recognizes that writing a book is a noteworthy achievement, especially if it sells well. A successful author, even if not compensated well, is treated a little like a celebrity: media interviews, fan clubs, crazy people writing him letters in green ink, etc. (This is probably related to them not being paid well: in the labor market, payment in social status probably substitutes to a high degree for payment in money, as we see with actors and musicians.)
There's nothing comparable for academic papers. No one ever writes a really successful paper, and then goes on The Daily Show, or gets written up in the New York Times, or gets harassed by crowds of screaming fangirls. (There are a few exceptions, like medicine, but philosophy and computer science are not among them.) E.g., a lot of people are familiar with Ioannidis's paper, Why most published research findings are false. However, he also wrote another paper, a few years earlier, titled Replication validity of genetic association studies. That paper actually has more citations - over 1300 at last count. But not only have we not heard of it, no one else outside the field has either. (Try Googling it, and you'll see what I mean.)
6. Getting people to read papers is difficult.
Most intellectual people regularly read books, blogs, newspapers, magazines, and other common forms of memetic transmission. However, it's much less common for people to read papers, so when someone asks "hey, this thing is a crazy idea, why should I believe it?", pointing them at a paper gives them few affordances for actually following up. Papers are, intentionally, written for an audience of specialists rather than a general interest group, which reduces both the tendency and the ability of non-specialists to read them when asked (and also violates the "Explainers shoot high - aim low" rule).
7. Academia selects for conformity.
The whole point of tenure is to avoid selecting for conformity - if you have tenure, the theory goes, you can work on whatever you want, without fear of being fired or otherwise punished. However, only a small (and shrinking) number of academics have tenure, and to make sure fools don't get it, academia has resorted to lots and lots of negative selection. The famous letter by chemistry professor Erick Carreira illustrates what the selection pressure is like. As in medicine or investment banking, there's a single, narrow "track", and people who deviate at any point are pruned. Lee Smolin has written about this phenomenon in string theory, in his famous book The Trouble with Physics.
Things may change in the future, but as it stands now, many ideas like the Singularity are non-conformist, well outside the mainstream. They aren't likely to go very far in an environment where deviations from the norm are seen negatively.
8. The current community isn't academic in origin.
This isn't an airtight argument, because it's heuristic - "things which worked well before will probably work again". However, heuristic arguments still have a lot of validity. One of the key purposes of a discussion forum, like Less Wrong or the SL4 list that was, is to get new people with bright ideas interested in the topics under discussion. Academia's track record of getting new people interested isn't that great - of the current Singularity Institute directors and staff, only one (Anna Salamon) has an academic background, and she dropped out of her PhD program to work for SIAI. What has been successful, so far, at bringing new people into our community? I haven't analyzed it in depth, but whatever the answer is, the priors are that it will work well again.
9. Our ideas aren't academic in origin.
Similarly to #8, this is a "heuristic argument" rather than an airtight proof. But I still think it's important to note that our current ideas about Friendly AI - any given AI will probably destroy the world, mathematical proof is needed to prevent that, human value is complicated and hard to get right, and so on - were developed primarily through in-person and mailing list discussions, not through papers. I'm also not aware of any ideas which came into our community through papers. Even science fiction has a better track record - e.g., some of our key concepts originated in Vinge's True Names and Other Dangers. What formats have previously worked well for discussing ideas?
10. Papers have a tradition of violating the bottom line rule.
In a classic paper, one starts with the conclusion in the abstract, and then builds up an argument for it in the paper itself. Paul Graham has a fascinating essay on this form of writing, and how it came to be - it ultimately derives from the legal tradition, where one takes a position (guilty or innocent), and then defends it. However, this style of writing violates the bottom line rule. Once something is written on the paper, it is already either right or wrong, no matter what clever arguments you come up with in support of it. This doesn't make it wrong, of course, but it does tend to create a fitness environment where truth isn't selected for, just as Alabama creates a fitness environment where startups aren't selected for.
11. Academic moderation is both very strict and badly run.
All forums need some sort of moderation to avoid degenerating. However, academic moderation is very strict by normal standards - in a lot of journals, only a small fraction of submissions get approved. In addition, academic moderation has a large random element, and is just not very good overall; many quality papers get rejected, and many obvious errors slip through.
As if that wasn't enough, most journals are single-blind rather than double-blind. You don't know who the moderators are, but they know who you are, raising the potential for all kinds of obvious unfairness. The most common kind of bias is one that hurts us unusually badly: people from prestigious universities are given a huge leg up, compared to people outside the system.
(This article has been cross-posted to my blog, The Rationalist Conspiracy.)
EDIT #1: As Lukeprog notes in the comments, academic papers are not our main discussion forum for FAI ideas. In practice, the main forum is still in-person conversations. However, in-person conversations have critical limitations too, albeit more obvious ones. Some crucial limits are the small number of people who can participate at any one time; the lack of any external record that can be looked up later; the lack of any way to "broadcast" key findings to a larger audience (you can shout, but that's not terribly effective); and the lack of lots of time to think, since each participant in the conversation can't really wait three hours before replying.
EDIT #2: To give a specific example of an alternative forum for FAI discussion, I think the proposal for an AI Risk wiki would solve most of the problems listed here.
Grab the interest of smart people who won't be grabbed by cheaper methods. This has worked before. Also: Many smart and productive people are extremely busy, and they use "Did they bother to pass peer review?" as a filter for what they choose to read. In addition, many smart people prefer to read papers over blog posts because papers are generally better organized, are more clearly written, helpfully cite related work, etc.
Reduce communication overhead. We don't have time to have a personal conversation with every interested smart person, and blog posts are often too disorganized and ambiguous to help. Though for this, a scholarly AI risk wiki would probably be even better. Luckily, as I say in that post, there isn't much additional cost involved in turning parts of papers into wiki articles, or combining wiki articles into papers.
Grab some prestige and credibility, because this matters to lots of the people we care about.
Show that we're capable of doing serious research. "Eliezer did some work with Marcello that we can never tell you about" and "We wrote some blog posts this month" don't quite show to most people that we can do research.
Be kinda-forced into writing more clearly, and in a way that is more thoroughly connected to the relevant empirical literatures, than we might otherwise be tempted to write.
As I said before, many people find papers more readable than ambiguous blog posts barely connected to the relevant literatures. Eliezer's papers aren't written in a different style than his blog posts, anyway. Also, peer review often improves the final product.
Agree with (a) and somewhat with (b), but we're only writing certain things in paper form. Like I said, the vast majority of FAI work and discussion happens outside papers. I don't know what you mean by (c).
I don't care about something like "average prestige in academia." What I care about is some particular people thinking we have enough credibility to bother reading and engaging with. Many of the people I care about won't bother to check whether the author of an article has elite university affiliation, but will care if we bothered to write up our ideas clearly and with references to related work. The Singularity and Machine Ethics looks much less crankish than Creating Friendly AI, even though none of the authors have elite university affiliation.
Still gathering data, and I haven't gathered permission to share it. I think two people who wouldn't mind you knowing they came to x-risk through "Astronomical Waste" are Nick Beckstead and Jason Gaverick Matheny.
Point taken.
My intended point was that sometimes a paper has summed up the main points from something that Eliezer took 30 blog posts to write when he wrote The Sequences. But obviously you don't have to write a paper to do this, so I drop the point.
Remember: almost all FAI research is not done via papers. In my above list of reasons why SI publishes papers, I didn't even think to mention "to produce original research" (and I won't go back and add it now), though that sometimes happens.
If one journal is poorly moderated, then you jump to another one. Unlike Mafia bosses, a "problem" with journal moderators means "I wasted a few hours communicating with them and making revisions," not "They decided to cut off my thumbs."
Re-replying:
For the people who "are extremely busy, and they use 'Did they bother to pass peer review?' as a filter for what they choose to read": which specific examples are you thinking of, and how many of them became nontrivial members of our community, or helped us out in nontrivial ways?
I'm sure there are people who a) are very smart, b) look impressive on paper, who c) we've contacted about FAI research, and d) have said "I'm not going to pay attention, since this isn't peer reviewed" (or some equivalent). However, I