lessdazed comments on Why Academic Papers Are A Terrible Discussion Forum - Less Wrong

25 Post author: alyssavance 20 June 2012 06:15PM




Comment author: lukeprog 20 June 2012 06:46:39PM 24 points

My reply, in the context of Singularity Institute research:

academic papers are a terrible way to discuss Friendly AI

Almost all FAI discussion happens outside of papers. It happens on mailing lists, forums like Less Wrong, email threads, personal conversations, etc. Yesterday I had a three hour discussion about FAI with Eliezer, Paul Christiano, and Anna Salamon where we covered more ground than we possibly could in a 20-page paper because there's so much background material that we all agree on but hasn't been written up. Nobody is waiting around for papers to come out to advance FAI theory; that's not what papers are for.

The time lag is huge

Most SI papers borrow heavily from material that originated from mailing list discussions or LW posts, and most peer-reviewed SI publications are posted in preprint version when they are written instead of months later when they are published by the academic publisher.

Most academic publications are inaccessible outside universities

All SI publications are published on our website, which is open to everyone. Same goes for all of Nick Bostrom's papers.

Virtually no one reads most academic publications.

Not via the journals and academic books themselves, no. That's why SI and FHI publish their papers on their own websites, where they are read by far more people than read them in the journals themselves.

It's very unusual to make successful philosophical arguments in paper form. I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper.

Don't generalize from one example. I'm slowly surveying a good chunk of the "player characters" in the x-risk reduction space, and a good chunk of them were hugely influenced by Eliezer's two GCR chapters or by Bostrom's Astronomical Waste.

Papers don't have prestige outside a narrow subset of society.

But we care unusually much about that narrow subset of society. Also, I don't write papers so much for prestige as for the fact that it forces me to write in a way that is unusually clear, well-referenced (so that people can check what other people are saying about each individual element), well-structured, careful, and so on. In contrast, people who read the Hanson-Yudkowsky debate find five different ways to interpret every other paragraph, have no references by which to check anything, and come away with no idea what to think.

Getting people to read papers is difficult.

Not as hard as getting them to read The Sequences. Also, many of the people we care about (e.g. me) find it easier to read papers than to read a few blog posts, because papers tend to be written more clearly and point the reader to related sources.

Academia selects for conformity.

No problem; there are plenty of journals that are likely to publish the kinds of papers SI publishes, and some already have.

What has been successful, so far, at bringing new people into our community? I haven't analyzed it in depth, but whatever the answer is, the prior is that it will work well again.

As said previously, most FAI discussion still happens outside of papers, but in fact it turns out that several important people did come through Eliezer's and Bostrom's papers.

it's important to note that our current ideas about Friendly AI ... were not developed through papers, but through in-person and mailing list discussions (primarily).

Same goes for all new areas of research. They're developed in person and on mailing lists long before they end up in journal articles.

Academic moderation is both very strict and badly run.

This is sometimes a problem, sometimes not. Communications of the ACM might reject the paper Nick Bostrom and I wrote for it because it's too philosophical and we don't have the space to respond to all common objections. So we may end up publishing it somewhere else. But with my two TSH chapters, all that happened was that I got a bunch of feedback, some of it useful and some of it not, so I incorporated the useful feedback and ignored the useless feedback and published significantly improved papers as a result. Other people I've spoken to about this have reported a similar spread of experiences.

Also see two of my previous posts on the topic, neither of which I agree with anymore: How SIAI could publish in mainstream cognitive science journals and Reasons for SIAI to not publish in mainstream journals.

Comment author: alyssavance 20 June 2012 07:10:06PM 4 points

Hi Luke! Thanks for replying. Quick counterpoints:

  • Probably most importantly, what do you view as the purpose of SIAI's publishing papers? Or, if there are multiple purposes, which do you see as the most important?

  • If in-person conversations (despite all their limitations) are still the much preferred way to discuss things, instead of papers, that's evidence in favor of papers being bad. (It's also evidence of SIAI being effective, which is great, but that isn't the point under discussion.) If papers were a good discussion forum, there'd be fewer conversations and more papers.

  • If, as you say, the main audience for papers written by SIAI is through SIAI's website and not through the journals themselves, why spend the time and expense and hassle to write them up in journal form? Why not just publish them directly on the site, in (probably) a much more readable format?

  • The problem with conformity in academia isn't that it's impossible to find someplace to publish. You can always find somewhere, given enough effort. The problem is that a) it restricts the sorts of things you can say, b) restricts you, in many cases, to an awkward way of wording things (which I believe you've written about at http://lesswrong.com/lw/4r1/how_siai_could_publish_in_mainstream_cognitive/), and c) it makes academia a less fertile ground for recruiting people. Those are probably in addition to other problems.

  • I agree that we care more about prestige within academia than we do about prestige in almost all similarly sized groups. However, it seems fairly clear that we aren't going to have that much prestige in academia anyway, given that the main prestige mechanism is elite university affiliations, and most of us don't have those.

  • Which people have come through Eliezer and Bostrom's papers? (That isn't a rhetorical question; given how large our community is compared to Dunbar's number, it's likely there is someone and it's also likely I've missed them, and they might be really cool people to know.)

  • Using my own personal experiences is generalizing from a single dataset, and that's indeed biased in some ways. However, it's very far from generalizing from a single example; it's generalizing from the many thousands of arguments that I've read and accepted at some point in the past. It's still obviously better to use multiple datasets, if you can get them — but in this case they're difficult to get, because it's hard to know where your friends got all their beliefs.

  • Sure, it's easier to get people to read a single paper than all of the Sequences. But that's a totally unfair comparison: the Sequences are much, much longer, and it's always easier to read something shorter than something longer. How hard would it be to get someone to read a paper, vs. a single Sequence post of equal length, or a bunch of Sequence posts that sum to an equal length?

  • If all new areas of research are developed through in-person conversations and mailing lists, that doesn't imply that papers are a good way to do FAI research; it implies that papers are a bad way to do all those other kinds of research. If what you say is true, then my argument equally well applies to those fields too.

  • Of course, there are some instances of academic moderation being net good rather than net bad. However, to quote one of your earlier arguments, "don't generalize from one example". I'm sure that there are some well-moderated journals, just as I'm sure there are Mafia bosses who are really nice, helpful guys. However, that doesn't imply that hanging out with Mafia bosses is a good idea.

Comment author: lessdazed 20 June 2012 08:28:15PM -1 points

Probably most importantly, what do you view as the purpose of SIAI's publishing papers? Or, if there are multiple purposes, which do you see as the most important?

To think of some things I do that have only one important purpose, I had to perform the ritual of closing my eyes and thinking about nothing else for a few minutes by the clock.

I plan on assuming things have multiple important purposes and asking for several, e.g. "what do you view as the purposes of X."

There was nothing wrong with what you said, but it is strange how easily the (my?) mind stops questioning after coming up with just one purpose for something someone is doing. In contrast, when justifying one's own behavior, it is easy to think of multiple justifications.

It makes some sense in a story about motivated cognition and tribal arguments. It might be that to criticize, we look mostly for something someone does that has no justification, and invest less in attacking someone along a road that has some defenses. A person being criticized invests in defending against those attacks they know are coming, and does not try to think of all possible weaknesses in their position. There is some advantage in being genuinely blind to one's weaknesses so one can, without lying, be confident in one's own position.

Maybe it is ultimately unimportant to ask what the "purposes" of someone doing something are, since they will be motivated to justify themselves as much as possible. In this case, asking what the "purpose" is would force them to concentrate on their most persuasive and potentially best argument, even though it will rarely actually be the case that one purpose accounts for a large supermajority of their motivation.