Related: Why Academic Papers Are A Terrible Discussion Forum, Four Layers of Intellectual Conversation

During a recent discussion about (in part) academic peer review, some people defended peer review as necessary in academia, despite its flaws, for time management. Without it, they said, researchers would be overwhelmed by "cranks and incompetents and time-card-punchers" and "semi-serious people post ideas that have already been addressed or refuted in papers already". I replied that on online discussion forums, "it doesn't take a lot of effort to detect cranks and previously addressed ideas". I was prompted by Michael Arc and Stuart Armstrong to elaborate. Here's what I wrote in response:

My experience is with systems like LW. If an article is in my own specialty then I can judge it easily and make comments if it’s interesting; otherwise I look at its votes and other people’s comments to figure out whether it’s something I should pay more attention to. One advantage over peer review is that each specialist can see all the unfiltered work in their own field, and it only takes one person among all the specialists in a field to recognize that a work may be promising, comment on it, and draw others’ attention. Another advantage is that nobody can make ill-considered comments without suffering personal consequences, since everything is public. This seems like an obvious improvement over standard pre-publication peer review, for the purpose of filtering out bad work and focusing attention on promising work, and in practice it works reasonably well on LW.

Apparently some people in academia have come to similar conclusions about how peer review is currently done and are trying to reform it in various ways, including switching to post-publication peer review (which seems very similar to what we do on forums like LW). However, it's troubling (in a "civilizational inadequacy" sense) that academia is moving so slowly in that direction, despite the necessary enabling technology having been invented a decade or more ago.


Thank you for not giving up on this discussion! Many people have mentioned the intellectual benefits of peer review, but I just thought of another argument that might be new to you.

Many of us agree that solving problems together is great fun. But what if it's just rationalization? What if we really want to participate in some status economy, and will come up with smart things to say only if we're paid with status in return? I know it's not true for you, because you came up with UDT on your own. But it's definitely true for me. Posting something like this and getting no response feels very discouraging to me, even if the topic is exciting. And since I'm close to the top of the LW heap, I imagine it's even more true for others.

The question then becomes, how do we set up a status economy that will encourage research? Peer review is one way, because publications and citations are a status badge desired by many people. Participating in a forum like LW when it's "hot" and frequented by high status folks is another way, but unfortunately we don't have that anymore. From that perspective it's easy to see why the massively popular HPMOR didn't attract many new researchers to AI risk, but attracted people to HPMOR speculation and rational fic writing. People do follow their interests sometimes, but mostly they try to find venues to show off.

Of course you could be happy with a system that's optimized for people like you, with few status rewards. But I suspect you'd miss out on many good contributors (think of all the smart people who drifted away from LW in recent years). I'd prefer to have something more like a pyramid or funnel, with popular appeal on one end and intellectual progress on the other. Academic credibility (including peer review) could be a key part of that funnel for us, and a central forum like LW would also help a lot. There are probably other measures that could work in synergy with these.

I wonder if people at MIRI think the same way. In a sense, the funnel idea was there from the beginning, as "raising the sanity waterline". CFAR can also be seen as part of that. But these efforts are mostly aimed at outreach, and I'm not sure they ever consciously tried to build a mechanism for converting status to research. What would it take to build such a mechanism today?

I know it's not true for you, because you came up with UDT on your own.

I have to think about the rest of your comment carefully, but I want to correct this before too many people read it. I think status is in fact a significant motivation even for me, and even the more "pure" motivations like intellectual curiosity can in some sense be traced back to status. It seems unlikely that UDT would have been developed without the existence of forums like extropians, everything-list, and LW, for reasons of both motivation and feedback/collaboration.

  • LW2 is in the works, and is an opportunity to make significant improvements to the model. Contribute ideas to make it better! I'll contribute some + yell at interesting people to get off FB or at least x-post

  • I think your honest admission of strong status motivation is very important. A big reason high-status ppl avoid the forum is not wanting to be bogged down with n00bs and cranks. The LW2 karma system + moderation will be really important to keep them around. Any ideas on improving it?

LW2 better hurry up. Healing a patient is much easier than resurrecting one.

Removing obstacles isn't enough to create a status economy. Creating an inflow of status is much more important. If people can't think beyond the default "let's ask old timers to contribute", it will most likely fail. You need to find a creative idea or ten.

The old LW status economy was enough to motivate you, no? Do you feel getting the old timers back would not be enough for you, or for other people?

Very curious about the inflow, what do you think that might look like? Any examples from other forums or social networks or games?

Some ideas:

  • Open up MIRI's internal discussions about strategy as they happen.
  • Open up MIRI's workshops as they happen, let people participate remotely.
  • Prizes, like Quantified Health or Paul's recent offer of funding for AI alignment research.
  • Post drafts of papers before publishing them.
  • Summarize and discuss ideas from Arbital.
  • Summarize and discuss Paul's work.
  • Guest posts / debates / AMAs with high profile non-LW people. Think someone like Nick Bostrom ideally.
  • Merge IAFF and the MIRI news blog into LW 2.0.

These are great ideas, and they make me optimistic that LW 2.0 can succeed. :)

how do we set up a status economy that will encourage research? Peer review is one way

Peer review by itself does not encourage (good) research, but merely mutual back-scratching. There is an astounding amount of published peer-reviewed crap -- see e.g. gender studies and such.

[anonymous]

Agreed. It happens in STEM as well, e.g. lots of "semantic web" papers are like that. Some of it can be traced to grant committees being clueless. Right now many of the folks giving money to MIRI do have a clue, we should keep it that way.

[This comment is no longer endorsed by its author]

From that perspective it's easy to see why the massively popular HPMOR didn't attract many new researchers to AI risk, but attracted people to HPMOR speculation and rational fic writing.

I think this is a nice insight that hadn't occurred to me before.

Participating in a forum like LW when it's "hot" and frequented by high status folks is another way, but unfortunately we don't have that anymore.

From looking around on the LW 2.0 closed beta, it will have some features specifically designed to attract some of the people who left, such as letting trusted authors have their own areas where they exercise greater moderation power. This will also hopefully prevent new high-status folks from leaving later.

I want to echo Dr_Manhattan and suggest that you take a look at LW2 beta and see what more can be done there to support your ideas. They are planning to launch on November 1 with an open beta a few weeks before, so major new features are probably out (at least until later), but things like changes to the karma or moderation system are probably still possible. The people behind LW2 are planning to write a post soon about the karma changes and ask for review/suggestions so you can hold off your ideas until then as well.

Academic credibility (including peer review) could be a key part of that funnel for us

How do you envision this? Like if we get results published in academia, that will draw more people into this community? This makes me a bit worried that if being published in academia is the ultimate marker of status in this community, that'll discourage people who have a distaste for academia (like me when I first joined). May still be a good idea though...

Yeah, I agree that the connection to academia shouldn't be the end goal, but it could be one of several factors that help.

We also need to figure out how to avoid the bad incentives of academia. For example, to avoid the problem of publishing papers for the sake of publishing or for the sake of gaining citation counts, we should only reward someone with status if the act of academic publishing leads to positive consequences, for example productive academic research that otherwise would have been unlikely to occur (and not just additional citations), or practical deployment of the idea that otherwise would have been unlikely to occur. But this will be hard to do in practice, whereas counting papers / citations is easy.

We have to keep in mind that the people who created the status / monetary economy in academia surely didn't intend to cause the incentive problems that now exist within it, and many people both in academia and out (e.g., policy makers, funders) have probably since noticed those problems and would love to have ways to fix them, but the bad incentives still exist. In some sense, inefficiencies are simply inevitable due to the multi-player nature of the game. The best we can do is perhaps to have a different set of inefficiencies / bad incentives, which allow us to reach a different (and not necessarily larger) set of low-hanging fruit.

I think this suggests that we should be ruthless about avoiding becoming just like academia, and throw out the baby with the bathwater if necessary to avoid it. For example, if we can't figure out a foolproof way of rewarding academic publishing only when it leads to positive consequences, we should assume that trying to reward academic publishing will lead to inefficiencies / bad incentives similar to ones existing in academia, and therefore not try to reward academic publishing at all. Or, alternatively, we need to create the kind of culture where we can say, "oops, rewarding people for academic publishing is making us too much like academia in terms of sharing the same set of inefficiencies / bad incentives, so we shouldn't be rewarding people for academic publishing anymore" but that seems significantly harder to accomplish. Academia is very likely a basin of attraction in the space of culture and institutional design, and we risk irreversibly falling into it just by getting close.

As long as MIRI is led and funded by people who care about the actual goal rather than citations, I don't see why we would go astray.

I can see a couple different ways that it could happen. Funders might have trouble judging actual progress in the absence of academic peer-reviewed publications and citations. Especially as more academics join the AI risk field and produce more papers and citations, funders might be tempted to think that they should re-direct resources towards academia (in part for subconscious status reasons). MIRI may have to switch to more academic norms in order to compete, which would then rub off on LW. (This seems to already be happening to some extent.) Or LW moves towards a more academic culture for internal status-economics reasons, and MIRI leaders may not have much control over that. (In that world, maybe LWers will eventually look down upon MIRI for not being sufficiently academic.)

You know what I will say: y'all should stay in your lane, re: incentives.

Yudkowsky's incentives caused him to write HPMOR (which has precisely zero (0) academic value), and publish basically nothing. So as far as the mainstream is concerned his footprint does not exist. He's collecting a salary at MIRI, presumably. What is that salary buying?

Mainstream academics who collect a salary will say they teach undergraduates, and publish stuff to make grant agencies happy. Some of that stuff is useless, a lot of it is very useful indeed.


Reform attempts for "non-aligned" ecosystems like academia will almost certainly not work because (as you all are well aware) "aligning" is hard.


MIRI has the same problem everyone else has: if it grows it will become a non-aligned ecosystem, if it doesn't grow it will not have any impact.

You know what I will say: y'all should stay in your lane, re: incentives.

I don't understand this. Please clarify? (Urban dictionary says "stay in your lane" means mind your own business, which is exactly what we're doing, namely trying to figure out what direction to push our own culture.)

and publish basically nothing

He's publishing mostly on Arbital these days. See this and this for examples. I'm not sure why he doesn't at least post links elsewhere to draw people's attention though. Hopefully that will change after LW 2.0 goes live.

So as far as the mainstream is concerned his footprint does not exist.

I'm not sure what you mean by this either. Certainly the people who work on AI safety at Berkeley, OpenAI, and DeepMind all know about Eliezer and MIRI's approach to AI alignment, even if they don't agree that it's the most promising one. Are you saying that if Eliezer had published in academia, they'd be more inclined to follow that approach, as opposed to the more ML-based approaches that they're currently following?

MIRI has the same problem everyone else has: if it grows it will become a non-aligned ecosystem

I think having "aligned" human institutions is too much to hope for. As I mentioned elsewhere in this thread, perhaps the best we can do is to have different bad incentives / inefficiencies in different institutions so that they're able to reach different sets of low hanging fruit, and not all suffer from the same collective blind spots.

Please clarify?

I get super annoyed by criticisms of mainstream academia out of the rationality-sphere, I suppose (mostly because it falls into either stuff that every academic already knows about that's very hard to fix, or just vastly misinformed stuff). Roko the other day on facebook: "academia produces nothing of value."

I'm not sure what you mean by this either.

I suppose what I mean by this is that academia functions on a dual currency/kudos system. "Academic kudos" is acquired by playing certain formal games within mainstream academia (publications in fancy journals and so on). So, for example, if Tegmark published Life 3.0, and it reached the best-seller list that would not award him a ton of "academic kudos" (well, at least in my opinion, Hanson might disagree). Instead, that would be called "being good with the media."

"Academic kudos" is a bit different from "I have heard of you."


I think having "aligned" human institutions is too much to hope for.

I agree with this entire paragraph. I am a big fan of letting a thousand flowers bloom.

So what's the fix here? If people think mailing lists work better than peer review, maybe an organization like OpenPhil should set up a mailing list for academics working on AI safety and award grants based on discussions on the mailing list? Academia has a lot of momentum behind it, and it seems more efficient to redirect that momentum than try to set up something new from scratch.

It's probably not as simple as that. Part of why online discussions work as well as they do is probably that there's no money riding on them. If funders start making grant decisions based on mailing list discussions, we might start seeing mailing lists becoming politicized to an uncomfortable and unproductive degree. I think for now the "fix" is just for people to monitor efforts to reform peer review in academia and adopt the ones that work well into the AI safety field, and also maintain a number of AI safety research institutions with diverse cultures instead of e.g. demanding that everyone publish in academic venues as a condition for funding.

I replied that on online discussion forums, "it doesn't take a lot of effort to detect cranks and previously addressed ideas".

It takes a lot of effort, so much so that academics just gave up (Scott Aaronson had a post on this). I gave up doing this here.

I agree that peer review has a lot of problems, though.

I guess I can see how it might be too much effort if you're trying to participate in online discussions in addition to academia (and your main effort by necessity has to be in academia because that's your livelihood). If you only had to do the former though, it doesn't seem that bad, at least in my experience. (Would appreciate a link to Scott Aaronson's post if you can find it.)

EDIT: Maybe as a busy academic, just look at posts that are already highly upvoted or have positive comments from people you trust. Is it still too much effort if you did that?

One advantage of peer review is that it helps the author improve the paper. I have one published article that greatly benefited from two anonymous reviewers who found some important flaws. If I had just published it somewhere, they might have simply ignored it and the improvement would not have happened. But the peer review system forced them to search for flaws according to some questionnaire, and each had to write a couple of pages.

I think it depends, for example on who are your peers in the "peer review" process and what kind of online forums you frequent.

Generally speaking, this is the problem of filtering out noise and finding honest and competent people to comment on your papers. It's a hard problem. Peer review is not a perfect solution, but neither is online discussion.

Detecting previously addressed ideas is a major impediment, due to non-obvious terminology.