Related: Why Academic Papers Are A Terrible Discussion Forum, Four Layers of Intellectual Conversation
During a recent discussion about (in part) academic peer review, some people defended peer review as necessary in academia, despite its flaws, for time management. Without it, they said, researchers would be overwhelmed by "cranks and incompetents and time-card-punchers" and "semi-serious people post ideas that have already been addressed or refuted in papers already". I replied that on online discussion forums, "it doesn't take a lot of effort to detect cranks and previously addressed ideas". I was prompted by Michael Arc and Stuart Armstrong to elaborate. Here's what I wrote in response:
My experience is with systems like LW. If an article is in my own specialty then I can judge it easily and make comments if it's interesting; otherwise I look at its votes and other people's comments to figure out whether it's something I should pay more attention to. One advantage over peer review is that each specialist can see all the unfiltered work in their own field, and it only takes one person among all the specialists in a field to recognize that a work may be promising, comment on it, and draw others' attention. Another advantage is that nobody can make ill-considered comments without suffering personal consequences, since everything is public. This seems like an obvious improvement over standard pre-publication peer review, for the purpose of filtering out bad work and focusing attention on promising work, and in practice it works reasonably well on LW.
Apparently some people in academia have come to similar conclusions about how peer review is currently done and are trying to reform it in various ways, including switching to post-publication peer review (which seems very similar to what we do on forums like LW). However it's troubling (in a "civilizational inadequacy" sense) that academia is moving so slowly in that direction, despite the necessary enabling technology having been invented a decade or more ago.
We also need to figure out how to avoid the bad incentives of academia. For example, to avoid the problem of publishing papers for the sake of publishing or for the sake of gaining citation counts, we should only reward someone with status if the act of academic publishing leads to positive consequences, for example productive academic research that otherwise likely wouldn't have occurred (and not just additional citations), or practical deployment of the idea that otherwise likely wouldn't have occurred. But this will be hard to do in practice, whereas counting papers / citations will be easy.
We have to keep in mind that the people who created the status / monetary economy in academia surely didn't intend to cause the incentive problems that now exist within it, and many people both in academia and out (e.g., policy makers, funders) have probably since noticed those problems and would love to have ways to fix them, but the bad incentives still exist. In some sense, inefficiencies are simply inevitable due to the multi-player nature of the game. The best we can do is perhaps to have a different set of inefficiencies / bad incentives, which allow us to reach a different (and not necessarily larger) set of low-hanging fruit.
I think this suggests that we should be ruthless about avoiding becoming just like academia, and throw out the baby with the bathwater if necessary to avoid it. For example, if we can't figure out a foolproof way of rewarding academic publishing only when it leads to positive consequences, we should assume that trying to reward academic publishing will lead to inefficiencies / bad incentives similar to the ones existing in academia, and therefore not try to reward academic publishing at all. Or, alternatively, we need to create the kind of culture where we can say, "oops, rewarding people for academic publishing is making us too much like academia in terms of sharing the same set of inefficiencies / bad incentives, so we shouldn't be rewarding people for academic publishing anymore," but that seems significantly harder to accomplish. Academia is very likely a basin of attraction in the space of culture and institutional design, and we risk irreversibly falling into it just by getting close.
As long as MIRI is led and funded by people who care about the actual goal rather than citations, I don't see why we would go astray.