About a month ago I accidentally found out that the LW idea of quining cooperation has already been studied in academia:
1) Moshe Tennenholtz's 2004 paper Program Equilibrium describes the idea of programs cooperating in the Prisoner's Dilemma by inspecting each other's source code (see the sketch just after this list).
2) Lance Fortnow's 2009 paper Program Equilibria and Discounted Computation Time describes an analogue of Benja Fallenstein's idea for implementing correlated play, among other things.
3) Peters and Szentes's 2012 paper Definable and Contractible Contracts studies quining cooperation over a wider class of definable (not just computable) functions.
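To make that concrete, here's a minimal Python sketch of the kind of program Tennenholtz's setting allows. The bot names are mine, and I use Python's inspect module as a shortcut around the actual quine construction, so treat it as an illustration rather than a faithful implementation:

```python
# Minimal sketch of quining cooperation in the program-equilibrium setting.
# Hypothetical names; inspect.getsource stands in for a true quine.
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source code is identical to our own."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, ignoring the opponent's source."""
    return "D"

def play(bot1, bot2):
    """One round of the PD where each bot sees the other's source code."""
    return bot1(inspect.getsource(bot2)), bot2(inspect.getsource(bot1))

print(play(clique_bot, clique_bot))  # ('C', 'C'): mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D'): defectors get defected against
```

The fragility is visible right away: any cosmetic difference between the two programs' sources breaks cooperation, which is exactly the brittleness that Loebian cooperation was meant to fix.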
As far as I know, academia still hasn't discovered Loebian cooperation or the subsequent ideas about formal models of UDT, but I might easily be wrong about that. In any case, the episode has given me a mini-crisis of faith, and a new appreciation of academia. That was a big part of the motivation for my previous post.
At the beginning of this year I dove into psychology in my free time. I skimmed hundreds, maybe thousands of papers. I expected to find awesome useful ideas. Let me try to explain how much crap I found instead.
It's easy to nitpick any specific piece of psychology. Fodor argued about empty labels that made no predictions. Computational models of memory were neither efficient nor biologically plausible. The concept literature is a mess of people arguing over the extent to which concepts are built on rules, or relations, or exemplars, or prototypes, or affordances, or schemas, or codelets, or perceptual symbols, or analogies, without ever finding the underlying math that explains why concepts are useful, or when and how any of that underlying concept-stuff could work.
But maybe those are examples of people failing to make unified theories out of their experimental results. What if we focus on the experiments? Here's the setup of a fairly recent nominal combination experiment: a child is asked to picture a "zebra clam", and then to point at the image most similar to their imagined interpretation of the phrase. This is not an unusual experiment for psychology. Asking people what they think, and thereby distilling their entire thought process down to one data point, is the norm in psychology, not the exception. For example, the entire personality psychology research program was built on self- and peer-evaluations, just asking people "are you a pretty confident person?" and so on. That alone is amazing. It's like using the motion of an electron you don't understand to predict the motion of a proton you don't understand. But back to the zebra clam. No one decided to try that study because they thought it would answer any foundational question of their field. It wasn't valuable information, it was just an easy test they could run, one that sounded relevant to their favorite child's-interpretation-of-zebra-clam hypothesis.
That's a taste of what I see in psychology research: hundreds of studies that never should have been done; a field that doesn't know what observables to measure (I don't know either! Brains are hard. I'm not saying I would be a great psychologist, just that psychology happens to be in bad shape at present); and fluff arguments about intuitive theories.
Those were some of the fruits of my last research binge. Now I'm looking at logic, decision theory, and game theory. The situation there is different, but not much better. That said, while academics are generally incompetent, they're not incompetent relative to anyone else: I don't know of anywhere to find people who can reliably solve new, alien problems.
One area where psychology journals are said to be leading the way is in encouraging discussion of effect sizes, not just statistical significance. It's shocking to me that this isn't standard practice in every journal.
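For concreteness, an effect size says how big a difference is, not merely whether it's statistically distinguishable from zero. Here is a minimal sketch of one standard measure, Cohen's d, with made-up numbers (a generic illustration, not tied to any particular journal's guidelines):

```python
# Minimal sketch of a standard effect-size measure, Cohen's d:
# the difference between two group means in units of pooled standard deviation.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance, n-1 denominator
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Made-up scores for a treatment and a control group.
treatment = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4]
control = [4.8, 5.0, 4.7, 5.1, 4.9, 4.6]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")  # ~1.60 for these numbers
```

Reporting something like d alongside a p-value lets the reader judge whether a "significant" result is actually big enough to matter.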