I strongly suspect that there is a possible art of rationality (attaining the map that reflects the territory, choosing so as to direct reality into regions high in your preference ordering) which goes beyond the skills that are standard, and beyond what any single practitioner singly knows. I have a sense that more is possible.
The degree to which a group of people can do anything useful about this will depend overwhelmingly on what methods we can devise to verify our many amazing good ideas.
I suggest stratifying verification methods into 3 levels of usefulness:
- Reputational
- Experimental
- Organizational
If your martial arts master occasionally fights realistic duels (ideally, real duels) against the masters of other schools, and wins or at least doesn't lose too often, then you know that the master's reputation is grounded in reality; you know that your master is not a complete poseur. The same would go if your school regularly competed against other schools. You'd be keepin' it real.
Some martial arts fail to compete realistically enough, and their students go down in seconds against real streetfighters. Other martial arts schools fail to compete at all—except based on charisma and good stories—and their masters decide they have chi powers. In this latter class we can also place the splintered schools of psychoanalysis.
So even just the basic step of trying to ground reputations in some realistic trial other than charisma and good stories has tremendous positive effects on a whole field of endeavor.
But that doesn't yet get you a science. A science requires that you be able to test 100 applications of method A against 100 applications of method B and run statistics on the results. Experiments have to be replicable and replicated. This requires standard measurements that can be run on students who've been taught using randomly-assigned alternative methods, not just realistic duels fought between masters using all of their accumulated techniques and strength.
The field of happiness studies was created, more or less, by realizing that asking people "On a scale of 1 to 10, how good do you feel right now?" was a measure that statistically validated well against other ideas for measuring happiness. And this, despite all skepticism, looks like it's actually a pretty useful measure of some things, if you ask 100 people and average the results.
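To make the experimental level concrete, here is a minimal Python sketch of comparing two teaching methods on such a 1-to-10 measure, with 100 randomly-assigned students per group. All the numbers below (means, noise, effect size) are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 100 students taught with method A, 100 with method B,
# each reporting a noisy 1-to-10 rating. The underlying effect is made up.
method_a = np.clip(rng.normal(loc=5.5, scale=2.0, size=100), 1, 10)
method_b = np.clip(rng.normal(loc=6.0, scale=2.0, size=100), 1, 10)

# Welch's t-test: does method B outscore method A beyond what chance predicts?
t_stat, p_value = stats.ttest_ind(method_b, method_a, equal_var=False)
print(f"mean A = {method_a.mean():.2f}, mean B = {method_b.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Nothing about the t-test is special here; the point is just that a replicable group comparison needs a standard measurement run on many randomly-assigned students, not a duel between masters.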
But suppose you wanted to put happier people in positions of power—pay happy people to train other people to be happier, or employ the happiest at a hedge fund? Then you're going to need some test that's harder to game than just asking someone "How happy are you?"
This question of verification methods good enough to build organizations is a huge problem at all levels of modern human society. If you're going to use the SAT to control admissions to elite colleges, then can the SAT be defeated by studying just for the SAT in a way that ends up not correlating to other scholastic potential? If you give colleges the power to grant degrees, then do they have an incentive not to fail people? (I consider it drop-dead obvious that the task of verifying acquired skills and hence the power to grant degrees should be separated from the institutions that do the teaching, but let's not go into that.) If a hedge fund posts 20% returns, are they really that much better than the indices, or are they selling puts that will blow up in a down market?
If you have a verification method that can be gamed, the whole field adapts to game it, and loses its purpose. Colleges turn into tests of whether you can endure the classes. High schools do nothing but teach to statewide tests. Hedge funds sell puts to boost their returns.
On the other hand—we still manage to teach engineers, even though our organizational verification methods aren't perfect. So what perfect or imperfect methods could you use for verifying rationality skills, that would be at least a little resistant to gaming?
(Added: Measurements with high noise can still be used experimentally, if you randomly assign enough subjects to have an expectation of washing out the variance. But for the organizational purpose of verifying particular individuals, you need low-noise measurements.)
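A toy simulation of that point, with invented numbers: the per-measurement noise below swamps any individual's score, but the difference between group means still surfaces once n is large enough:

```python
import numpy as np

rng = np.random.default_rng(1)
true_gap = 0.3   # hypothetical small real difference between the two groups
noise_sd = 2.0   # per-measurement noise, much larger than the effect

for n in (10, 100, 1000):
    control = rng.normal(0.0, noise_sd, size=n)
    treated = rng.normal(true_gap, noise_sd, size=n)
    gap = treated.mean() - control.mean()
    # The standard error of the difference in means shrinks as 1/sqrt(n),
    # so the group-level signal emerges even though no single measurement
    # says anything reliable about one person.
    se = noise_sd * np.sqrt(2.0 / n)
    print(f"n={n:5d}: observed gap = {gap:+.2f}  (standard error ~ {se:.2f})")
```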
So I now put to you the question—how do you verify rationality skills? At any of the three levels? Brainstorm, I beg you; even a difficult and expensive measurement can become a gold standard to verify other metrics. Feel free to email me at sentience@pobox.com to suggest any measurements that are better off not being publicly known (though this is of course a major disadvantage of that method). Stupid ideas can suggest good ideas, so if you can't come up with a good idea, come up with a stupid one.
Reputational, experimental, organizational:
- Something the masters and schools can do to keep it real (realistically real);
- Something you can do to measure each of a hundred students;
- Something you could use as a test even if people have an incentive to game it.
Finding good solutions at each level determines what a whole field of study can be useful for—how much it can hope to accomplish. This is one of the Big Important Foundational Questions, so—
Think!
(PS: And ponder on your own before you look at the other comments; we need breadth of coverage here.)
This is a problem with “class tests” of anything, of course. I've thought (for more than five minutes) about your post, but I didn't come up with much specifically about rationality testing. (Except for “automatically build arbitrary but coherent «worlds», let students model them, and then check how well their models fit «reality» afterwards”, which is an obvious application of the definition, and which has already been suggested several times.)
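A minimal sketch of what such a generated-«world» test might look like; the hidden linear rule, the least-squares stand-in for a student, and the scoring below are all invented for illustration, not a worked-out proposal:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "world": a hidden rule mapping observable features to outcomes.
hidden_weights = rng.normal(size=4)

def world(features):
    # The world's true (hidden) law, plus a little irreducible noise.
    return features @ hidden_weights + rng.normal(scale=0.1, size=len(features))

# The student sees a batch of observations from the world...
train_x = rng.normal(size=(50, 4))
train_y = world(train_x)

# ...builds a model of it (here, a stand-in student who fits least squares)...
student_model, *_ = np.linalg.lstsq(train_x, train_y, rcond=None)

# ...and is then scored on how well that model fits unseen parts of "reality".
test_x = rng.normal(size=(20, 4))
test_y = world(test_x)
mse = np.mean((test_x @ student_model - test_y) ** 2)  # lower = better map
print(f"student's mean squared error on unseen data: {mse:.4f}")
```

The hard design work, of course, is making the generated worlds rich enough that modeling them exercises actual rationality skills rather than mere curve-fitting.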
I've come up with a few thoughts on testing in general:
1) As you say, cheap-but-game-able tests are often useful; we do have useful universities despite the problem of universities awarding diplomas to their own students. I think this is more than just “works well enough”; in some cases it's actually useful: (a) Having good tests (e.g., administered by a third party) requires defining well in advance exactly what you're testing, but in many cases it can be useful for a school to experiment with what it teaches (and even why), and then the only test needed is internal. (b) In many (most?) cases, you can't really test an ability until you actually try using it. There are plausible cases where a quick-and-dirty (but cheap) test (e.g., a university diploma) is needed only to pre-select people (i.e., weed out most incompetents), after which the real testing happens on actual work (e.g., hiring interviews and tests, then a probation period). If you make the initial test «better» (e.g., harder to game) but more expensive, you may actually be losing if it's not «better» in the sense of accurately measuring whatever you need people to be good at.
OK, now I'm getting to what you're saying about doing well in class but badly in real life. The obvious solution is to do the testing in real life: first weed out the bad as well as you can with an approximate test (how well you do on this tests your map against reality), then “hire” (whatever that means in the context) the people who look promising, make them do real work, and evaluate them there.
You don't have to evaluate everything they do, as long as you do it randomly (so that nobody knows when they're being evaluated). The fact that random testing is done can be safely made public: if you don't know when it happens, the only way to “game” this is to actually be as good as you can be all the time.
The random testing can be passive (e.g., audits) or active (e.g., penetration testing). The only trick is that you have to do it often enough to give significant information, and that the tested can't tell when they're being tested. For instance, testing for biases can be very useful even in a context where everybody is extensively familiar with their existence, as long as you do it often enough to have a decent chance of catching people unawares. (This is hard to do, which is why such tests are difficult, and why university exams are still useful.)
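For rough intuition about “often enough”: if each piece of work is independently audited with probability p, then over n pieces of work the chance of being audited at least once is 1 - (1 - p)^n. The rates below are made-up illustrations:

```python
# Chance of being audited at least once, if each of n pieces of work
# is independently audited with probability p (illustrative numbers).
for p in (0.01, 0.05, 0.10):
    for n in (10, 100):
        p_caught = 1 - (1 - p) ** n
        print(f"audit rate {p:.0%}, {n:3d} samples: "
              f"P(audited at least once) = {p_caught:.1%}")
```

Even a 5% audit rate catches nearly everyone at least once over a hundred samples, which suggests the unpredictability of the audits matters more than their rate.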
Note that you don't have to make all tests undetectable; having some tests be detected (especially if it's not obvious that they were made detectable on purpose) both reminds testees of them, and lets you spot people who behave differently when tested than in real life. (This in turn can help you notice when people detect the tests you're trying to keep secret, assuming there's enough testing going on.)
I had a similar idea, but I'm still not sure about it. Succeeding in Real Life does seem like a good measure, to a point. How could one gauge one's success in real life, though? Through yearly income, or net worth? What about happiness or satisfaction?