Well, you asked for DUMB ideas, so here's mine. It has the advantage that I'm sure no one else will suggest it. This is based on an accidental discovery (so far as I know, unpublished) that one can compare two arbitrary documents for similarity (even if they are in different word-processor formats) by running them both through a recognizer built out of a random state machine and comparing bit masks of all the states traversed. The more content the documents share, the more states will be traversed in both.
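The comment is light on details, so here is a minimal sketch in Python of one way such a recognizer might work. Everything concrete is an assumption: the state count, the byte-level alphabet, restarting the machine at each token so that shared words land on shared states, and Jaccard overlap as the similarity measure (the comment says "bit masks"; a set of visited state indices carries the same information).

```python
import random

def make_random_dfa(n_states=4096, alphabet_size=256, seed=0):
    """Random transition table: state x input byte -> next state."""
    rng = random.Random(seed)
    return [[rng.randrange(n_states) for _ in range(alphabet_size)]
            for _ in range(n_states)]

def visited_states(dfa, text):
    """Restart at state 0 for each token so identical tokens traverse
    identical states regardless of surrounding context (my assumption)."""
    seen = set()
    for token in text.lower().split():
        state = 0
        for b in token.encode("utf-8"):
            state = dfa[state][b]
            seen.add(state)
    return seen

def similarity(dfa, doc_a, doc_b):
    """Jaccard overlap of the two visited-state sets."""
    a, b = visited_states(dfa, doc_a), visited_states(dfa, doc_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

dfa = make_random_dfa()
print(similarity(dfa, "how green is the smell of bacon",
                 "how green is the smell of toast"))   # high overlap
print(similarity(dfa, "how green is the smell of bacon",
                 "entirely unrelated sentence here"))  # low overlap
```

Because the machine never inspects formatting, only the byte stream of the words, two documents in different formats but with shared content would still light up many of the same states.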
So, let's assume we have a panel of highly rational individuals who serve as our control group. We generate a random multiple-choice questionnaire consisting of nonsensical questions and answers. Things like:
1) How Green is the Smell of Bacon?
a) 7.5
b) Neon
c) Introspection
d) Larger
You then look for correlations in how your panel of experts chose their answers and see if there is a common pattern. You then score students who take the test based on how similar their answers are to that common pattern.
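The comment doesn't spell out the scoring rule, so here is a minimal sketch under invented data: each question gets a panel answer distribution, and a student's score is their average agreement with it.

```python
from collections import Counter

# Hypothetical panel data: one row per rational panelist, one column per
# nonsense question, entries are the options they chose.
panel = [
    ['a', 'c', 'b', 'a'],
    ['a', 'c', 'd', 'a'],
    ['a', 'b', 'b', 'a'],
    ['b', 'c', 'b', 'a'],
]

def answer_distributions(panel):
    """Per question: the fraction of panelists choosing each option."""
    n = len(panel)
    return [{opt: cnt / n
             for opt, cnt in Counter(row[q] for row in panel).items()}
            for q in range(len(panel[0]))]

def score(student, dists):
    """Mean panel agreement: 1.0 = always picks the unanimous panel
    choice, 0.0 = never matches any panelist."""
    return sum(d.get(ans, 0.0) for ans, d in zip(student, dists)) / len(dists)

dists = answer_distributions(panel)
print(score(['a', 'c', 'b', 'a'], dists))  # close to the common pattern
print(score(['d', 'a', 'a', 'c'], dists))  # far from it
```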
Assuming this idea works at all, the advantage of this is that it would be extremely difficult to game. The disadvantage would be that it would penalize those who are significantly more rational than the 'norm'. It...
NOT CRAZY ENOUGH! We need EVEN STUPIDER ideas!
(Voted up for being the best try so far, though.)
Occasionally, well-respected community members could say things that are intentionally false, but persuasive and subtle, a la http://www.overcomingbias.com/2008/02/my-favorite-lia.html.
You get points for catching these mistakes. Perhaps you submit your busts privately to some arbiter so others have the same challenge.
Later, the error is revealed and discussed.
This would also have the benefit of causing everyone to read the most-respected members' writings ultra-critically, rather than sitting back and being spoon-fed.
One key thing this idea has is short term feedback. Frequent, rapid feedback is essential for getting good at this kind of thing. (IMO that's why economics is still so useless relative to the other sciences: the experiments take fifty years to run.)
For 'hot' political and religious biases, create materials in which apparent advocates of different ideologies or parties are arguing for some particular empirical prediction, e.g. about the relationship between different tax rate changes and economic growth, with some predictions being right and some wrong. The subject then needs to make his or her own prediction about some easily-verifiable but obscure empirical fact related to the argument, e.g. whether a graph of GDP and tax rates matches Norway or Iceland.
Scoring would reflect the degree to which the ideological affiliation in the prompt biased the results. If it were being gamed, you might need to add in scoring for accuracy. Challenges would be producing a large enough inventory of test items, keeping them secret, and the need to tailor tests to locally popular ideologies or ideologies of interest.
More surveys that study the relationship between knowledge about verifiable facts and values. What sorts of information do those with different values tend to have, and what are the values of those whose knowledge covers the pet facts of all camps? There is a fair amount of this literature in political science aimed at the electorat...
People tend to compartmentalize. We need to bear in mind that anything we come up with that involves testing someone when they know they're being tested can only check how rational they can be if they put their mind to it, not how rational they are when they're not being tested.
The key is probably to test someone without letting them know you are testing them. If I ran a martial arts dojo and wanted to make sure my students were really super badass ninjas, I would give them a convincing-looking "test" that included things you would expect to see: strength, speed, form, technique, success in actual matches, etc.
This would have very little weighting in the actual grade, however. The real test would be some sort of surprise fight or fights where the student has no idea that the fight is actually one of the tests. Perhaps he (or she) is followed by the assailant until an opportunity to pick a fight arises.
The main advantage of the surprise test is that it is much harder to game. Imperfect metrics are much more likely to say something meaningful about the student in this surprise situation than if the student knows the test is coming.
When it comes to the rationality dojo, there are numerous normally easy-to-game heuristics that could be used, for example:
I think that the most important skill a rationalist can have is the ability to assess the quality of other rationalists, and to participate effectively in team projects. A measurement of individual rationality has to include how well a randomly selected team including that individual performs on team rationality tests.
So, I think that a rationalist 'decathlon' would consist of a variety of competitions between individuals and small teams including math/logic problems, general knowledge tests, cooperative and non-cooperative game theory games, prediction markets, and engineering challenges (egg drops, programming robots to compete in some arena, etc.)
But then there would be a second level, in which individuals and teams would compete in a prediction market in which they observe (by video recording) the deliberations of other teams on first-level problems and bet on their relative performance.
And even a third level, in which individuals observe the deliberations of second-level teams and bet on their performance in that second-level prediction market.
There are a variety of other things that might be interesting to measure - for example, what team sizes perform best, whether individual rationalism and team-participant rationalism are different skills, and whether team performance is best predicted by strongest member, average member, or weakest member.
I'm not sure why "teaching to the test" is so disparaged for its effects on the learning process. Obviously that is a different use for tests than evaluation of ability, as is the main goal here.
Studying for the LSAT taught me to feel genuine physical unease when I read a bad argument, then be calm again by the next problem. It's very hard to turn that off when reading the newspaper.
The third stage of my growth as a rationalist was discovering this site. I no longer go through the day thinking of things I read and hear: "Wrong (fallacy), wrong (incorrect premise), wrong (fallacy), true (but irrelevant)." Now it's more like: "Wrong (fallacy), not even wrong (internally inconsistent), wrong (map/territory confusion), wrong (fallacy), not even wrong (argument from definition)."
I propose thinking of ways to hijack the human mental machinery as an alternative to overcoming it, akin to what evolution does.
Hrm... Well, one initial notion I have is along the lines of this: Rationality training should improve how good one can become at other stuff, or at least improve ability to gain skills/etc in other fields.
So, maybe tests could be something along these lines: find various subjects/fields a student is unfamiliar with, and basically assign them to "get some knowledge and skill in this field."
How efficiently students can basically bootstrap up into something they're unfamiliar with should vary with their rationality, right? So something like this may be a starting point.
(Yes, I can see a bunch of details that would need to be worked out, but seems to be that this notion may at least be somewhere to start for developing rationality tests.)
Organize large games/contests where a lot of candidates are locked up in an area, and have a finite time to reach a certain point / find a certain object.
The exact rules would be specially designed each time for that year's challenge, by a group of rationalists and game designers. So the details would vary, but some common themes would be:
For example, the candidates are blindfolded and brought into a large underground circular room, whose only unlocked exits are twenty slides along on the edge (so, one-way exit only). The goal is to take the exit that's due north.
Or, the players are dropped in a maze, and each player is given twenty balls with his name written on them. In the maze are tall glass tubes in which the players can drop their balls. The players know that at the end of the game everyone gets points for the balls with his name that are in "good" tubes (from 10 to 1 points, depending on whether his ball is at the bottom or top - only ten balls fit in a tube), and loses points for balls in ...
Robert Mager, in various books, including "Preparing Instructional Objectives", suggests working backward from evidence that would make you conclude that someone is, e.g., a Bayesian Master Rationalist, to the tests (and instructional objectives) for a course of instruction intended to turn someone into a Bayesian Master Rationalist (or whatever you want to turn them into).
Compile a large enough database of historical events that nobody could memorize more than a fraction of it. For the test, choose a few events at random, describe the initial conditions and ask the candidate to predict the outcomes.
Carry around a notepad and form probabilistic opinions on lots of little questions whose answers you can find out soon after. Record the probabilities you assigned to the correct answers, and where applicable add tags like "politics", "project completion", "my social status", "trivia". Put it all into a spreadsheet or something and see whether you're miscalibrated globally and for different tags.
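A minimal sketch of the spreadsheet step, with invented example records; the tuple format and the tag names are just assumptions:

```python
from collections import defaultdict

# Hypothetical notepad entries: (tag, probability assigned, came true?)
records = [
    ("politics", 0.9, True),
    ("politics", 0.8, False),
    ("project completion", 0.95, False),
    ("trivia", 0.6, True),
    ("trivia", 0.7, True),
]

def calibration_report(records):
    """Mean stated confidence vs. actual hit rate, globally and per tag.
    A gap between the two columns indicates miscalibration."""
    groups = defaultdict(list)
    for tag, p, correct in records:
        groups["ALL"].append((p, correct))
        groups[tag].append((p, correct))
    for tag, rows in groups.items():
        mean_p = sum(p for p, _ in rows) / len(rows)
        hit_rate = sum(c for _, c in rows) / len(rows)
        print(f"{tag:20s} stated {mean_p:.2f}  actual {hit_rate:.2f}"
              f"  (n={len(rows)})")

calibration_report(records)
```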
Here's a stupid idea: Evaluate people by auditing their domiciles. I've read (and from personal experience, I believe it) that you get really solid insight into someone's personal qualities by inspecting their home, as good as interviewing them and all of their friends and family. (I googled a bit, but I can't find the source.)
Anyway, it can probably be gamed.
Here's an immoral one: crack a rationalist.
Most, if not all, human minds are vulnerable to hacking, e.g. by cults, religions, pseudoscience, etc. The minds of rationalists should be harder to hack than others.
Make a copy of a (would-be) rationalist, subject the copy to significant emotional stress, and then send missionaries his way.
The myths carried by the missionaries should be invented for the challenge so everyone can agree that they are false, but should, of course, be significantly more plausible than today's religions.
Make a copy of a (would-be) rationalist, subject the copy to significant emotional stress, and then send missionaries his way.
Moral qualms aside, we should probably have a back-up plan just in case we don't solve human uploading before we want to start testing.
I'll be honest -- my life has taken a sharp downturn since I deconverted. My theist girlfriend, with whom I was very much in love, couldn't deal with this change in me, and after six months of painful vacillation, she left me for a co-worker. That was another six months ago, and I have been heartbroken, miserable, unfocused, and extremely ineffective since.
Perhaps this is an example of the valley of bad rationality of which PhilGoetz spoke, but I still hold my current situation higher in my preference ranking than happiness with false beliefs.
You have my sympathy and my praise.
If anyone's unusually good at deconversions, there might be a market for deconversion attempts aimed at the friends and family of atheists.
Thank you. You taught me (a large chunk of) everything I know, so that means a lot.
Honestly, thinking back, I suspect the best opportunity I ever had to deconvert her was when I myself did not yet identify as atheist -- when the crisis of faith was still in full swing. I'd have been perceived as sharing my doubts, rather than as "attacking" her with arguments.
Of course, back then I feared atheism -- I saw it as something terrible happening to me, that I should avoid doing to her. If I'd done a better job of leaving a line of retreat, I might have made better choices -- I might have shared each doubt as it occurred to me, instead of winding up 30 inferential steps removed from the woman I loved.
(And no, explaining that there is an inferential distance between you greater than is likely to be encountered in the ancestral environment really does not help in a fight)
I've been thinking lately of trying to write something addressed specifically to those beginning to question their religions. Life doesn't come with save points, but standing at the spot you went wrong, calling out advice to passers-by seems like the next best thing.
My empathies: that happened to me about 6 years ago (though thankfully without as much visible vacillation).
My sister, who had some Cognitive Behaviour Therapy training, reminded me that relationships are forming and breaking all the time, and given I wasn't unattractive and hadn't retreated into monastic seclusion, it wasn't rational to think I'd be alone for the rest of my life (she turned out to be right). That was helpful at the times when my feelings hadn't completely got the better of me. I suppose we can be haunted by stuff that is real.
There are two problems with measuring rationality, one of which is difficult but manageable, the other of which might be insurmountable. The first problem is that most conceivable tests of rationality require using information from other fields (such as finance, physics, or psychology), such that you can gain a considerable advantage on the test by studying things from that field which don't actually make you more rational. This can be solved with sufficient cleverness.
The second problem is that how rational someone is depends on how well they maintain it under stress. Pressure, fatigue, emotionally charged situations, alcohol, and/or deliberate manipulation, can make the best rationalists act completely insane. (About a year ago, I went on a reality television show, which was in a way like a series of rationality tests. I didn't do all that well, rationality-wise, but some people who should have known better did dramatically worse.)
Give the students sodium pentothal and ask if they're one of the top 50% of rationalists in their school. However many out of 200 say 'no', that's the school's percentage score. Schools scoring over 100% are thrown out for cheating.
Good rationalists, taken as a group, shouldn't be systematically optimistic.
They should be if they want to win in practice, as opposed to just getting theoretically-correct answers. See, e.g., the studies referenced in Seligman's "Learned Optimism", that show optimists consistently out-perform pessimists (i.e., realists) in a wide variety of fields and endeavors.
(Of course, Seligman's definition of optimism may be different from yours.)
Ask a thousand married rationalists of a given school to estimate the probability that their spouses have cheated on them. Confidentially ask their spouses if they have. Measure group calibration.
ETA: This applies to any potentially painful, but verifiable question. Ask them to draw a probability distribution over their date of death, or the longevity of their marriages. Estimate the probability of various kinds of cancer appearing over the next (5,10,15) years, etc. etc.
(I consider it drop-dead obvious that the task of verifying acquired skills and hence the power to grant degrees should be separated from the institutions that do the teaching, but let's not go into that.)
Were/are there any organizations just dedicated to verifying rationality skills? CFAR tried to do both, IIRC. It seems pretty bad if there haven't even been any attempts at this.
CFAR tried to do both IIRC.
According to me (who worked at CFAR for 5 years), CFAR did approximately zero rationality verification whatsoever.
Indeed, while that would be crucial to the kind of experimental rationality development that's described in the Craft and the Community, it isn't and wasn't a natural component of CFAR's functional strategy, which was something more like rationality community-building and culture-building.
[I hope to write more about what CFAR did and why, and how it differed from the sort of thing outlined in the Craft and the Community, sometime.]
Use small-scale, limited-term betting markets with play money.
Put the group of people you want to rank relative to each other into a room - without internet access. Everyone starts with 0 points. People are ranked on how many points they have at the end of the test.
Participants make bets (for points) with each other. There's a time limit for settling those debts; all bets made have to be specified in a way that clearly determines the winner within a fixed period after the end of the test. Of course, bets that can be settled immediately (e.g. on current tri...
Well, there's always the idea of using fMRI scans to determine if someone is thinking in 'rational' patterns. You stick them under the machine and give them a test. You ignore the results of the test, but score the student on what parts of their brains light up.
Clearly real life achievement correlates well with rationality, by definition. So an impractical but "gold standard guaranteed" test of rationality would be to wait until the person in question got to the age of, say, 50, and check to see whether they had made lots of money, or achieved other obvious life goals (fame, for example).
A more specific good test of rationality is the world of startups. Other than the OB/LW community, the entrepreneurial world is the closest to perfect rationality I have found. You could test someone in a month or so b...
I don't see what I thought were the obvious answers, so here they are. The foundations are elsewhere on the site, but they seemed missing from this list.
Reputational: Expect Bayesian masters to participate in other scientific fields. People who make more discoveries in other fields get more street cred among rationalists, especially when they can explain how rationalism helped them make the discoveries. Obviously, this is a long-term process that doesn't lend itself to improving the art quickly.
Experimental: This one's a two-step process. First, ask a larg...
"Piggyback" on other tests: ask people taking part in various tests (standardized exams, sport competitions, driving lessons, programming contests, art exhibitions - whatever) their chances of success (or their probability distribution over the range of results).
The test items should themselves be important enough that this would fit well within a university curriculum, so that it can be "automated" for a lot of things. The way of asking for predictions should be designed to maximize bad predictions: for example, the students are asked to give...
There is a recent trend of 'serious games' which use video games to teach and train people in various capacities, including military, health care, and management, as well as traditional schooling. I see no reason why this couldn't be applied to rationality training.
I always liked adventure style games as a kid, such as King's Quest or Myst, and wondered why they aren't around any more. They seemed to be testing rationality in that you would need to guide the character through many interconnected puzzles while figuring out the model of the world and how b...
I'm not sure if this has already been said, but does the "biases" literature not already contain a lot of perfectly good (although probably overly game-able) rationality tests? Just pick an experiment at random from Tversky and Kahneman and see how well the people in the school do.
Of course, there is a problem of people learning how to do some of these tests, but I'm pretty sure there are some that could be reworked so that they're pretty damned hard to pass even if you're well-acquainted with the literature. I'm thinking particularly those wher...
(haven't looked through comments, so this may have been suggested many times over)
In a college-level rationality course, it would be most appropriate for a portion of the grade to be determined by an artificial economy. That is, set up a currency and a (relatively even) starting distribution, add (probabilistic) opportunities for investment (perhaps linked to other important parts of the course) and, most importantly, make defection possible, anonymous and easy. Make it, as much as possible, like a vast array of one-shot (or known number of iterations) P...
I'm tempted to say "have them play poker", except it uses lots of domain-specific knowledge as well as general rationality. Perhaps if you could generate random games from a large enough space that people don't build up game-specific skills, and the games just end up testing general rationality? While poker-like games don't test all aspects of rationality, there are some things like "ability to keep making good decisions when frustrated / bored / angry" that these games test very well.
I think people would develop skill at the whole class of games...but at the same time, they would be improving their rationality.
Maybe there is a simple thing which rational people can't do - that they always get wrong.
Some not very good examples could be:
Skipping with closed eyes.
Telling a lie to a stranger without it being discovered
Saying - "Ooops, I' m wrong," quickly enough
Going to church and sitting thru' a whole sermon without getting very very upset
Multi-tasking
Irony
Understanding metaphors metaphorically.......
Another key feature of [edit] group rationality is the ability to not be swayed by what the social group thinks.
There are simple experiments (though I cannot think of the relevant keywords) where a test subject is put in a room full of confederates, all of whom estimate one line segment to be longer than another when the two lines are in fact the same length.
EDIT: Conforming to the group opinion (on average) increases the probability that you are right, thus improving individual truth-tracking. But adding more conformers to the LW community just screws i...
Reputational: D&D.Sci.
Experimental: D&D.Sci, with a consistent limit on time & resources used.
Organizational: D&D.Sci, with a consistent limit on time & resources used, using freshly-baked scenarios you know no-one has ever played before.
Limitations:
Misc. addl. reflections on the top...
Let's see...
Give them a motivation that is higher than the drive to game the test. I'm an immortalist. I don't want to die. I could deceive myself and others in many ways about my skills, purposes, beliefs, but in the end I can't do that at the expense of my chances of not dying. Find a similarly important purpose, something that might even be gamed, but for which gaming means you lose. Some real-life test.
Maybe, measure someone's capability to win. I have often wondered if being rational correlates with being successful in society. I can't be sure, though it see...
Send rationalists to do consulting work where real money is involved, for example techdirt:
The Techdirt group blog uses a proven economic framework to analyze and offer insight into news stories about changes in government policy, technology and legal issues that affect companies’ ability to innovate and grow.
Here you basically get paid for good insights. A "team" of rationalists could be sent in to dominate this particular arena, thereby validating the technique. Basically any online arena where real money can be made is fair game. Trading in Second Life, for example.
A friend of mine, the most consistently rational person I know of, once told me that his major criterion for whether a piece of information is useful is whether it can allow him to forget multiple other pieces of information, because they are now derivable from his corpus of information given this new fact.
I have a vague feeling that there should be a useful test of rationality based on this. Some sort of information modeling test whereby one is given a complex set of interrelated but random data, and a randomly-generated data-expression language. Scoring is ba...
An interesting idea would be to feed people the scientific data that ancient or medieval scientists had and see whether they reproduced all the incorrect but (given the limited knowledge) plausible theories that were invented.
This would work especially well on the vast numbers of people in our society who don't know any science anyway.
In fact just finding some sufficiently obscure area of current science would suffice. There's so much of it... How much of contemporary paleontology or inorganic chemistry could I re-invent?
I once succeeded in deriving the...
Hmm. Some off the top of my head:
Something the masters (and students) of each school can do to keep it real:
The Winning Tournament: Organise a yearly or so event. A group of clever, evil people selects and creates a number of "games" or tests, if you'd rather. Wannabe masters of rationality can compete against each other for the title, pride and glory.
The types of games and tests should be kept varied. Some could be contests where participants randomly compete against each other, others might be battle royales where people can form alliances and all around try as hard as they can t...
Stupid idea: Have a handful of students from each school volunteer to be assigned extremely difficult, real-world tasks, such as "become an officer at Microsoft within the next five years". These people would be putting any other of their life plans on hold, so you'd need to incentivize them with some kind of reward and/or sense of honor/loyalty to their school.
I doubt a few minutes of pondering will provoke any significantly insightful thoughts, but on the off chance that they do here's what I've got:
A major pitfall of most tests is that they can end up examining a wide variety of confounding variables. For example if the test for rationality is based on a written prompt then it selects against those with dyslexia in spite of their rationality. If it's based on a spoken prompt then it selects for those with similar accents to the test-giver, or against those who had it read to them in a strange wa...
Two ideas I got after 5 minutes (by the clock :)) of thinking.
If the tests are stressful and mentally (and possibly physically) exhausting, then even if it is still possible to prepare just for the test, it will not be as far from preparing for the "real thing". So, something like Initiation Ceremony could be done periodically and not just for initiation.
Give the students "stories" and see if they can make heads or tails of them. (How accurately can they guess the omitted details? Can they predict how it continues? Etc.) But, where can you...
I should note that per EY's request I haven't read the other comments before posting, so sorry if I duplicate anything.
The ability to make predictions in advance seems like one of the most important, and (assuming you have enough time) easiest to test, measures of rationality. For the experimental and potentially the organizational level, success on prediction markets seems like an obvious choice, which also has the benefit of showing how good the person is at avoiding certain money-related biases. There would of course need to be some ...
Maybe something that tests "certainty faking"? I really don't know how to construct it; perhaps use a FACS test to see how much a person is trying to convey that they're very certain of something when they aren't. That would just be conscious faking, of course; you'd still need something to assess when someone is expressing their feeling of certainty vs. the data. Maybe something like Texas Hold 'Em, except with bets being placed on how accurate the probabilities are (e.g. randomized variations of situations like the cancer scenario at EY's B...
I'm reminded of your own introduction to Bayes. Even a really good test won't do a darn bit of good if rationalists are vanishingly rare.
There are lots of proposals which basically say, let somebody predict the development of a situation they're previously unfamiliar with. But that'll probably be very heavily a test of IQ, and while rationality would certainly help your performance in such scenarios, it seems to me that IQ will regardless be a bigger factor. Same with using real-life performance as a factor.
I'm not opposed to using such scenarios, and I proposed something like that myself, but I do think that the scenarios have to be specifically designed so that they're likely to trigger known biases (even if in a subtle way). You can't just use totally random historical events or police cases.
I get the feeling that the real problem here is repeatability. It's one thing to design a test for rationality, it's another to design a test that could not be gamed once the particulars are known. Since it probably isn't possible to control the flow of information in that way, the next-best option might be to design a test so that the testing criteria would not be understood except by those who pass.
I'm thinking of a test I heard about years ago. The teacher passes out the test, stressing to the students to read the instructions before beginning. The ...
Like R.A.W. has said, "The more you see yourself acting like a cosmic schmuck, the less of a cosmic schmuck you will become." I think it is very important that the environment stresses awareness of moment-to-moment actions and thoughts. If not, I think decent application of the knowledge of rationality will be very hard indeed.
If this is an important aspect of your 'school', then I think it would be hard to game the system without actually learning what is supposed to be learned. This would especially be true when it is part of the reputation hierarchy. Sure, some could mimic to gain status, but others with actual awareness would see through them easily.
I seem to be years late to this party, but I've heard the LW culture isn't opposed to commenting on old posts. In the interest of "breadth" I'll answer anyway after at least five minutes of thought, without looking at the other answers first (though I've probably seen subsequent posts that have been influenced by this one by now).
So there are three categories of tests here. In order of strictness: those for masters, those for students, and those for employees?
There are many skills under the "rationality" umbrella. Enumerate them and tes...
Generate a fantasy world with certain rules of magic. The goal is to figure out precisely what those rules are, all the while working towards some end goal. Perhaps this could be run by a handful of game masters who know exactly what the rules are supposed to be, or the magic could be handled by a computer program, so no one knows for sure. Players would promise to keep the rules secret once figured out. This would encourage proper hypothesis testing and thoughtful use of evidence, especially if resources are limited. I suspect this wouldn't just be a one-off, but a repea...
If rhetoric is the dark arts, then rationalists need a defense against the dark arts.
I've always seen debates as a missed opportunity for rationality training/testing. Not for the debaters, but for the audience.
When you have two people cleverly arguing for an answer, that is an opportunity for the audience to see if they can avoid being suckered in. To keep things interesting, you could randomize the debate so that one, both, or neither debater is telling the truth. (Of course, in the toughest debates, the debaters are both partially true and the audience n...
http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/
I can't help but think the focus on competition is a fairly bad idea. If a student can raise the entire group's score by 10%, that is far more commendable than raising their own individual score by 20%. We don't want high-scoring individuals, we want to win. That's something which is quite often done as part of a group, in the real world.
When I started thinking about this I realized that testing for rationality is pretty complicated! The hardest part about it is determining the 'most rational person' in a group. If the 'most rational person' is a member of the group being tested, how can the testers determine who they are if the testers are less rational than them? Does a tester's ability to recognize the best of the test group depend on whether the tester is biased, and how they are biased? And who would test the testers, then?
Regardless, here's an idea or two.
A Multilevel test: Biase...
I just suggested a relevant rationality test here: http://www.overcomingbias.com/2009/03/how-spend-rationality-test.html
Experimental methods for measuring rationality can be converted into organizational tools through the measurement of biological traits that are minimally malleable. For instance, you could map genomic and brain structure information to experimental tests of particular biases or bias-promoting traits, and then use those biological markers as ungameable indicators. Unfortunately, while this could help organizations get more rational employees (possibly deriving economies of scale), it would be much less useful for measuring improvement.
Vladimir Gritsenko mentioned Rational Debating on an old post. It looks like it would be a useful addition to the list.
Make a very detailed audit of the habits, hobbies, books, music, shoes, watch, cell-phone etc. etc. etc. of the top/average/bottom contributors to LW. Are there correlations? Match to new candidates.
Here is a stupid one: Detective stories. Like Encyclopedia Brown, but subtler. And with false leads. I don't think normal mass-market detective stories would work, because they may try to deliberately choose an irrational answer to surprise you. But special ones written by rationalists for rationalists could be a fun distraction if nothing else.
Role play. Build a corpus of fictional scenarios too big to memorize and present a random subset in the test.
Also, standard tests on rationality lore and mathematics would work to a degree because they're correlated with actual rationality.
What we need is a rationality equivalent of a katana or a machine gun. One for each student, some basic training and even ninja masters go down pretty quickly (unless they really can dodge bullets). Occupatio "weapon of mass rationality".
Look at the person's successes, taking into account his initial conditions. If he does science for humanity, then being a Nobel laureate counts as success. If he is an egoist, we should look at his happiness.
I ended up going in a completely different direction with this: I intend to test my OWN rationality, and I figure that if rationality is about WINNING, about being EFFECTIVE, then I ought to find direct measures of the things I want, and test myself in 6 months or so (timeframe dependent on the toughness/length of the task). This will, in other words, be a test of my ability to understand the territory insofar as that understanding makes me more effective at a given task.
The things in particular, a few subgoals of my personal life-optimization:
An idea that might be both unsustainable and potentially dangerous, but also potentially useful, is to have someone teach as a final test. Less an exam and more a project (with oversight?). Of course, these trainees could be authentic or disguised testers.
Problems with this idea (non-exhaustive):
- Rationality doesn't necessarily make you good at teaching,
- Teaching the basics badly is likely to have negative effects on the trainee,
- This could potentially be gamed by reformulated regurgitation.
So... What behaves differently in the presence of Rationa...
One large theme I've seen in biases is the tendency to affirm positions you already hold by treating evidence and arguments unevenly.
So my idea is to purposefully select arguments from both sides of highly controversial issues such as gun control, abortion, or whatever is polarizing at the time. Then riddle the arguments with mistakes, and challenge the student to find errors on both sides of the issues.
Possibly there could be a bank of possible rational missteps that they must dole out to the different arguments, or a free-form analysis that has to be well justified and is subjectively judged by a group of rationalists.
Take any cognitive bias that is supported by previous experimental data. Replicate to confirm.
Subject students to various training regimens, with control group.
Test again for presence of cognitive bias, note any improvements.
Repeat, repeat again for other known cognitive biases.
Not perfect, but it should be enough to make some headway.
Also, just subject a student to a battery of tests (ideally creative stuff involving real-life scenarios, not just written tests) to look for all sorts of cognitive biases.
Should the student try to game the system by learning, well, great!
How about a test that causes people to build and use mental models and formulas? People are asked to estimate primarily numeric facts based on other facts. In each question, give people a set of "measured facts"* and ask them to estimate more relevant facts/consequences via back-of-envelope calculations (or a computer program, for more precision). But unlike a normal math word problem, set up the test so that, say, 2/3 of the questions cannot be accurately estimated with only the information given. Among that 2/3, half can be accurately estimat...
Erm... let me be Brennan and go with the "obvious". Find problems whose solutions are known in some field but not widely, provide the initial data and results of additional experiments on request (with "too expensive to perform" being a possible result). Then have two measures:
1) Someone who is _also not an expert_ checks solutions for, well, everything you discuss here. Biases, effort, mysterious answers - you name it. (For effort, you might need to register when every thought was written, not just what it was.)
2) An expert checks the dataset used - which of the actually-conducted experiments the students failed to request, and which of them were actually useful.
The 'test even if gamed' reminds me of a labyrinth. Suppose there are several ways of reaching the end, and the participants can't know which way they are set upon, because it is chosen randomly. They are asked questions from outside of their domain of knowledge (it would need a big database to pick from), constructed in such a way that it is impossible to pick the right answer without knowing about various cognitive biases (e.g., the conjunction fallacy etc.) The questions can be independently rated for apparent difficulty, and masters will be given the h...
How about asking people:
i) What is rationality for you?
ii) How rational are you?
iii) How will you prove it?
The askee can then ask the asker: Do you agree?
And then we have a conversation. Both parties have to agree on the final score.
Basic true/false test; reverse stupidity is not intelligence, but rationalists tend to have fewer false beliefs. Taking the test upon entering the school would prevent the school from teaching to the test, and the test could be scored on multiple areas, of which one is a cunningly disguised synonym for rationality and the others are red herrings, so that irrationalists have no incentive to lie on the test.
Rationality overlaps so many different fields that it does not seem very plausible to be able to test rationality specifically. Political and ethical debates, though, seem to contain a lot of elements dealing with rationality.
Although this post is old now, I'll still enter my ideas (good or bad) before reading the other comments...
Video games. Expertise in one video game is not good enough; ideally, the speed rationality of 100 people could be tested on a new game none of them had seen before.
Along similar lines, ask the 100 people to cooperate in a large artificial project which requires that number of people, such as the manufacture of a complicated item invented for the day. It should be complex enough that cooperation is needed; i.e., involve several complex skills such as a
Try to simulate the apparently supernatural / create other hoaxes and see who can debunk them. There is enough domain-specific knowledge involved that it wouldn't work too well with individuals, particularly if they have a motivation to game the system. Still, if a school doesn't generally increase its students' ability to deal with the apparently supernatural and false information, it's almost certainly a bad sign.
Experimental and Organizational tests seem to be the most important test types here; if the students and methods are able to show they're capable, and are measurably better than the students of another craft, then their school is obviously doing something better than other schools anyway, no Reputational test needed. So I'll concentrate on those.
What do we need for an experimental test? We need a way of comparing the strengths of students and ideas, to see which are stronger. The problem here is that there's not really a standard unit of rationality. I...
I might use something similar to The Book from Neal Stephenson's Anathem, but less deliberately harmful and more confusingly-related-to-reality. Something where, in order to succeed, you must Change Your Mind, at least partially. If possible, include a real scenario where you must apply the knowledge in a charged context, where people are most prone to irrationality.
Hmm... To me, a master of rationality should seem able to debate fairly well with the heads of other powerful schools, such as philosophy and physics. I myself can pose some interesting questions to physics-knowledgeable people, and refute offhand philosophical stupidity in stride.
To test students for rationality, I guess it is easier to test for debiasing, by running classical bias experiments?
I need to mull this over with my fellow Bayesian conspirators.
I strongly suspect that there is a possible art of rationality (attaining the map that reflects the territory, choosing so as to direct reality into regions high in your preference ordering) which goes beyond the skills that are standard, and beyond what any single practitioner singly knows. I have a sense that more is possible.
The degree to which a group of people can do anything useful about this, will depend overwhelmingly on what methods we can devise to verify our many amazing good ideas.
I suggest stratifying verification methods into 3 levels of usefulness: reputational, experimental, and organizational.
If your martial arts master occasionally fights realistic duels (ideally, real duels) against the masters of other schools, and wins or at least doesn't lose too often, then you know that the master's reputation is grounded in reality; you know that your master is not a complete poseur. The same would go if your school regularly competed against other schools. You'd be keepin' it real.
Some martial arts fail to compete realistically enough, and their students go down in seconds against real streetfighters. Other martial arts schools fail to compete at all—except based on charisma and good stories—and their masters decide they have chi powers. In this latter class we can also place the splintered schools of psychoanalysis.
So even just the basic step of trying to ground reputations in some realistic trial other than charisma and good stories, has tremendous positive effects on a whole field of endeavor.
But that doesn't yet get you a science. A science requires that you be able to test 100 applications of method A against 100 applications of method B and run statistics on the results. Experiments have to be replicable and replicated. This requires standard measurements that can be run on students who've been taught using randomly-assigned alternative methods, not just realistic duels fought between masters using all of their accumulated techniques and strength.
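A minimal sketch of what that 100-vs-100 comparison could look like, with everything invented for illustration: the noise level, the assumed skill difference between methods, and Welch's t statistic as the significance check.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical setup: 200 students randomly assigned to teaching method
# A or B, then given the same noisy rationality measure (higher = better).
def measure(true_skill):
    return true_skill + random.gauss(0, 10)  # individual scores are noisy

scores_a = [measure(50) for _ in range(100)]
scores_b = [measure(53) for _ in range(100)]  # assume B really is a bit better

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples."""
    se = math.sqrt(statistics.variance(xs) / len(xs) +
                   statistics.variance(ys) / len(ys))
    return (statistics.mean(ys) - statistics.mean(xs)) / se

t = welch_t(scores_a, scores_b)
print(f"mean A = {statistics.mean(scores_a):.1f}, "
      f"mean B = {statistics.mean(scores_b):.1f}, Welch t = {t:.2f}")
# |t| around 2 or more suggests the method difference is not just noise.
```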
The field of happiness studies was created, more or less, by realizing that asking people "On a scale of 1 to 10, how good do you feel right now?" was a measure that statistically validated well against other ideas for measuring happiness. And this, despite all skepticism, looks like it's actually a pretty useful measure of some things, if you ask 100 people and average the results.
But suppose you wanted to put happier people in positions of power—pay happy people to train other people to be happier, or employ the happiest at a hedge fund? Then you're going to need some test that's harder to game than just asking someone "How happy are you?"
This question of verification methods good enough to build organizations, is a huge problem at all levels of modern human society. If you're going to use the SAT to control admissions to elite colleges, then can the SAT be defeated by studying just for the SAT in a way that ends up not correlating to other scholastic potential? If you give colleges the power to grant degrees, then do they have an incentive not to fail people? (I consider it drop-dead obvious that the task of verifying acquired skills and hence the power to grant degrees should be separated from the institutions that do the teaching, but let's not go into that.) If a hedge fund posts 20% returns, are they really that much better than the indices, or are they selling puts that will blow up in a down market?
If you have a verification method that can be gamed, the whole field adapts to game it, and loses its purpose. Colleges turn into tests of whether you can endure the classes. High schools do nothing but teach to statewide tests. Hedge funds sell puts to boost their returns.
On the other hand—we still manage to teach engineers, even though our organizational verification methods aren't perfect. So what perfect or imperfect methods could you use for verifying rationality skills, that would be at least a little resistant to gaming?
(Added: Measurements with high noise can still be used experimentally, if you randomly assign enough subjects to have an expectation of washing out the variance. But for the organizational purpose of verifying particular individuals, you need low-noise measurements.)
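To illustrate the parenthetical with assumed numbers (a per-measurement noise of 10 points and a true difference of 3 points): the chance of correctly ranking two individuals from one noisy measurement each is barely better than a coin flip, while two randomly-assigned groups of 100 come out in the right order almost every time.

```python
import math
from statistics import NormalDist

sigma = 10.0   # assumed noise on a single measurement
effect = 3.0   # assumed true underlying difference

def p_correct_ranking(n):
    """P(the truly-better side also scores higher) when comparing two
    groups of n subjects each; n=1 is the individual case."""
    sem = sigma * math.sqrt(2.0 / n)  # std. error of the difference of means
    return NormalDist().cdf(effect / sem)

for n in (1, 10, 100):
    print(f"n = {n:3d}: P(correct ranking) = {p_correct_ranking(n):.2f}")
# n=1   -> ~0.58 (organizational use: nearly useless)
# n=100 -> ~0.98 (experimental use: the noise washes out)
```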
So I now put to you the question—how do you verify rationality skills? At any of the three levels? Brainstorm, I beg you; even a difficult and expensive measurement can become a gold standard to verify other metrics. Feel free to email me at sentience@pobox.com to suggest any measurements that are better off not being publicly known (though this is of course a major disadvantage of that method). Stupid ideas can suggest good ideas, so if you can't come up with a good idea, come up with a stupid one.
Reputational, experimental, organizational:
Finding good solutions at each level determines what a whole field of study can be useful for—how much it can hope to accomplish. This is one of the Big Important Foundational Questions, so—
Think!
(PS: And ponder on your own before you look at the other comments; we need breadth of coverage here.)