Since joining the Official Idealized Rationality Organization as a fresh novice six months ago, you have been training and studying hard, and today you are ready to take your exam to advance to the first rank of recognized skill and knowledge. 

You expect the test to be difficult. Here in the Future, card-carrying Rationalists are widely known to be formidably effective, clear-thinking people. Rationalists are capable of impressive feats individually, and accomplish miracles when working in groups. Part of cultivating such a strong reputation, obviously, involves setting high standards.

You step out of your flying car, enter the testing center, and take your seat. The proctor hands you a thick sheaf of test questions. Turning the first page, you read the first question ...

(What sorts of questions would you hope to see on such a test? If not "test questions" per se, what other sorts of requirements would make sense?)

(Another way of looking at this might be: design a test that you would be proud to say you passed.)


11 Answers

Jan Czechowski

200

I've recently been considering whether problems like the ones from the International Olympiad in Linguistics (https://ioling.org/) could be a good exercise or test for some aspects of Rationality. See example problems here: https://ioling.org/booklets/iol-2018-indiv-prob.en.pdf Usually you are given 10-20 sentences in some obscure or ancient language, with translations, and are tasked with translating some more lines. The generic strategy would be:

  • Generate some hypotheses about how the language works
  • Think of base probabilities and ways to test the ideas
  • Test selected ideas using given data
  • Update your beliefs
  • Repeat

In my opinion, this kind of problem is a sweet spot between exactness and a "soft" understanding of the world and of language in general. Physics and mathematics problems might sit too far toward the "exactness" end of the scale.
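
A minimal sketch of the update step in that loop, in Python. The hypothesis space, likelihoods, and observations here are all invented for illustration; a real IOL problem would supply the data.

```python
# Toy Bayesian update over competing hypotheses about a puzzle language.
# All hypotheses, likelihoods, and observations below are made up.

def normalize(dist):
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

# Prior over hypothetical word-order rules.
beliefs = {"subject-object-verb": 0.4, "subject-verb-object": 0.4, "verb-first": 0.2}

# P(observed sentence pattern | hypothesis), chosen arbitrarily for the demo.
likelihoods = {
    "subject-object-verb": {"verb-final": 0.90, "verb-initial": 0.05},
    "subject-verb-object": {"verb-final": 0.10, "verb-initial": 0.10},
    "verb-first":          {"verb-final": 0.02, "verb-initial": 0.90},
}

def update(prior, observation):
    """One pass of the generate-test-update loop: P(H|D) is proportional to P(D|H) * P(H)."""
    return normalize({h: p * likelihoods[h][observation] for h, p in prior.items()})

for pattern in ["verb-final", "verb-final"]:  # two observed sentences
    beliefs = update(beliefs, pattern)
print(beliefs)  # probability mass concentrates on subject-object-verb
```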

I don't really expect that the Rationalist Test as you described it would actually contain IOL-like problems, but hopefully this gives some inspiration.

Richard_Kennaway

160

Some other models for admission to an inner circle have been used.

In the old days, the way you got to be a spy for Her Majesty's Government was to be the "right sort": sound opinions, lively mind. One day you might receive the metaphorical tap on the shoulder, and someone would suggest that a sound chap like yourself might be interested in an opportunity to serve your country in certain ways. This was not something you could explicitly put yourself forward for, but if you cultivated the right connections and the right attitudes, and impressed your tutors at Oxford or Cambridge, you might find yourself invited into the agency that was never officially acknowledged to exist.

The entrance test for certain esoteric schools (it is said) was to discover their existence and to find your way to them on your own initiative.

In the old days, the way you got to be a spy for Her Majesty's Government was to be the "right sort" [...]

The freemasons and other semi-occult organisations also recruit that way.

ChristianKl

140

The main problem with this scenario is that a real rationality organization wouldn't create a test on which you can optimize for a high score by training and studying hard for the test itself. You don't want people trying to guess the teacher's password.

The whole point of creating a good test is to close off ways of scoring highly by studying for the test instead of engaging with real-life issues. That suggests something like an internship model, where you decide, based on performance during the internship, whether or not to make someone a card-carrying rationalist.

You can take the Alignment Forum as an example of a rationalist organization with selected membership. You can get in by writing good AI risk posts on LessWrong, by having formal credentials (i.e., another institution vetted you), and likely a few other ways.

There's no test you can study and train hard for to get into the Alignment Forum.

Besides the Alignment Forum, CFAR's candidate filter or the one used in the LWCW might be other existing examples of selecting people as rationalists.

gilch

140

Raising the Sanity Waterline seems related. Points mentioned there include

  • ...what constitutes evidence and why;
  • ...Occam's Razor;
  • ...how the above two rules derive from the lawful and causal operation of minds as mapping engines, and do not switch off when you talk about tooth fairies;
  • ...how to tell the difference between a real answer and a curiosity-stopper;
  • ...how to rethink matters for themselves instead of just repeating things they heard;
  • ...certain general trends of science over the last three thousand years;
  • ...the difficult arts of actually updating on new evidence and relinquishing old beliefs;
  • ...epistemology 101;
  • ...self-honesty 201;
  • ...etcetera etcetera etcetera and so on.

Any one of these might be the basis for a good yellow-belt level test.

This would work as an answer.

gilch
I'll move to answers then.

Alex Power

110

The answers suggesting "this shouldn't be a test you can study for" seem very misguided. This is a yellow belt, not a black belt.  If you think you can become a card-carrying Rationalist without studying books, you are mistaken.

I would expect a battery of short-answer questions, maybe 6 hours/75 questions.  Prove the Pythagorean Theorem.  What is Bayes' Theorem?  Is Pascal's Wager accurate? What impact do human emotions have in decision making?  If humans evolved from monkeys, then why are there still monkeys?  Was George Washington a myth?
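
For the "What is Bayes' Theorem?" item, the expected short answer would presumably be the standard statement, with H a hypothesis and E the evidence:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$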

There is an aesthetic desire for a more flashy test, a gom jabbar for the rationalist.  I would expect that would be an intermediate test between the yellow belt and the black belt.  The various "Explain this mysterious system" questions are good, so I'll suggest some puzzle where "what do you think you know and why do you think you know it" is the key to the solution.

110
  • Something along the lines of the CRT (Cognitive Reflection Test)
  • Calibration questions
  • In the Sequences, one of the beisutsukai rituals involves having to perform an on-the-fly Bayesian update mentally (and then resist being peer-pressured into a wrong answer)
  • Something to do with the planning fallacy
  • Confirmation bias - see the Harry/Hermione scene on the train
  • Something to do with overcomplicated plans/theories
  • "What grade will you get on this test?" -> graded on the accuracy. sort of a calibration-plus-humility question
  • something to test whether they can notice the "quiet strain" that signals something's wrong with a theory -- maybe successively presented with evidence that counters a theory they've formed, and see how long it takes them to figure it out
  • curiosity, some small thread that's out of place that they have to tug on instead of accepting the obvious solution. This is sort of like the CRT, but I'm thinking more of cases where they're given an explicit framework and there's something tiny out of place, rather than there being a wrong System 1 answer and a right System 2 answer
  • "name a time where you believed yourself to be correct with absolute or near-absolute confidence, and describe the process of how you came to realize that you were wrong"
  • an authority states something wrong, but subtly wrong, see if they can notice the incorrectness and disagree. "Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think, “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake."
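
A sketch of how the calibration items above might be scored, assuming binary trivia questions with stated confidences and Brier scoring (the question set in the example is hypothetical):

```python
# Brier score for calibration questions: mean squared error between stated
# confidence and actual outcome. 0 is perfect; 0.25 is what flat 50/50 guessing earns.

def brier_score(predictions):
    """predictions: list of (stated_probability_true, actually_true) pairs."""
    return sum((p - float(outcome)) ** 2 for p, outcome in predictions) / len(predictions)

# Hypothetical answer sheet.
answers = [
    (0.9, True),   # "90% confident the Nile is longer than the Danube"
    (0.6, False),  # "60% confident this city's population exceeds 5 million"
    (0.8, True),
]
print(brier_score(answers))  # lower is better calibrated
```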

I agree with ChristianKl that avoiding Goodharting will be a major challenge.

Edit: Please describe a future event or belief that you are 100% certain will happen or is correct.

Putting down anything that isn't a rejection of the question scores zero points. This is basic probability theory mixed with the ability to think for yourself (the question is framed to imply that 100% certainties exist).
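
One way to make the "zero points" rule precise, assuming a log scoring rule like the one in Donald Hobson's dice question below: a claim assigned probability 1 that turns out to be false scores

$$\log P(\text{outcome}) = \log 0 = -\infty,$$

an unboundedly bad result that no finite gains elsewhere can offset, so the only defensible move is to reject the question's premise.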

Connor_Flexman

30

Two tests I'd like to see:

  1. A test in which there are a number of Claims, like "We should put more effort into education". For each Claim, the test gives a number of subclaims and pieces of "evidence"; the sources are cited, but as with most things they might be misleading or wrong. You answer by responding to the core of the claim (not necessarily the explicit proposition) in a way that integrates the relevant evidence, calls out or at least skips the pieces that are irrelevant or misleading, grapples with the uncertainties, doesn't dodge the parts that might make you look bad, and shows a fitting attitude toward the uncertainties and pieces of evidence (not reveling in your uncertainty, integrating intuition and anecdotes naturalistically with escape-cruxes, not getting too detailed or too hand-wavy).

    (Tools trained: avoiding Misleads, turning a proposition into a nuanced tree of attitudes, [all the other rationality techniques]. I think this is a core part of rationality, because the most common way for rationalists to go wrong isn't making a big dumb math mistake; it's the entropy that steadily claws its way into your reasoning as you slide past the difficult, core, unspoken points and into the most attackable propositions stated per se. It's like the soft version of reading a news article that's Not Even Wrong: it just states a bunch of irrelevant statistics and backs them up with not-actually-evidence. That's why it's so hard to see when you're wrong, because the ways in which you're wrong are always Not Even Wrong.)
  2. An oral exam where someone talks to you about something that's sort of triggering or difficult for you (like education if you had a bad education), and it's not specifically about whether you're Wrong there, it's just about how the thing works. And they challenge some of your views, and you have to build a better understanding and attitude toward it on the fly.

    (Skills trained: hypothetical apostasy-lite. Healthy lines of retreat are arguably the core skill of rationality, a prerequisite for escaping or iterating on bad/stuck beliefs. The original versions of apostasies were mainly talked about in big flashy ways, where you only have your worldview challenged so hard a few dozen times in your life, but there are about a billion small areas where people can easily find that they have twisted beliefs, and it's more on the order of a half hour to start untwisting them, rather than requiring deep courage and total isolation. Admittedly this is hard to test and a little bit easy to game because the scoring would be so subjective and it's hard to tell how much actually changes in someone's mind, but there are some patches.)

One problem with testing/training rationality is that it's very hard to tell apart from object-level knowledge or IQ, and in fact a good deal of it is just the heuristics that you learn from object-level knowledge. The second test helps to separate these by isolating a place where you know you can do better without just grinding; the first test gives you a good measure both of the actual knowledge and of the rationality given some fixed amount of knowledge.

Donald Hobson

30

In front of you are several experimental results relating to an obscure physics phenomenon. There are also 6 proposed explanations for this phenomenon. One of these explanations is correct. The rest were written by people who didn't know the experimental results (and who failed to correctly deduce them based on surrounding knowledge). As such, these explanations cannot shift anticipation towards the correct experimental result. Your task is to distinguish the real explanation from the plausible-sounding fake explanations.

Donald Hobson

30

On your screens you should see a user interface that looks somewhat like a circuit simulator program. You have several different types of components available, and connectors between them. Some of the components contain an adjustable knob, some contain a numeric output dial, and some have neither.

These components obey some simple equations. These equations do not necessarily correspond to any real-world electronic components. To encourage thought about which experiment to perform next, you will be penalized for the number of components used. You are of course free to reuse components from one experiment in the next, but some experiments will break components. Most broken components will have a big red X appear on them. However, at least one component can be broken in a way that doesn't have any visual indicator. You may assume that all fresh components are perfectly identical. You may use a calculator. Your task is to figure out the equations behind these components. Go.
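
A minimal sketch of what one such component might look like under the hood. The transfer equation and breakage rule here are invented for illustration; the actual test would of course keep its equations hidden:

```python
import random

class Component:
    """A hypothetical component with a hidden transfer equation.

    The examinee only ever sees output readings; the made-up law below
    stands in for whatever the real test would conceal.
    """

    def __init__(self):
        self.broken = False
        self.shows_red_x = False

    def output(self, knob):
        if self.broken:
            return 0.0
        if knob > 7.5:
            # Overdriving breaks the component, occasionally with no visual indicator.
            self.broken = True
            self.shows_red_x = random.random() < 0.9
            return 0.0
        return 3.0 * knob**2 - knob  # the hidden (invented) equation

# The examinee's job: pick knob settings (experiments) that pin down the law
# with as few components as possible, while watching for silent breakage.
c = Component()
print([c.output(k) for k in (0, 1, 2, 3)])
```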

Donald Hobson

30

The environment rolls two standard six-sided dice, one red and one blue. The red die shows the number of identical copies of your agent that will be created. Each agent will be shown one of the two dice; this is an independent coinflip for each agent. The agents are colourblind, so they have no idea which die they saw; they just see a number. The agents must assign probabilities to each of the 36 outcomes, and are scored on the log of the probability they assigned to the correct outcome.

Write code for an agent that maximizes its total score across all copies.

Write code for an agent that maximizes its average score. 

Explain how and why these differ.
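
A sketch of both agents in Python, under the assumption that "maximize" means maximize the expected score. The only difference is whether each world is weighted by the number of copies being scored in it:

```python
from fractions import Fraction

OUTCOMES = [(r, b) for r in range(1, 7) for b in range(1, 7)]

def posterior(seen, weight_by_copies):
    """Probabilities over (red, blue) after seeing the number `seen`.

    Each copy sees red or blue with probability 1/2, so in world (r, b) the
    chance that a given copy sees `seen` is (1/2)*[r == seen] + (1/2)*[b == seen].
    The total-score maximizer additionally weights each world by r, because
    r copies each contribute a log score there.
    """
    weights = {}
    for r, b in OUTCOMES:
        w = Fraction(1, 2) * ((r == seen) + (b == seen))
        if weight_by_copies:
            w *= r
        weights[(r, b)] = w
    total = sum(weights.values())
    return {o: w / total for o, w in weights.items()}

def total_score_agent(seen):
    return posterior(seen, weight_by_copies=True)

def average_score_agent(seen):
    return posterior(seen, weight_by_copies=False)

print(total_score_agent(3)[(3, 5)], average_score_agent(3)[(3, 5)])
```

They differ because the total score counts each world once per copy, a weighting roughly analogous to SIA/"thirder" reasoning, while the average score counts each world once no matter how many copies it contains, giving the plain Bayesian posterior; the aggregation rule, not any disagreement about the evidence, drives the difference.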

JBlack

10

Uh wow. The first sample question at that link has five possible answers, and all are wrong for different reasons. I agree with their selected answer in that it's the least incorrect (it's possible that this is a source of disagreement), but it's still wrong (you can't logically conclude that this is a disagreement).

Response (D) is incorrect since Kim has not said or even strongly implied that medical applications are the most valuable achievements. Kim merely provides it as an example for pure research leading to saving lives. Kim may believe that other achievements are even more valuable (i.e. save far more lives), but chose medicine as an example due to the link being more direct or any number of other reasons.

So far I am not very impressed.

The rest of the questions have much more reasonable explanations. It seems unfortunate that the first (and most visible) question is also the most dubious.

5 comments

Rationalists are capable of impressive feats individually, and accomplish miracles when working in groups.

I'll believe it when I see it. Are there any real-life examples where previously ordinary people who mastered zen and the art of rationality "accomplished miracles"?

I think that part was meant to be fictional, as part of the hypothetical Future with the flying cars, etc.

I don't think we've come close to perfecting the Art yet, especially in groups. I feel like we've stalled somehow, years ago, and I'm not sure what to do about it.

I understand it's meant to be fictional, but probably less fictional than Harry Potter-style magic, in that it is assumed to be achievable without supernatural miracles. Still, the conjecture is that most people would measurably benefit from learning rationality, as opposed to, say, math or tarot cards, and one would expect these benefits to start showing up quite visibly after 10+ years of the community existing.

Still, the conjecture is that most people would measurably benefit from learning rationality, as opposed to, say, math or tarot cards, and one would expect these benefits to start showing up quite visibly after 10+ years of the community existing.

How useful was learning chemistry 10+ years after the chemistry community came into existence?

The assumption depends a lot on how many of the possible rationality techniques we have already discovered, and on whether, for the techniques we did discover, we actually got people to use them on a regular basis.

How useful was learning chemistry 10+ years after the chemistry community came into existence?

Good question. I don't know what reference class is appropriate here. I can't come up with other communities like this off the top of my head.

The assumption depends a lot on how many of the possible rationality techniques we have already discovered, and on whether, for the techniques we did discover, we actually got people to use them on a regular basis.

It does. One estimate is "what CFAR teaches", and I think it's quite a bit. Whether CFAR alumni are measurably better than their peers who didn't attend a CFAR workshop, I don't know. Do you?