Things that interest me:
I would not participate in activities that boil down to arbitrary left-brain problem solving.
"Doing impossible things"
Answer: Writing Your Hypothetical Apostasy
See Write Your Hypothetical Apostasy on Overcoming Bias.
Imagine, if you will, that the world's destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete. The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved. The catch is that the jury consists of earlier stages of yourself (such as yourself such as you were one year ago). Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form "trust me, I am older and know better" will be ineffective. Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed/partial/shallow/juvenile/crude/irresponsible/incomplete and generally inadequate your old cherished view is.
I'm not sure exactly how this fits into group rationality practice. I personally am always more motivated to write when it's something that I will publish, so having a place where we publish hypothetical apostasies could be useful for motivational reasons. It would also be useful because you'd get feedback on your thought process, although that point could be made for many other exercises.
Answer: Check My Understanding
Here's how it'd work. Suppose I want to improve my understanding of Aumann's Agreement Theorem. I would write up my thoughts, doing my best to explain what I know about it. Then other people would comment on what I'm missing and where I went wrong.
This seems useful for a few different reasons:
I was thinking that if the Sequences and other LW classics were a high school class, we could make something like an SAT subject test to check understanding/fluency in the subject. That could then become a badge on the site and potentially a good credential to have in your career.
The questions could be something like:
1. If a US citizen has a legal way to save $500/year on their taxes, but it requires spending 1 hour/day filling out boring paperwork, 5 days a week, should they do it? (A quick worked cost-per-hour calculation appears after the second question below.)
a. Virtually everyone should do it
b. A significant fraction (10-90%) of the population should do it
c. Virtually no one should do it
2. With sufficient evidence and a rational deliberation process, is it possible to become sure that the Loch Ness Monster does/doesn't exist?
a. We CAN potentially become sure either way
b. We CAN'T potentially become sure either way
c. We can only potentially become sure that it DOES exist
d. We can only potentially become sure that it DOESN'T exist
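To illustrate the arithmetic behind question 1, here's a minimal sketch. The numbers follow straight from the question; the interpretation (that option c is the intended answer) is my assumption:

```python
# Rough cost-benefit check for question 1 (illustrative only).
hours_per_year = 1 * 5 * 52          # 1 hour/day, 5 days/week, ~52 weeks/year
savings_per_year = 500               # dollars saved by doing the paperwork
implied_wage = savings_per_year / hours_per_year

print(f"{hours_per_year} hours of paperwork for ${savings_per_year} "
      f"is about ${implied_wage:.2f}/hour")
# ~$1.92/hour of boring paperwork -- far below what almost anyone values
# their time at, which suggests (c) "virtually no one should do it".
```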
I recall reading educational psych stuff about how the act of both 1) creating and 2) answering questions like this is a great way to deepen your understanding.
Answer: Betting With Real Money
From the end of Inadequate Equilibria:
I don’t have good, repeatable exercises for training your skill in this field, and that’s one reason I worry about the results. But I can tell you this much: bet on everything. Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning. Or so I hope.
Eliezer seems to be referring to real money here. And I recall him talking elsewhere about how it is useful to put real money on the line.
This meshes with my experiences playing poker. It's one thing to study and learn that X is a mistake. It's another thing to make the mistake of X and lose a big pot because of it. There's something about losing real money that cements it in your head. And I'm not just referring to my own experiences. From talking to other poker players, it seems that this is the norm.
However, real money is a touchy subject and I'm not sure how we would actually pull this off. But I figure that there is still value in bringing it up.
Betting with real money is definitely a useful way of probing at your own confidence (I don't do it much at all due to general underconfidence, but it's sure helped me nail down the feeling of being really sure of something), and a lot of my rationalist friends do it on a handshake-agreement basis. However, any way of formalizing this would turn LW (or whatever institution) into a gambling site, which is illegal :/
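For the calibration side of this, a group could score its bets or predictions without any money changing hands. Here is a minimal sketch of that kind of scoring; the predictions and numbers are made up for illustration:

```python
# Calibration scoring sketch: each entry is (stated probability, actual outcome).
# In practice these would come from logged bets or public predictions.
predictions = [
    (0.90, True),   # "90% sure the package arrives by Friday" -- it did
    (0.70, False),  # "70% sure the meetup draws 10+ people" -- it didn't
    (0.60, True),
    (0.95, True),
]

# Brier score: mean squared error between stated probability and outcome.
# 0 is perfect; consistently high-confidence misses push it up.
brier = sum((p - (1.0 if hit else 0.0)) ** 2 for p, hit in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Overconfidence shows up as high-probability predictions failing more often
# than stated; underconfidence as low-probability predictions succeeding too often.
```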
Answer: Discussing Updates
See the Updates Thread. Basically, taking note of the belief updates you perform and discussing why you performed them. What did you previously believe, what do you currently believe, and why did the data you observed move you from there to here?
Making bets is good exercise too. If you can't find other people to bet with, you can also make public predictions.
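As a toy example of the kind of update that could be written up (all of the numbers and the scenario are invented for illustration):

```python
# Toy Bayesian update: "this project will ship on time."
prior = 0.40

# Observation: the first milestone slipped by a week.
# Assumed likelihoods: P(slip | ships on time) = 0.2, P(slip | late) = 0.6.
p_obs_given_h = 0.2
p_obs_given_not_h = 0.6

posterior = (p_obs_given_h * prior) / (
    p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
)
print(f"Prior: {prior:.2f} -> Posterior: {posterior:.2f}")  # roughly 0.40 -> 0.18
```

Writing down the prior, the observation, and the likelihoods that moved you is exactly the "what did you previously believe, what do you believe now, and why" structure in discussion form.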
When I first read the sequences, I thought "What do I know and how do I think I know it?" was pretty banal and useless -- didn't everyone know that? Philosophy 101, question your beliefs, look for hidden assumptions, etc.
The older I get the more I come to think that no, not everyone knows this, and even the people who know it don't practice it enough. I'm not sure though.
I think of "What do I know and how do I think I know it?" as the "root cause" of essentially all other epistemic rationality - i.e. if you're sufficiently good at that one skill, all the others will follow naturally from it. Conversely, that suggests it's really difficult to get really good at it: if I'm missing any other epistemic rationality skill, it means I'm not good enough at "What do I know and how do I think I know it?".
I'd say the "obvious" version of the skill involves activities which look like questioning beliefs, looking for hidden assumptions...
Answer: Fermi Estimates
Fermi estimates are attempts to answer a quantitative question using order-of-magnitude style reasoning. These are questions like "How many people fly on airplanes each day?" or "How many atoms are in my arm?". In contrast to things like calibration practice, these are much more generative, attempting to tie together parts of your world model to come up with a model that answers a question.
On LessWrong, this could be practically implemented by having a set of 100-1000 questions that users can do either in a weekend blitz, or spaced out over time. A user who got 100 correct (within a factor of 2x) could have a sign on their profile indicating that they completed this task. It could also be implemented as a daily/weekly question for users to answer and then compare notes on.
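A sketch of how the "within a factor of 2x" check might work, with a toy Fermi model for the airplane question. Every component number below is a guess made up to illustrate the multiply-the-factors style, and the reference answer is hypothetical, not a checked statistic:

```python
# Toy Fermi model for "How many people fly on airplanes each day?"
flights_per_day = 100_000          # order-of-magnitude guess at commercial flights/day
passengers_per_flight = 100        # averaged over small and large aircraft
estimate = flights_per_day * passengers_per_flight   # ~10 million people/day

def within_factor(estimate: float, truth: float, factor: float = 2.0) -> bool:
    """True if the estimate is within the given multiplicative factor of the truth."""
    return truth / factor <= estimate <= truth * factor

reference = 6_000_000   # hypothetical reference answer, just to show the scoring rule
print(estimate, within_factor(estimate, reference))   # 10000000 True
```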
According to a vague feeling of a couple of people I know, the CFAR handbook is tricky enough that reading it without doing CFAR could be dangerous.
It seems very plausible that you'd get more value out of it after having gone through CFAR. But it seems implausible that you'd get zero or negative value out of it without having gone through CFAR. At least in terms of expected value.
Nah, I don't think that's a real concern. Or at least I really don't see much danger in the things in there, and have worked a lot with it in the past.
I think this excerpt from Rationality: From AI to Zombies' preface says it all.
It was a mistake that I didn't write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples.
In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing which was that I didn't realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn't realize that part was the priority; and regarding this I can only say "Oops" and "Duh."
Yes, sometimes those big issues really are big and really are important; but that doesn't change the basic truth that to master skills you need to practice them and it's harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)
This "incorporated into LW" condition is a tight leash; and it reminds me of why I don't usually... recommend LW to my friends.
Some matters are too personal to talk about on the Internet. Like marital infidelity, which 1) is something outside of many people's experiences, 2) definitely seems to require tons of instrumental rationality even on the best of days, 3) has (ethical) implications which real people often don't take into account despite other real people often expecting them to (but knowing they won't), and 4) unlike acceptable LW material with which it shares the above characteristics, it hurts. And so it is with some other things that actual adults have to deal with.
Unless you speak about something already in the past. Maybe we should have a Cemetery of Failed Things in our City. (Our current Cemetery of Failed Things holds only a few startups and personal habits; wow, how lucky we are.)
Oh, I remembered we have this: https://www.lesswrong.com/posts/suGfBx5pcjCn6DT5A/how-did-my-baby-die-and-what-is-the-probability-that-my-next#XmC8SwC8h6kKKwMBF. If I remember anything else, I shall add it here.
Basic have-you-read-the-sequences knowledge test (e.g. "Which of the following is an example of 'belief as attire'?")
This might be combined with calibration training.
I want to know what the good rationality exercises are.
I was just on a call with Liron and PhilH, hanging out after the weekly LessWrong weekend event, and we discussed exercises that could happen on LessWrong.
Here is the list we generated:
Another user on the call (whose name I forget) suggested it could be fun to have a daily Fermi Estimate on LessWrong, where everyone submits their number and the model they used to reach the number. I think this would be quite exciting.
Please write answers with other exercises that you think are or might be great for rationality training, some explanation of why you think it could be good, and a suggestion of how it could be incorporated into LessWrong. I'll probably add some of the above myself.