See also: Boring Advice Repository, Solved Problems Repository, Grad Student Advice Repository, Useful Concepts Repository, Bad Concepts Repository
I just got back from the July CFAR workshop, where I was a guest instructor. One rationality tool I started paying more attention to as a result of the workshop is the idea of useful questions to ask in various situations, particularly because I had been introduced to a new one:
"What skill am I actually training?"
This is a question that can be asked whenever you're practicing something, but more generally it can also be asked whenever you're doing something you do frequently, and it can help you notice when you're practicing a skill you weren't intending to train. Some examples of when to use this question:
- You practice a piece of music so quickly that you consistently make mistakes. What skill are you actually training? How to play with mistakes.
- You teach students math by putting them in a classroom and having them take notes while a lecturer talks about math. What skill are you actually training? How to take notes.
- A personal example: at the workshop, I noticed that I was more apprehensive about the idea of singing in public than I had previously thought I was. After walking outside and actually singing in public for a little while, I had a hypothesis about why: for the past several years, I've been singing in public when I don't think anyone is around, but stopping when I see people because I don't want to bother them. What skill was I actually training by doing that? How to not sing around people.
Many of the lessons of the sequences can also be packaged as useful questions, like "what do I believe and why do I believe it?" and "what would I expect to see if this were true?"
I'd like to invite people to post other examples of useful questions in the comments, hopefully together with an explanation of why they're useful and some examples of when to use them. As usual, one useful question per comment for voting purposes.
How would I update my probabilities if I saw the opposite piece of evidence? What I'm trying to get at here is that "A" and "not A" can't both be evidence for the same thing. And often it's more obvious which way "not A" is pointing. A couple of examples:
I saw someone suggesting that maybe a certain Mr. Far Wright was secretly gay because, when the subject was broached, he had publicly expressed his dislike of homosexuality. There was even a wiki page (that I now can't find) laying out the "law" that the more a person sounds like they hate gays, the more likely they are to be gay. At first this sounded appealing*, but then I applied the "not A" test: "if Mr. Far Wright's sexual orientation is unknown and I heard him publicly declare that he loved homosexual behavior, how would I update the probability that he is gay?" In that case, it seems clear that I'd update towards him being gay. Therefore, it doesn't really make sense that when Mr. Wright does the opposite, publicly declaring that he hates homosexual behavior, I also update my probability towards him being gay.
Or another recent example I had from talking with someone about Mormonism. Someone said that not having the golden plates available for inspection wasn't really evidence against Joseph Smith's story because there were several good reasons why they weren't available. I was about to concede when I realized that a world where the golden plates were observable would be strong evidence for Joseph Smith's story, so a world where they aren't has to be at least weak evidence against it. If A moves the probability quite a bit one way, not A has to at least minimally move the probability the other way.
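The "A moves it one way, not-A must move it the other way" point is just conservation of expected evidence: the prior must equal the posterior averaged over possible observations. A small numeric sketch (all the probabilities below are made up purely for illustration, not an attempt to model the actual historical question):

```python
# Conservation of expected evidence, with illustrative made-up numbers.
# H = "the story is true", A = "the plates are available for inspection".

p_h = 0.01               # prior P(H)
p_a_given_h = 0.1        # P(A | H): even if true, good reasons they might be unavailable
p_a_given_not_h = 0.0001 # P(A | not H): very unlikely to be observable if false

# Total probability of observing A
p_a = p_a_given_h * p_h + p_a_given_not_h * (1 - p_h)

# Posteriors via Bayes' theorem, for each possible observation
p_h_given_a = p_a_given_h * p_h / p_a
p_h_given_not_a = (1 - p_a_given_h) * p_h / (1 - p_a)

# Seeing A would be strong evidence for H (posterior jumps well above prior);
# seeing not-A is correspondingly weak evidence against H (posterior dips
# slightly below prior). The expected posterior equals the prior exactly:
expected_posterior = p_a * p_h_given_a + (1 - p_a) * p_h_given_not_a
assert p_h_given_a > p_h
assert p_h_given_not_a < p_h
assert abs(expected_posterior - p_h) < 1e-12
```

Note how the asymmetry the comment describes falls out naturally: because P(A|H) is itself small ("good reasons they weren't available"), not-A barely moves the posterior, but it cannot fail to move it downward.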
*Sometimes, if all I can observe is a denial, it is evidence that the person is guilty. For example, if I walked through the door and the first thing I heard was my toddler denying to my wife that he took the candy, it would increase my probability that he did take the candy. But to my wife, who already has the evidence that led her to make the accusation, a denial is evidence against him taking the candy (it increases the relative odds that his brother did it instead).
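The footnote's point is that the same utterance is evidence in opposite directions for observers with different background knowledge, because they are conditioning on different things. A sketch with entirely hypothetical likelihoods:

```python
# Same denial, two observers, opposite updates (all numbers made up).
# H = "the toddler took the candy".

# --- Observer 1: the parent walking in the door ---
# Their observation is "an accusation-plus-denial is happening at all",
# which is much more likely in worlds where the toddler took the candy.
prior_door = 0.2      # P(H) before hearing anything
p_obs_guilty = 0.5    # P(overhear a denial | H)
p_obs_innocent = 0.05 # P(overhear a denial | not H)

p_obs = p_obs_guilty * prior_door + p_obs_innocent * (1 - prior_door)
posterior_door = p_obs_guilty * prior_door / p_obs
assert posterior_door > prior_door  # the denial raised my credence

# --- Observer 2: the accusing parent ---
# She already conditions on the accusation; for her the new observation
# is only the denial itself. Suppose a guilty toddler sometimes
# confesses, while an innocent one (the brother did it) always denies.
prior_wife = 0.7      # P(H | the evidence that prompted the accusation)
p_deny_guilty = 0.7   # P(denies | H)
p_deny_innocent = 1.0 # P(denies | not H)

p_deny = p_deny_guilty * prior_wife + p_deny_innocent * (1 - prior_wife)
posterior_wife = p_deny_guilty * prior_wife / p_deny
assert posterior_wife < prior_wife  # for her, the denial is weak
                                    # evidence the brother did it
```

The divergence comes entirely from what each observer's likelihood is conditioned on, which is why "is X evidence for Y?" has no observer-independent answer here.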
Is all of my reasoning here correct? If not, there might be a better way to express the idea with a Bayesian network.
Not-A for publicly declaring that one hates homosexual behavior isn't "publicly declaring that one loves homosexual behavior". It's just "not publicly declaring that one hates homosexual behavior". Your A-or-not-A has to cover all the possibilities, including remaining silent at home, awkwardly evading questions about homosexuality, making positive statements about heterosexuality but none directly about homosexuality, etc.