You may have heard about IARPA's Sirius Program, a proposal to develop serious games that would teach intelligence analysts to recognize and correct their cognitive biases. The intelligence community has a long history of interest in debiasing, and even produced a rationality handbook based on internal CIA publications from the '70s and '80s. Creating games that systematically improve our thinking skills has enormous potential, and I would highly encourage the LW community to consider this as a way forward for encouraging rationality more broadly.
While developing these particular games will require thought and programming, the proposal did inspire the NYC LW community to play a game of our own. Using a list of cognitive biases, we broke up into groups of no more than four and spent five minutes discussing each bias with regard to three questions:
- How do we recognize it?
- How do we correct it?
- How do we use its existence to help us win?
The Sirius Program specifically targets Confirmation Bias, Fundamental Attribution Error, Bias Blind Spot, Anchoring Bias, Representativeness Bias, and Projection Bias. To this list I decided to add the Planning Fallacy, the Availability Heuristic, Hindsight Bias, the Halo Effect, Confabulation, and the Overconfidence Effect. We did this Pomodoro-style: six five-minute rounds, a quick break, another six rounds, then a longer break followed by a group discussion of the exercise.
Results of this exercise are posted below the fold. I encourage you to try the exercise for yourself before looking at our answers.
Caution: Dark Arts! Explicit discussion of how to exploit bugs in human reasoning may lead to discomfort. You have been warned.
Confirmation Bias
- Notice if you (don't) want a theory to be true
- Don't be afraid of being wrong; question the outcome that you fear will happen
- Seek out people with contrary opinions and be genuinely curious why they believe what they do
- How do we make people genuinely curious? Maybe try encouraging childlike behavior generally?
- If your theory is true, every test should come back positive, so don't worry and make a game of disproving your hypothesis
- Commit yourself to which directions you will update on the different outcomes of an experiment before running it (see the sketch after this list)
- Be more suspicious of confirmatory results when you do run tests
- Feed confirmatory evidence to others, give them tests to run which you know beforehand are confirmatory
- Agree with people first, before attempting in any way to change their beliefs (but be careful you don't start believing it yourself)
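The pre-commitment point above can be made concrete by computing, before the experiment, what your belief would become under each possible outcome. A minimal sketch, assuming a binary test; all of the prior and likelihood numbers below are made-up placeholders:

```python
# Pre-commit to updates before running a test by computing the
# posterior for both possible results ahead of time.
# All numbers below are hypothetical placeholders.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) from a prior and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.6              # current credence in the theory
p_pos_given_h = 0.9      # P(test positive | theory true)
p_pos_given_not_h = 0.3  # P(test positive | theory false)

if_positive = posterior(prior, p_pos_given_h, p_pos_given_not_h)
if_negative = posterior(prior, 1 - p_pos_given_h, 1 - p_pos_given_not_h)

print(f"Commit now: positive -> {if_positive:.2f}, negative -> {if_negative:.2f}")
```

Writing both numbers down in advance makes it harder to explain away a disconfirming result after the fact.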
Fundamental Attribution Error
- Critical: make observations, not moralistic judgments
- It helps to be around other non-judgmental people
- Observe your own behavior as a third party: visualize the scene with someone else in your place, ask yourself how others would explain your behavior in the situation
- Gather more information about the situation; we are more inclined toward simple explanations (e.g. stupid, evil) when we have less data
- Get people to internalize the FAE about their own behavior to take more agency in their lives
- Make moralistic judgments about distant people to increase in-group/out-group effects
Bias Blind Spot
- General knowledge about cognitive biases helps
- Ask other people whether you are biased
- Get people to put themselves into a reference class; don't let them think they are a special case
- Point out biases in others as they occur (planning fallacy seems particularly fruitful here)
- Do not use the word "bias": use "heuristic" for technical folks; otherwise avoid labels entirely and deal with it on a case-by-case basis
- Do not cite studies; turn the results of the study into a story
Anchoring Bias
- If possible, gather actual data instead of guessing. How much is this a problem in practical life?
- Analyze things longer, don't rely on a first impression
- When making complex decisions, make a list of pros and cons and weight each of them by importance
- Make everyone guess to themselves before anyone in the group reveals
- Possible technique: flash a lot of random numbers in rapid succession to weaken an existing anchor. The recency effect would still be in play. Would this work for qualitative reasoning by flashing nonsense words? It could possibly be implemented on our native hardware by going into free association.
- Use anchoring and relative evaluation on yourself, e.g. turn a shower very cold and then back up slightly, rather than turning it straight down to the final temperature
- Anchor others in critical situations, like salary negotiations
Representativeness Bias
- If possible, gather actual data instead of guessing
- Consider a wide variety of examples
- Skim over examples when reading; stick to reading facts
- Ask other people for additional examples in conversation (though this could feed confirmation bias as well)
- Give other people examples, especially vivid and detailed ones
Projection Bias
- Critical: be responsible for your own emotional responses
- Ask if something you think about someone applies to yourself
- Hold the map/territory distinction in mind, and be willing to admit you were wrong about your initial impressions
- Empathize with other people to get them to open up emotionally
- Conditional on sufficient self-awareness, just ask the person if they are projecting
- Point out similarities between the projector and the projectee
- Become the thing that the other person admires about themselves
- Nice people who naively project their niceness onto others are more vulnerable to manipulation
Planning Fallacy
- Make estimates of time to completion, and calibrate yourself over time (a tracking sketch follows this list)
- Make your estimate and add some proportional amount of time to it (the added proportion should shrink as your calibration improves)
- Ask your friends how long they think it will take you
- Figure out the reference class of your task, gather data on how others underestimate time to completion for those particular tasks
- Give your estimates to other people, to make yourself socially accountable to them
- Visualize encountering various problems during task completion before estimating
- Tell other people about the bias before asking them for time estimates (maybe - you can always add to their estimate)
- You can be lazier without much penalty
- Note that others don't expect you to be well-calibrated either, so giving a longer time estimate in a one-shot game is not a winning strategy. For repeat games, a reputation for task-completion and accuracy could be more valuable.
- Create two estimates, one you actually believe and one you tell other people
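A minimal sketch of the calibration loop from the first two items above: log (estimate, actual) pairs for past tasks in the same reference class, and use the average overrun ratio as a correction factor. The task history here is hypothetical example data:

```python
# Calibrate time estimates by tracking how far off past estimates were.
# The history below is hypothetical example data.

def correction_factor(history):
    """Average of actual/estimated ratios over past tasks."""
    ratios = [actual / estimate for estimate, actual in history]
    return sum(ratios) / len(ratios)

# (estimated hours, actual hours) for past tasks in the same reference class
history = [(2.0, 3.5), (1.0, 1.5), (4.0, 7.0), (0.5, 0.75)]

factor = correction_factor(history)
raw_estimate = 3.0  # gut estimate for the next task, in hours
print(f"Correction factor: {factor:.2f}")                    # 1.62
print(f"Corrected estimate: {raw_estimate * factor:.1f} h")  # 4.9 h
```

As the correction factor drifts toward 1.0, your raw estimates are becoming calibrated.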
Availability Heuristic
- Critical: ask yourself what specific observations are forming your belief
- If possible, gather actual data instead of guessing
- Ask yourself how many reasons you have for believing something
- Don't stop with an initial estimate, keep thinking and looking for more information
- In a group setting, have a policy of someone giving another suggestion immediately after the first is announced
- Tell anecdotes and stories to other people
- You can shift people's beliefs over long periods of time without their knowledge by sporadically mentioning things
Hindsight Bias
- Estimate task difficulty ex ante, and calibrate over time
- This bias only exists ex post, so the above technique should basically fix the problem (unless you subsequently argue with your past self's estimate, but with calibration this should not be an issue)
- Your successes will be remembered and your failures forgotten, e.g. in cold reading
- Amplify this bias to make others feel smarter and better about themselves
Halo Effect
- This seems to just be the way neurons function, making it more difficult to correct on a heuristic level
- Notice your positive/negative affect towards something, and state that observation out loud to yourself
- Be skeptical of any immediate feelings about something
- Reduce the affect by using comparisons, e.g. imagine someone even more awesome
- Try to consciously reverse the affect you are experiencing for a period of time
- Ask other people who may be less susceptible to a particular person's halo
- Try to imagine others as a collection of separable parts, view them independently
- The halo effect is not always an unreasonable heuristic; keep in mind how strongly the features actually correlate
- Make a good first impression by doing something you are good at
- Be happy, make others feel good about themselves, and contribute to positive mood contagion
- Surround yourself with high-status people, acquire all good things and ideas and display them readily
Confabulation
- There is a trade-off between rewriting a memory each time it is accessed and accessing it frequently enough to retain the connection strengths
- Don't take your memory as absolute truth; be willing to admit you can be wrong
- Create objective recordings of situations: audio, video, etc.
- Write down your thoughts about the situation as soon as possible after it occurs
- If possible, learn to identify your internal story-generating process (note: this might serve other functions, so exercise caution if modifying)
- Encode initial memories more strongly using extreme emotional states
- Use Anki or some other SRS program to remember specific facts about situations
- You can create memories in others over long periods of time by telling them stories
Overconfidence Effect
- Critical: this particular bias appears to have significant benefits
- Does overconfidence require miscalibration? This seems like an emotional effect, possibly separable from probability estimates
- Visualize success
- Reflect only on the successes of the past; do not think about failures
- Feel an enormous amount of positive emotion upon success; do not feel shame upon failure
- Have your friends help you reinforce this bias by telling you how awesome you are
- To correct it, make people bet on their beliefs. Avoid activities where overconfidence would hurt you, e.g. gambling
- Encourage others to start ambitious projects, and take them over already partially-completed when they fail
- Write contracts such that a likely failure imposes very costly penalties
- Prevent others from taking on improbable tasks and wasting their time
Summary
How long do you think it should take to solve a major problem if you are not wasting any time? Everything written above was created in a sum total of one hour of work. How many of these ideas had never even occurred to us before we sat down and thought about it for five minutes? Take five minutes right now and write down what areas of your life you could optimize to make the biggest difference. You know what to do from there. This is the power of rationality.
I think the most straightforward "edutainment" design would be a "rube or blegg" model: present conflicting evidence, then reveal the Word-of-God objective truth at the end of the game. Different biases can be targeted with different forms of evidence, different models of interpretation (e.g. whether players can assign confidence levels to their guesses), and different scoring methods (e.g. whether the game is iterative, or whether it's many one-shots with probability of success over many games as the goal).
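If players do assign confidence levels, the scoring method matters: a proper scoring rule makes honest probabilities the best strategy. A minimal sketch using the logarithmic score; the names here are illustrative, not from any existing game:

```python
import math

# Log scoring rule for a confidence-weighted "rube or blegg" guess.
# Reporting your true probability maximizes expected score, so both
# over- and underconfidence cost points on average.

def log_score(confidence_rube: float, was_rube: bool) -> float:
    """Higher (less negative) is better; confidence must be in (0, 1)."""
    p = confidence_rube if was_rube else 1 - confidence_rube
    return math.log(p)

print(log_score(0.80, True))   # object was a rube:  ~ -0.22
print(log_score(0.80, False))  # object was a blegg: ~ -1.61
```

Summing this score across rounds gives a natural way to compare players in the iterated version.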
A more compelling example that won't turn off as many people (ew, edutainment? bo-ring) would probably be a multiplayer game in which the players are randomly led to believe incompatible conclusions and then interact. Availability of public information and the importance of having been right all along or committing strongly to a position early could be calibrated to target specific biases and fallacies.
As someone with aspirations to game design, I find this a particularly interesting concept. One notable aspect of video game culture is that most multiplayer games are one-offs from a social perspective: there's no social penalty for denigrating an ally's ability, since you will never see them again, and there's no gameplay penalty for being wrong. This means that in any facet of a game where trusting an ally is not strictly necessary, one can grossly underestimate the ally's skill forever without ever being proven critically wrong. This makes online gaming perhaps the most fertile incubator of socially negative confirmation bias anywhere, ever. If an ally is judged poorly, there's no penalty for declaring them poor prematurely, and in fact people seem to apply profound confirmation bias to all evidence for the remainder of the game.
Could a game effectively be designed to target this confirmation bias and give the online gaming community a more constructive and realistic picture? I'll definitely be mulling this over. Great post.
If I understand your 'problem' correctly - estimating potential allies' capabilities and being right/wrong about that (say, when considering teammates/guildmates/raid members/whatever) - then it's not nearly a game-specific concept: it applies to any partner selection without perfect information, like mating or job interviews. As long as there is a large enough pool of potential partners, and you don't need all of the 'good' ones, false negatives don't really matter as much as the speed or ease of the selection process and the cost of false positives, ...