Political Debiasing and the Political Bias Test
Cross-posted from the EA forum. I asked for questions for this test here on LW about a year ago. Thanks to those who contributed.
Rationally, your political values shouldn't affect your factual beliefs. Nevertheless, that often happens. Many factual issues are politically controversial - typically because the true answer makes a certain political course of action more plausible - and on those issues, many partisans tend to disregard politically uncomfortable evidence.
This sort of political bias has been demonstrated in a large number of psychological studies. For instance, Yale professor Dan Kahan and his collaborators showed in a fascinating experiment that on politically controversial questions, people are quite likely to commit mathematical mistakes that help them retain their beliefs, but much less likely to commit mistakes that would force them to give up those beliefs. Examples like this abound in the literature.
Political bias is likely to be a major cause of misguided policies in democracies (even the main one, according to economist Bryan Caplan). If they don't have any special reason not to, people without special knowledge defer to the scientific consensus on technical issues. Thus, they do not interfere with the experts, who normally get things right. On politically controversial issues, however, they often let their political bias win over science and evidence, which means they end up with false beliefs. And in a democracy, voters holding systematically false beliefs more often than not translates into misguided policy.
Can we reduce this kind of political bias? I'm fairly hopeful. One reason for optimism is that debiasing in general seems to be possible, at least to some extent. My optimism was strengthened by participating in a CFAR workshop last year. Political bias does not seem fundamentally different from other kinds of bias, and should thus be reducible too. Obviously one could argue against this view, and I'm happy to discuss it further.
Another reason for optimism is that it seems that the level of political bias is actually lower today than it was historically. People are better at judging politically controversial issues in a detached, scientific way today than they were in, say, the 14th century. This shows that progress is possible. There seems to be no reason to believe it couldn’t continue.
A third reason for optimism is that there seems to be a strong norm against political bias. Few people are consciously and intentionally politically biased. Instead, most people seem to believe themselves to be politically rational, and to hold that as a very important value (or so I believe). They fail to see their own biases because of the bias blind spot.
Thus if you could somehow make it salient to people that they are biased, they would actually want to change. And if others saw how biased they are, the incentives to debias would be even stronger.
There are many ways in which you could make political bias salient. For instance, you could meticulously go through political debaters' arguments and point out fallacies, as I have done on my blog. I will post more about that later. Here I want to focus on another method, however: a political bias test which I have constructed with ClearerThinking, run by EA member Spencer Greenberg. Since learning how the test works might make you answer a bit differently, I will not explain it here, but instead refer you either to the explanatory sections of the test or to Jess Whittlestone's (also an EA member) Vox.com article.
Our hope is of course that people taking the test might start thinking more both about their own biases, and about the problem of political bias in general. We want this important topic to be discussed more. Our test is produced for the American market, but hopefully, it could work as a generic template for bias tests in other countries (akin to the Political Compass or Voting Advice Applications).
Here is a guide for making new bias tests (where the main criticisms of our test are also discussed). Also, we hope that the test could inspire academic psychologists and political scientists to construct full-blown scientific political bias tests.
This does not mean, however, that we think that such bias tests in themselves will get rid of the problem of political bias. We need to attack the problem of political bias from many other angles as well.
Pro-Con-lists of arguments and onesidedness points
Follow-up to Reverse Engineering of Belief Structures
Pro-con-lists of arguments such as ProCon.org and BalancedPolitics.org fill a useful purpose. They give an overview of complex debates, and arguably foster nuance. My network for evidence-based policy is currently in the process of constructing a similar site in Swedish.
I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM foods legalization, etc), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1-5).
Once you have this data, you could use it to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If you, on the other hand, believe that some of your own side's arguments are bad, whereas some of the opponents' arguments are good, you're defined as not being onesided. (The exact mathematical function you would choose could be discussed.)
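As a minimal sketch of one possible scoring function (the function name, the +1/-1 stance encoding, and the normalization are my own illustrative choices, not part of the proposal):

```python
def onesidedness(stance, pro_ratings, con_ratings):
    """Return a onesidedness score in [-1, 1].

    stance: +1 if the user agrees with the thesis, -1 if they disagree.
    pro_ratings / con_ratings: the 1-5 ratings the user gave to the
    arguments for and against the thesis.

    A score of 1 means the user rates every own-side argument at 5 and
    every opposing argument at 1; 0 means both sides are rated equally.
    """
    own = pro_ratings if stance > 0 else con_ratings
    other = con_ratings if stance > 0 else pro_ratings

    def mean(xs):
        return sum(xs) / len(xs)

    # The largest possible gap between two mean ratings on a 1-5 scale is 4.
    return (mean(own) - mean(other)) / 4

# A user who agrees with a thesis, rates the pro arguments highly and
# dismisses the con arguments, comes out strongly onesided:
score = onesidedness(+1, [5, 5, 4], [1, 1, 2])  # about 0.83
```

A user who rates both sides' arguments equally gets a score of 0, which matches the intuition that rating some of your own side's arguments as bad counts against onesidedness.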
Once you've told people how one-sided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation normally is different kinds of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against their views. Hence they end up being onesided, according to the test.
There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think that's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-sided":
On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this. Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.
But there is no reason for complex actions with many consequences to exhibit this onesidedness property.
Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:
Why do people seem to want their policy debates to be one-sided?
Politics is the mind-killer. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.
Especially if you're consistently one-sided in lots of different debates, it's hard to see that any other hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate, rather than to judge it), but you could also do that, of course.
My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.
You could also add other features to a pro-con-list. For instance, you could classify arguments in different ways: ad hominem-arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't do that. You wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on what kinds of arguments really are valid.
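To sketch how such a profile might be computed (the type labels and the data shape here are illustrative assumptions, not a specification):

```python
from collections import defaultdict

def argument_type_profile(ratings):
    """Average a user's 1-5 argument ratings, grouped by argument type.

    ratings: a list of (argument_type, rating) pairs; arguments that
    were left unclassified simply don't appear in the list.
    """
    by_type = defaultdict(list)
    for arg_type, rating in ratings:
        by_type[arg_type].append(rating)
    # One average rating per argument type the user has rated.
    return {t: sum(rs) / len(rs) for t, rs in by_type.items()}

profile = argument_type_profile([
    ("consequentialist", 5),
    ("consequentialist", 4),
    ("rights-based", 3),
    ("ad hominem", 2),
])
# profile == {"consequentialist": 4.5, "rights-based": 3.0, "ad hominem": 2.0}
```

A profile like this would let a user see at a glance that, say, they consistently rate consequentialist arguments higher than rights-based ones.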
You could also combine these two features. For instance, some people might accept ad hominem-arguments when they support their views, but not when they contradict them. That would make your use of ad hominem-arguments onesided.
Yet another feature that could be added is a standard political compass. Since people fill in what theses they believe in (cannabis legalization, GM foods legalization, etc) you could calculate what party is closest to them, based on the parties' stances on these issues. That could potentially make the test more attractive to take.
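One simple way to find the closest party is to count the issues on which the user and each party agree (the party names, the +1/-1 encoding, and the tie-breaking behavior are illustrative assumptions):

```python
def closest_party(user_stances, party_stances):
    """Return the party agreeing with the user on the most issues.

    user_stances: dict mapping issue -> +1 (agree) or -1 (disagree).
    party_stances: dict mapping party name -> a dict of the same shape.
    Issues a party takes no stance on simply don't count as agreement.
    """
    def agreement(party):
        positions = party_stances[party]
        return sum(1 for issue, stance in user_stances.items()
                   if positions.get(issue) == stance)

    # On a tie, max keeps the first party in insertion order.
    return max(party_stances, key=agreement)

parties = {
    "Party A": {"cannabis legalization": +1, "GM foods legalization": +1},
    "Party B": {"cannabis legalization": -1, "GM foods legalization": +1},
}
best = closest_party(
    {"cannabis legalization": +1, "GM foods legalization": +1}, parties)
# best == "Party A"
```

A fuller version might weight issues by how important the user says they are, as Voting Advice Applications typically do.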
Suggestions of more possible features are welcome, as well as general comments - especially about implementation.
Sharing about my mental illness and popularizing future-oriented thinking: feedback appreciated!
I'd appreciate feedback on optimizing a blog post that shares about my mental illness and popularizes future-oriented thinking to a broad audience. I'm using story-telling as the driver of the narrative, and sprinkling in elements of rational thinking, such as hyperbolic discounting, mental maps, and future-oriented thinking, in a strategic way. The target audience is college-age youth and young adults. Any suggestions for what works well, and what can be improved would be welcomed! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
_______________________________________________________________________________________________________________________
Coming Out of the Mental Health Closet
My hand jerked back, as if the computer mouse had turned into a real mouse. I just couldn’t do it. Would they think I am crazy? Would they whisper behind my back? Would they never trust me again? These are the kinds of anxious thoughts that ran through my head as I was about to post on my Facebook profile revealing my mental illness to my Facebook friends, about 6 months after my condition began.
I really wanted to share much earlier about my mental illness, a mood disorder characterized by high anxiety, sudden and extreme fatigue, and panic attacks. It would have felt great to be genuinely authentic with people in my life, and not hide who I am. Plus, I would have been proud to contribute to overcoming the stigma against mental illness in our society, especially since this stigma impacts me on such a personal level.
Ironically, the very stigma against mental illness, combined with my own excessive anxiety response, made it very hard for me to share. I was really anxious about whether friends and acquaintances would turn away from me. I was also very concerned about the impact on my professional career of sharing publicly, due to the stigma in academia against mental illness, including at my workplace, Ohio State, as my colleague and fellow professor described in his article.
Whenever the thought of telling others entered my mind, I felt a wave of anxiety pass through me. My head began to pound, my heart sped up, my breathing became fast and shallow, almost like I was suffocating. If I didn’t catch it in time, the anxiety could lead to a full-blown panic attack, or sudden and extreme fatigue, with my body collapsing in place. Not a pretty picture.
Still, I did eventually start discussing my mental illness with some very close friends who I was very confident would support me. And one conversation really challenged my mental map (in other words, how I perceive reality) about sharing my story of mental illness.
My friend told me something that really struck me: his perspective on how great it would be if all people who needed professional help with their mental health actually went to get such help. One of the main obstacles, as research shows, is the stigma around mental illness. We discussed how one of the best ways to deal with such stigma is for well-functioning people with mental illness to come out of the closet about their condition.
Well, I am one of these well-functioning people. I have a great job and do it well, have wonderful relationships, and participate in all sorts of civic activities. The vast majority of people who know me don’t realize I suffer from a mental illness.
That conversation motivated me to think seriously through the roadblocks thrown up by the emotional part of my brain. Previously, I never sat down for a few minutes and forced myself to think what good things might happen if I pushed past all the anxiety and stress of telling people in my life about my mental illness.
I realized that I was just flinching away, scared of the short-term pain of rejection and not thinking about the long-term benefits to me and to others of sharing my story. I was falling for a thinking error that scientists call hyperbolic discounting, a reluctance to make short-term sacrifices for much higher long-term rewards.
To combat this problem, I imagined what world I wanted to live in a year from now – one where I shared about this situation now on my Facebook profile, or one where I did not. This approach is based on research showing that future-oriented thinking is very helpful for dealing with thinking errors associated with focusing on the present.
In the world where I would share right now about my condition, I would be very anxious about what people thought of me. Anytime I saw someone who had found out for the first time, I would be afraid of the impact on that person's opinion of me. I would watch her or his behavior closely for signs of distancing from me. And this would not only be my anxiety: I was quite confident that some people would not want to associate with me due to my mental illness. Over time, however, the close watching and anxious thoughts would diminish. All the people who knew me previously would find out. All new people who met me would learn about my condition, since I would not keep it a secret. I would make the kind of difference I wanted to make in the world by fighting mental illness stigma in our society, and especially in academia. Just as important, it would be a huge burden off my back to stop hiding myself and be authentic with the people in my life.
I imagined a second world. I would continue to hide my mental health condition from everyone but a few close friends. I would always have to keep this secret under wraps, and worry about people finding out about it. I would not be making the kind of impact on our society that I knew I could make. And likely, people would find out anyway, whether because I eventually chose to share or in some other way, and I would face all the negative consequences later.
Based on this comparison, I saw that the first world was much more attractive to me. So I decided to take the plunge, and made a plan to share about the situation publicly. As part of doing so, I made that Facebook post. I had such a good reaction from my Facebook friends that I decided to make the post publicly available on my Facebook to all, not only my friends. Moreover, I decided to become an activist in talking about my mental condition publicly, as in this essay that you are reading.
What can you do?
So how can you apply this story to your life? Whether you want to come out of the closet to people in your life about some unpleasant news, or more broadly overcome the short-term emotional pain of taking an action that would help you achieve your long-term goals, here are some strategies.
1) Consider the world you want to live in a year from now. What would the world look like if you took the action? What would it look like if you did not?
2) Evaluate all the important costs and benefits of each world. What world looks the most attractive a year from now?
3) Decide on the actions needed to get to that world, make a plan, and take the plunge. Be flexible about revising your plan based on new information such as reactions from others, as I did regarding sharing about my own condition.
What do you think?
- Do you ever experience a reluctance to tell others about something important to you because of your concern about their response? How have you dealt with this problem yourself?
- Is there any area of your life where an orientation to the short term undermines much higher long-term rewards? Do you have any effective strategies for addressing this challenge?
- Do you think the strategy of imagining the world you want to live in a year from now can be helpful in any area of your life? If so, where and how?
___________________________________________________________________________________________________________
Thanks in advance for your feedback and suggestions on optimizing the post!
Feedback on promoting rational thinking about one's career choice to a broad audience
I'd appreciate feedback on optimizing a blog post that promotes rational thinking about one's career choice to a broad audience in a way that's engaging, accessible, and fun to read. I'm aiming to use story-telling as the driver of the narrative, and sprinkling in elements of rational thinking, such as agency and mere-exposure effect, in a strategic way. The target audience is college-age youth and young adults, as you'll see from the narrative. Any suggestions for what works well, and what can be improved would be welcomed! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
____________________________________________________________________________________________________________
Title:
"Stop and Think Before It's Too Late!"
Body:
Back when I was in high school and through the first couple of years in college, I had a clear career goal.
I planned to become a medical doctor.
Why? Looking back at it, my career goal was a result of the encouragement and expectations from my family and friends.
My family immigrated from the Soviet Union when I was 10, and we spent the next few years living in poverty. I remember my parents’ early jobs in America, my dad driving a bread delivery truck and my mom cleaning other people’s houses. We couldn’t afford nice things. I felt so ashamed in front of other kids for not being able to get that latest cool backpack or wear cool clothes – always on the margins, never fitting in. My parents encouraged me to become a medical doctor. They gave up successful professional careers when they moved to the US, and they worked long and hard to regain financial stability. It’s no wonder that they wanted me to have a career that guaranteed a high income, stability, and prestige.
My friends also encouraged me to go into medicine. This was especially so with my best friend in high school, who also wanted to become a medical doctor. He wanted to have a prestigious job and make lots of money, which sounded like a good goal to have and reinforced my parents' advice. In addition, friendly competition was a big part of what my best friend and I did, whether debating complex intellectual questions, trying to best each other on the high school chess team, or playing poker into the wee hours of the morning. Putting in long hours to ace the biochemistry exam and get a high score on the standardized test to get into medical school was just another way for us to show each other who was top dog. I still remember the thrill of finding out that I got the higher score on the standardized test. I had won!
As you can see, it was very easy for me to go along with what my friends and family encouraged me to do.
I was in my last year of college, working through the complicated and expensive process of applying to medical schools, when I came across an essay question that stopped me in my tracks:
“Why do you want to be a medical doctor?”
Why did I want to be a medical doctor? Well, it was what everyone around me wanted me to do. It was what my family wanted me to do. It was what my friends encouraged me to do. It would mean getting a lot of money. It would be a very safe career. It would be prestigious. So it was the right thing for me to do. Wasn't it?
Well, maybe it wasn’t.
I realized that I never really stopped and thought about what I wanted to do with my life. My career is how I would spend much of my time every week for many, many years, but I never considered what kind of work I would actually want to do, not to mention whether I would want to do the work that’s involved in being a medical doctor. As a medical doctor, I would work long and sleepless hours, spend my time around the sick and dying, and hold people’s lives in my hands. Is that what I wanted to do?
There I was, sitting at the keyboard, staring at the blank Word document with that essay question at the top. Why did I want to be a medical doctor? I didn’t have a good answer to that question.
My mind was racing, my thoughts were jumbled. What should I do? I decided to talk to someone I could trust, so I called my girlfriend to help me deal with my mini-life crisis. She was very supportive, as I thought she would be. She told me I shouldn’t do what others thought I should do, but think about what would make me happy. More important than making money, she said, is having a lifestyle you enjoy, and that lifestyle can be had for much less than I might think.
Her words provided a valuable outside perspective for me. By the end of our conversation, I realized that I had no interest in doing the job of a medical doctor, and that if I continued down the path I was on, I would be miserable in my career, doing it just for the money and prestige. I realized that I was on the medical school track because others I trusted - my parents and my friends - had told me it was a good idea so many times that I believed it was true, regardless of whether it was actually a good thing for me to do.
Why did this happen?
I later learned that I found myself in this situation because of a common thinking error which scientists call the mere-exposure effect: our tendency to believe something is true and good just because we are familiar with it, regardless of whether it actually is.
Since I learned about the mere-exposure effect, I am much more suspicious of any beliefs I have that are frequently repeated by others around me, and go the extra mile to evaluate whether they are true and good for me. This means I can gain agency and intentionally take actions that help me toward my long-term goals.
So what happened next?
After my big realization about medical school and the conversation with my girlfriend, I took some time to think about my actual long-term goals. What did I - not someone else - want to do with my life? What kind of a career did I want to have? Where did I want to go?
I was always passionate about history. In grade school I got in trouble for reading history books under my desk when the teacher talked about math. As a teenager, I stayed up until 3am reading books about World War II. Even when I was on the medical school track in college, I double-majored in history and biology, with history as my love and joy. However, I never seriously considered going into history professionally. It's not a field where one can make much money or have great job security.
After considering my options and preferences, I decided that money and security mattered less than a profession that would be genuinely satisfying and meaningful. What’s the point of making a million bucks if I’m miserable doing it, I thought to myself. I chose a long-term goal that I thought would make me happy, as opposed to simply being in line with the expectations of my parents and friends. So I decided to become a history professor.
My decision led to some big challenges with those close to me. My parents were very upset to learn that I no longer wanted to go to medical school. They really tore into me, telling me I would never be well off or have job security. Also, it wasn't easy to tell my friends that I had decided to become a history professor instead of a medical doctor. My best friend even jokingly asked if I was willing to trade grades on the standardized medical school exam, since I wasn't going to use my score. Not to mention how painful it was to accept that I had wasted so much time and effort preparing for medical school only to realize that it was not the right choice for me. I really wish I had realized this earlier, not in my last year of college.
3 steps to prevent this from happening to you:
If you want to avoid finding yourself in a situation like this, here are 3 steps you can take:
1. Stop and think about your life purpose and your long-term goals. Write these down on a piece of paper.
2. Now review your thoughts, and see whether you may be excessively influenced by messages you get from your family, friends, or the media. If so, pay special attention and make sure that these goals are also aligned with what you want for yourself. Answer the following question: if you did not have any of those influences, what would you put down for your own life purpose and long-term goals? Recognize that your life is yours, not theirs, and you should live whatever life you choose for yourself.
3. Review your answers and revise them as needed every 3 months. Avoid being attached to your previous goals. Remember, you change throughout your life, and your goals and preferences change with you. Don’t be afraid to let go of the past, and welcome the current you with arms wide open.
What do you think?
· Do you ever experience pressure to make choices that are not necessarily right for you?
· Have you ever made a big decision, but later realized that it wasn’t in line with your long-term goals?
· Have you ever set aside time to think about your long-term goals? If so, what was your experience?
Rationality promoted by the American Humanist Association
Happy to share that I got to discuss rationality-informed thinking strategies on the American Humanist Association's well-known and popular podcast, the Humanist Hour (here's the link to the interview). Now, this was aimed at secular audiences, so even before the interview the hosts steered me to orient specifically toward what they thought the audience would find valuable. Thus, the interview focused more on secular issues, such as finding meaning and purpose from a science-based perspective. Still, I got to talk about map and territory and other rationality strategies, as well as cognitive biases such as the planning fallacy and sunk costs. So I'd call that a win. I'd appreciate any feedback from you all on how to optimize the way I present rationality-informed strategies in future media appearances.
Making a Rationality-promoting blog post more effective and shareable
I wrote a blog post that popularizes the "false consensus effect" and the debiasing strategy of "imagining the opposite" and "avoiding failing at other minds." Thoughts on where the post works and where it can be improved would be super-helpful for improving our content and my writing style. Especially useful would be feedback on how to make this post more shareable on Facebook and other social media, as we'd like people to be motivated to share these posts with their friends. For example, what would make you more likely to share it? What would make others you know more likely to share it?
For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one's mind more effectively.
This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post.
Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!
Explaining “map and territory” and “fundamental attribution error” to a broad audience
I am working on a blog post that aims to convey the concepts of “map and territory” and the “fundamental attribution error” to a broad audience in an engaging and accessible way. Since many people here focus on these subjects, I think it would be really valuable to get your feedback on what I’ve written.
For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively.
This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post.
Below the line is the draft post itself. After we get your suggestions, we will find an appropriate graphic to illustrate this article and post it on the Intentional Insights website. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!
_______________________________________________________________________________________________________________________
Where Do Our Mental Maps Lead Us Astray?
So imagine you are driving on autopilot, as we all do much of the time. Suddenly the car in front of you cuts you off quite unexpectedly. You slam your brakes and feel scared and indignant. Maybe you flash your lights or honk your horn at the other car. What’s your gut feeling about the other driver? I know my first reaction is that the driver is rude and obnoxious.
Now imagine a different situation. You’re driving on autopilot, minding your own business, and you suddenly realize you need to turn right at the next intersection. You quickly switch lanes and suddenly hear someone behind you honking their horn. You now realize that there was someone in your blind spot and you forgot to check it in the rush to switch lanes. So you cut them off pretty badly. Do you feel that you are a rude driver? The vast majority of us do not. After all, we did not deliberately cut that car off, we just failed to see the driver. Or let’s imagine another situation: say your friend hurt herself and you are rushing her to the emergency room. You are driving aggressively, cutting in front of others. Are you a rude driver? Not generally. You’re merely doing the right thing for the situation.
So why do we give ourselves a pass while attributing an obnoxious status to others? Why does our gut always make us out to be the good guys and other people the bad guys? Clearly, there is a disconnect between our gut reaction and reality here. It turns out that this pattern is not a coincidence. Our immediate gut reaction attributes the behavior of others to their personality and not to the situation in which the behavior occurs. The scientific name for this type of error in thinking and feeling is the fundamental attribution error, also called the correspondence bias. So if we see someone behaving rudely, we immediately and intuitively feel that this person IS rude. We don't automatically stop to consider whether an unusual situation may have caused them to act this way. With the driver example, maybe the person who cut you off did not see you. Or maybe they were driving their friend to the emergency room. But that's not what our automatic reaction tells us. On the other hand, we attribute our own behavior to the situation, not to our personality. Much of the time we feel we have valid explanations for our actions.
Learning about the fundamental attribution error helped me quite a bit. I became less judgmental about others. I realized that the people around me were not nearly as bad as my gut feelings immediately and intuitively assumed. This decreased my stress levels, and I gained more peace and calm. Moreover, I became more humble. I realized that my intuitive self-evaluation is excessively positive and that in reality I am not quite the good guy my gut reaction tells me I am. Additionally, I realized that those around me who are unaware of this thinking and feeling error are more judgmental of me than my intuition suggested. So I am striving to be more mindful and thoughtful about the impression I make on others.
The fundamental attribution error is one of many problems in our natural thinking and feeling patterns. It is certainly very helpful to learn about all of these errors, but it's hard to focus on avoiding each of them in daily life. A more effective strategy for evaluating reality more intentionally, gaining more clarity and thus greater agency, is known as "map and territory." This strategy involves recognizing the difference between the mental map of the world that we have in our heads and the reality of the actual world as it exists – the territory.
For myself, internalizing this concept has not been easy. It’s been painful to realize that my understanding of the world is by definition never perfect, as my map will never match the territory. At the same time, this realization was strangely freeing. It made me recognize that no one is perfect, and that I do not have to strive for perfection in my view of the world. Instead, what would most benefit me is to try to refine my map to make it more accurate. This more intentional approach made me more willing to admit to myself that though I intuitively and emotionally feel something is right, I may be mistaken. At the same time, the concept of map and territory makes me really optimistic, because it provides a constant opportunity to learn and improve my assessment of the situation.
Now, what are the strategies for most effectively learning this information, and internalizing the behaviors and mental patterns that can help you succeed? Well, educational psychology research illustrates that engaging with this information actively, personalizing it to your life, linking it to your goals, and deciding on a plan and specific next steps you will take are the best practices for this purpose. So take the time to answer the questions below to gain long-lasting benefit from reading this article:
- What do you think of the concept of map and territory?
- How can it be used to address the fundamental attribution error?
- Where can the notion of map and territory help you in your life?
- What challenges might arise in applying this concept, and how can these challenges be addressed?
- What plan can you make and what specific steps can you take to internalize these strategies?
Improving Human Rationality Through Cognitive Change (intro)
This is the introduction to a paper I started writing long ago, but have since given up on. The paper was going to be an overview of methods for improving human rationality through cognitive change. Since it contains lots of handy references on rationality, I figured I'd publish it, in case it's helpful to others.
1. Introduction
During the last half-century, cognitive scientists have catalogued dozens of common errors in human judgment and decision-making (Griffin et al. 2012; Gilovich et al. 2002). Stanovich (1999) provides a sobering introduction:
For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they allow prior knowledge to become implicated in deductive reasoning, they systematically underweight information about nonoccurrence when evaluating covariation, and they display numerous other information-processing biases...
The good news is that researchers have also begun to understand the cognitive mechanisms which produce these errors (Kahneman 2011; Stanovich 2010), they have found several "debiasing" techniques that groups or individuals may use to partially avoid or correct these errors (Larrick 2004), and they have discovered that environmental factors can be used to help people to exhibit fewer errors (Thaler and Sunstein 2009; Trout 2009).
This "heuristics and biases" research program teaches us many lessons that, if put into practice, could improve human welfare. Debiasing techniques that improve human rationality may be able to decrease rates of violence caused by ideological extremism (Lilienfeld et al. 2009). Knowledge of human bias can help executives make more profitable decisions (Kahneman et al. 2011). Scientists with improved judgment and decision-making skills ("rationality skills") may be more apt to avoid experimenter bias (Sackett 1979). Understanding the nature of human reasoning can also improve the practice of philosophy (Knobe et al. 2012; Talbot 2009; Bishop and Trout 2004; Muehlhauser 2012), which has too often made false assumptions about how the mind reasons (Weinberg et al. 2001; Lakoff and Johnson 1999; De Paul and Ramsey 1999). Finally, improved rationality could help decision makers to choose better policies, especially in domains likely by their very nature to trigger biased thinking, such as investing (Burnham 2008), military command (Lang 2011; Williams 2010; Janser 2007), intelligence analysis (Heuer 1999), or the study of global catastrophic risks (Yudkowsky 2008a).
But is it possible to improve human rationality? The answer, it seems, is "Yes." Lovallo and Sibony (2010) showed that when organizations worked to reduce the effect of bias on their investment decisions, they achieved returns of 7% or higher. Multiple studies suggest that a simple instruction to "think about alternative hypotheses" can counteract overconfidence, confirmation bias, and anchoring effects, leading to more accurate judgments (Mussweiler et al. 2000; Koehler 1994; Koriat et al. 1980). Merely warning people about biases can decrease their prevalence, at least with regard to framing effects (Cheng and Wu 2010), hindsight bias (Hasher et al. 1981; Reimers and Butler 1992), and the outcome effect (Clarkson et al. 2002). Several other methods have been shown to ameliorate the effects of common human biases (Larrick 2004). Judgment and decision-making appear to be skills that can be learned and improved with practice (Dhami et al. 2012).
In this article, I first explain what I mean by "rationality" as a normative concept. I then review the state of our knowledge concerning the causes of human errors in judgment and decision-making (JDM). The largest section of the article summarizes what we currently know about how to improve human rationality through cognitive change (e.g. "rationality training"). I conclude by assessing the prospects for improving human rationality through cognitive change, and by recommending particular avenues for future research.
[Link] Cognitive bias modification as a treatment for depression
This seems relevant to LessWrong, both as an extreme example of how biases can hurt people and as a possible rationality technique. Depression is presumably at the outer end of some spectrum; to the extent that it's caused by cognitive mistakes, people in the middle of the spectrum should be able to benefit from undoing the same mistakes.
http://www.sciencedaily.com/releases/2011/11/111117202935.htm