[Link] Lifehack article promoting rationality-themed ideas, namely long-term orientation, mere-exposure effect, consider-the-alternative, and agency
Here's my article in Lifehack, one of the most prominent self-improvement websites, bringing rationality-style ideas to a broad audience: long-term orientation, the mere-exposure effect, consider-the-alternative, and agency :-)
P.S. Based on feedback from the LessWrong community, I made sure to avoid mentioning LessWrong or rationality in the article.
[Link] Lifehack Article Promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies
Nice to get this list-style article promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies, as part of a series of strategies for growing mentally stronger, published on Lifehack, a very popular self-improvement website. It's part of my broader project of promoting rationality and effective altruism to a broad audience, Intentional Insights.
EDIT: To be clear, based on my exchange with gjm below, the article does not promote these heavily and links more to Intentional Insights. I was excited to be able to get links to LessWrong, Rationality Dojo, and Rationality: From AI to Zombies included in the Lifehack article, as previously editors had cut out such links. I pushed back against them this time, and made a case for including them as a way of growing mentally stronger, and thus was able to get them in.
Supporting Effective Altruism through spreading rationality
So does spreading rationality contribute to Effective Altruism? I certainly think so, as a rationality popularizer and an Effective Altruist myself. My own donations of money and time are focused on my project, Intentional Insights, which tries to spread rationality to a broad audience and thus raise the sanity waterline, including about effective, evidence-based philanthropy. Specifically in relation to EA, in blog posts for Intentional Insights and on our resources page, I make sure to highlight EA as an awesome thing to get involved in.
I'd particularly appreciate feedback on a draft fundraising letter (link here) for Effective Altruists about the way Intentional Insights contributes to improving the world, specifically by getting people more engaged with Effective Altruism. I'd like to hear any thoughts on how to make the letter more effective. You can simply respond in comments, or send an email to gleb@intentionalinsights.org
I'd also like to hear your opinion on the broader issue of how spreading rationality helps improve the world, and the EA movement in particular. Let me share my take. For the first, I think that, as Brian Tomasik shows in this essay, increasing rational thinking is robustly positive for a broad range of short- and long-term future outcomes, and thus our broader work contributes to improving people’s lives overall. For the second, if people learn to think rationally about themselves and their interactions with the world, and to use evidence-based means to evaluate reality and make their decisions, they will apply these methods of thinking to their altruism as well.
What do you think?
Agency is bugs and uncertainty
(Epistemic status: often discussed in bits and pieces, but I haven't seen it summarized in one place anywhere.)
Do you feel that your computer sometimes has a mind of its own? "I have no idea why it is doing that!" Do you feel that, the more you understand and predict someone's actions, the less intelligent and more "mechanical" they appear?
My guess is that, in many cases, agency (as in, the capacity to act and make choices) is a manifestation of the observer's inability to explain and predict the agent's actions. To Omega in Newcomb's problem, humans are just automatons without a hint of agency. To a game player, some NPCs appear stupid and others smart, and the more you play and the more you can predict the NPCs, the less agenty they appear to you.
Note that randomness is not the same as uncertainty: if you can predict that someone or something behaves randomly, that is still a prediction. What I mean is closer to Knightian uncertainty, where one fails to make a useful prediction at all. Something like a tornado may appear to intentionally come after you if you fail to predict where it is going and have trouble escaping.
If you are a user of a computer program and it does not behave as you expect, you often get the feeling of a hostile intelligence opposing you, which occasionally results in aggressive behavior toward it, usually verbal abuse, though occasionally physical, the way we would confront an actual enemy. On the other hand, if you are the programmer who wrote the code in question, you think of the misbehavior as bugs, not intentional hostility, and treat the code by debugging or documenting. Mostly. Sometimes I personalize especially nasty bugs.
I was told by a nurse that this is also how they are taught to treat difficult patients: you don't get upset at someone's misbehavior and instead treat them not as an agent, but more like an algorithm in need of debugging. Parents of young children are also advised to take this approach.
This seems to also apply to self-analysis, though to a lesser degree. If you know yourself well, and can predict what you would do in a specific situation, you may feel that your response is mechanistic or automatic and not agenty or intelligent. Or maybe not. I am not sure. I think if I had the capacity for full introspection, not just the surface level understanding of my thoughts and actions, I would ascribe much less agency to myself. Probably because it would cease to be a useful concept. I wonder if this generalizes to a superintelligence capable of perfect or near perfect self-reflection.
This leads us to the issue of feelings, deliberate choices, free will and ability to consent and take responsibility. These seem to be useful, if illusory, concepts for when you live among your intellectual peers and want to be treated at least as having as much agency as you ascribe to them. But this is a topic for a different post.
Sharing about my mental illness and popularizing future-oriented thinking: feedback appreciated!
I'd appreciate feedback on optimizing a blog post that shares my experience of mental illness and popularizes future-oriented thinking to a broad audience. I'm using story-telling as the driver of the narrative, and sprinkling in elements of rational thinking, such as hyperbolic discounting, mental maps, and future-oriented thinking, in a strategic way. The target audience is college-age youth and young adults. Any suggestions for what works well and what can be improved would be welcome! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
_______________________________________________________________________________________________________________________
Coming Out of the Mental Health Closet
My hand jerked back, as if the computer mouse had turned into a real mouse. I just couldn’t do it. Would they think I am crazy? Would they whisper behind my back? Would they never trust me again? These are the kinds of anxious thoughts that ran through my head as I was about to post on my Facebook profile revealing my mental illness to my Facebook friends, about 6 months after my condition began.
I really wanted to share much earlier about my mental illness, a mood disorder characterized by high anxiety, sudden and extreme fatigue, and panic attacks. It would have felt great to be genuinely authentic with people in my life, and not hide who I am. Plus, I would have been proud to contribute to overcoming the stigma against mental illness in our society, especially since this stigma impacts me on such a personal level.
Ironically, the very stigma against mental illness, combined with my own excessive anxiety response, made it very hard for me to share. I was really anxious about whether friends and acquaintances would turn away from me. I was also very concerned about the impact on my professional career of sharing publicly, due to the stigma in academia against mental illness, including at my workplace, Ohio State, as my colleague and fellow professor described in his article.
Whenever the thought of telling others entered my mind, I felt a wave of anxiety pass through me. My head began to pound, my heart sped up, my breathing became fast and shallow, almost like I was suffocating. If I didn’t catch it in time, the anxiety could lead to a full-blown panic attack, or sudden and extreme fatigue, with my body collapsing in place. Not a pretty picture.
Still, I did eventually start discussing my mental illness with some very close friends who I was very confident would support me. And one conversation really challenged my mental map, in other words how I perceive reality, about sharing my story of mental illness.
My friend told me something that really struck me, namely his perspective on how great it would be if everyone who needed professional help with their mental health actually went to get such help. One of the main obstacles, as research shows, is the stigma against mental illness. We discussed how one of the best ways to deal with such stigma is for well-functioning people with mental illness to come out of the closet about their condition.
Well, I am one of these well-functioning people. I have a great job and do it well, have wonderful relationships, and participate in all sorts of civic activities. The vast majority of people who know me don’t realize I suffer from a mental illness.
That conversation motivated me to think seriously through the roadblocks thrown up by the emotional part of my brain. Previously, I had never sat down for a few minutes and forced myself to think about what good things might happen if I pushed past all the anxiety and stress of telling people in my life about my mental illness.
I realized that I was just flinching away, scared of the short-term pain of rejection and not thinking about the long-term benefits to me and to others of sharing my story. I was falling for a thinking error that scientists call hyperbolic discounting, a reluctance to make short-term sacrifices for much higher long-term rewards.
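As a technical aside for feedback purposes (this formalism is not in the draft itself, and is probably too formal for the target audience): the "thinking error" named here corresponds to the standard hyperbolic discounting model from behavioral economics, which can be sketched as:

```latex
% Present value V of a reward of size A received after delay D,
% with an individual discount rate k (standard hyperbolic form):
\[
V = \frac{A}{1 + kD}
\]
% For comparison, exponential discounting, V = A e^{-kD}, is
% time-consistent. The hyperbolic form discounts near-term delays
% much more steeply than distant ones, which is why an immediate
% small pain (the awkwardness of sharing now) can loom larger than
% a much bigger delayed reward (authenticity and reduced stigma
% a year later).
```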
To combat this problem, I imagined what world I wanted to live in a year from now – one where I shared about this situation now on my Facebook profile, or one where I did not. This approach is based on research showing that future-oriented thinking is very helpful for dealing with thinking errors associated with focusing on the present.
In the world where I would share right now about my condition, I would be very anxious about what people thought of me. Anytime I saw someone who found out for the first time, I would be afraid about the impact on that person’s opinion of me. I would watch her or his behavior closely for signs of distancing from me. And this would not only be my anxiety: I was quite confident that some people would not want to associate with me due to my mental illness. However, over time, these anxious thoughts and close watching would diminish. All the people who knew me previously would find out. All new people who met me would learn about my condition, since I would not keep it a secret. I would make the kind of difference I wanted to make in the world by fighting mental illness stigma in our society, and especially in academia. Just as important, it would be a huge burden off my back to not hide myself and to be authentic with people in my life.
I imagined a second world. I would continue to hide my mental health condition from everyone but a few close friends. I would always have to keep this secret under wraps, and worry about people finding out about it. I would not be making the kind of impact on our society that I knew I could make. And likely, people would find out about it anyway, whether because I chose to share it or in some other way, and I would get all the negative consequences later.
Based on this comparison, I saw that the first world was much more attractive to me. So I decided to take the plunge, and made a plan to share about the situation publicly. As part of doing so, I made that Facebook post. I had such a good reaction from my Facebook friends that I decided to make the post publicly available on my Facebook to all, not only my friends. Moreover, I decided to become an activist in talking about my mental condition publicly, as in this essay that you are reading.
What can you do?
So how can you apply this story to your life? Whether you want to come out of the closet to people in your life about some unpleasant news, or more broadly overcome the short-term emotional pain of taking an action that would help you achieve your long-term goals, here are some strategies.
1) Consider the world where you want to live a year from now. What would the world look like if you took the action? What would it look like if you did not?
2) Evaluate all the important costs and benefits of each world. What world looks the most attractive a year from now?
3) Decide on the actions needed to get to that world, make a plan, and take the plunge. Be flexible about revising your plan based on new information such as reactions from others, as I did regarding sharing about my own condition.
What do you think?
- Do you ever experience a reluctance to tell others about something important to you because of your concern about their response? How have you dealt with this problem yourself?
- Is there any area of your life where an orientation to the short term undermines much higher long-term rewards? Do you have any effective strategies for addressing this challenge?
- Do you think the strategy of imagining the world you want to live in a year from now can be helpful in any area of your life? If so, where and how?
___________________________________________________________________________________________________________
Thanks in advance for your feedback and suggestions on optimizing the post!
Feedback on promoting rational thinking about one's career choice to a broad audience
I'd appreciate feedback on optimizing a blog post that promotes rational thinking about one's career choice to a broad audience in a way that's engaging, accessible, and fun to read. I'm aiming to use story-telling as the driver of the narrative, and sprinkling in elements of rational thinking, such as agency and the mere-exposure effect, in a strategic way. The target audience is college-age youth and young adults, as you'll see from the narrative. Any suggestions for what works well and what can be improved would be welcome! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
____________________________________________________________________________________________________________
Title:
"Stop and Think Before It's Too Late!"
Body:
Back when I was in high school and through the first couple of years in college, I had a clear career goal.
I planned to become a medical doctor.
Why? Looking back at it, my career goal was a result of the encouragement and expectations from my family and friends.
My family immigrated from the Soviet Union when I was 10, and we spent the next few years living in poverty. I remember my parents’ early jobs in America, my dad driving a bread delivery truck and my mom cleaning other people’s houses. We couldn’t afford nice things. I felt so ashamed in front of other kids for not being able to get that latest cool backpack or wear cool clothes – always on the margins, never fitting in. My parents encouraged me to become a medical doctor. They gave up successful professional careers when they moved to the US, and they worked long and hard to regain financial stability. It’s no wonder that they wanted me to have a career that guaranteed a high income, stability, and prestige.
My friends also encouraged me to go into medicine. This was especially so with my best friend in high school, who also wanted to become a medical doctor. He wanted to have a prestigious job and make lots of money, which sounded like a good goal to have and reinforced my parents’ advice. In addition, friendly competition was a big part of what my best friend and I did, whether debating complex intellectual questions, trying to best each other on the high school chess team, or playing poker into the wee hours of the morning. Putting in long hours to ace the biochemistry exam and get a high score on the standardized test to get into medical school was just another way for us to show each other who was top dog. I still remember the thrill of finding out that I got the higher score on the standardized test. I had won!
As you can see, it was very easy for me to go along with what my friends and family encouraged me to do.
I was in my last year of college, working through the complicated and expensive process of applying to medical schools, when I came across an essay question that stopped me in my tracks:
“Why do you want to be a medical doctor?”
Why did I want to be a medical doctor? Well, it’s what everyone around me wanted me to do. It was what my family wanted me to do. It was what my friends encouraged me to do. It would mean getting a lot of money. It would be a very safe career. It would be prestigious. So it was the right thing for me to do. Wasn’t it?
Well, maybe it wasn’t.
I realized that I never really stopped and thought about what I wanted to do with my life. My career is how I would spend much of my time every week for many, many years, but I never considered what kind of work I would actually want to do, not to mention whether I would want to do the work that’s involved in being a medical doctor. As a medical doctor, I would work long and sleepless hours, spend my time around the sick and dying, and hold people’s lives in my hands. Is that what I wanted to do?
There I was, sitting at the keyboard, staring at the blank Word document with that essay question at the top. Why did I want to be a medical doctor? I didn’t have a good answer to that question.
My mind was racing, my thoughts were jumbled. What should I do? I decided to talk to someone I could trust, so I called my girlfriend to help me deal with my mini-life crisis. She was very supportive, as I thought she would be. She told me I shouldn’t do what others thought I should do, but think about what would make me happy. More important than making money, she said, is having a lifestyle you enjoy, and that lifestyle can be had for much less than I might think.
Her words provided a valuable outside perspective for me. By the end of our conversation, I realized that I had no interest in doing the job of a medical doctor. And that if I continued down the path I was on, I would be miserable in my career, doing it just for the money and prestige. I realized that I was on the medical school track because others I trust - my parents and my friends - told me it was a good idea so many times that I believed it was true, regardless of whether it was actually a good thing for me to do.
Why did this happen?
I later learned that I found myself in this situation because of a common thinking error that scientists call the mere-exposure effect: our tendency to believe something is true and good just because we are familiar with it, regardless of whether it actually is.
Since I learned about the mere-exposure effect, I am much more suspicious of any beliefs I have that are frequently repeated by others around me, and go the extra mile to evaluate whether they are true and good for me. This means I can gain agency and intentionally take actions that help me toward my long-term goals.
So what happened next?
After my big realization about medical school and the conversation with my girlfriend, I took some time to think about my actual long-term goals. What did I - not someone else - want to do with my life? What kind of a career did I want to have? Where did I want to go?
I was always passionate about history. In grade school I got in trouble for reading history books under my desk when the teacher talked about math. As a teenager, I stayed up until 3am reading books about World War II. Even when I was on the medical school track in college I double-majored in history and biology, with history my love and joy. However, I never seriously considered going into history professionally. It’s not a field where one can make much money or have great job security.
After considering my options and preferences, I decided that money and security mattered less than a profession that would be genuinely satisfying and meaningful. What’s the point of making a million bucks if I’m miserable doing it, I thought to myself. I chose a long-term goal that I thought would make me happy, as opposed to simply being in line with the expectations of my parents and friends. So I decided to become a history professor.
My decision led to some big challenges with those close to me. My parents were very upset to learn that I no longer wanted to go to medical school. They really tore into me, telling me I would never be well off or have job security. It also wasn’t easy to tell my friends that I had decided to become a history professor instead of a medical doctor. My best friend even jokingly asked if I was willing to trade grades on the standardized medical school exam, since I wasn’t going to use my score. Not to mention how painful it was to accept that I had spent so much time and effort preparing for medical school only to realize that it was not the right choice for me. I really wish I had realized this earlier, not in my last year of college.
3 steps to prevent this from happening to you:
If you want to avoid finding yourself in a situation like this, here are 3 steps you can take:
1. Stop and think about your life purpose and your long-term goals. Write these down on a piece of paper.
2. Now review your thoughts, and see whether you may be excessively influenced by messages you get from your family, friends, or the media. If so, pay special attention and make sure that these goals are also aligned with what you want for yourself. Answer the following question: if you did not have any of those influences, what would you put down for your own life purpose and long-term goals? Recognize that your life is yours, not theirs, and you should live whatever life you choose for yourself.
3. Review your answers and revise them as needed every 3 months. Avoid being attached to your previous goals. Remember, you change throughout your life, and your goals and preferences change with you. Don’t be afraid to let go of the past, and welcome the current you with arms wide open.
What do you think?
· Do you ever experience pressure to make choices that are not necessarily right for you?
· Have you ever made a big decision, but later realized that it wasn’t in line with your long-term goals?
· Have you ever set aside time to think about your long-term goals? If so, what was your experience?
Rationality promoted by the American Humanist Association
Happy to share that I got to discuss rationality-informed thinking strategies on the American Humanist Association's well-known and popular podcast, the Humanist Hour (here's the link to the interview). Now, this was aimed at secular audiences, so even before the interview the hosts steered me to orient specifically toward what they thought the audience would find valuable. Thus, the interview focused more on secular issues, such as finding meaning and purpose from a science-based perspective. Still, I got to talk about map and territory and other rationality strategies, as well as cognitive biases such as the planning fallacy and sunk costs. So I'd call that a win. I'd appreciate any feedback from you all on how to optimize the way I present rationality-informed strategies in future media appearances.
Making a Rationality-promoting blog post more effective and shareable
I wrote a blog post that popularizes the "false consensus effect" and the debiasing strategies of "imagining the opposite" and "avoiding failing at other minds." Thoughts on where the post works and where it falls short would be super-helpful for our content and my writing style. Especially useful would be feedback on how to make this post more shareable on Facebook and other social media, as we'd like people to be motivated to share these posts with their friends. For example, what would make you more likely to share it? What would make others you know more likely to share it?
For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively.

This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!
Explaining “map and territory” and “fundamental attribution error” to a broad audience
I am working on a blog post that aims to convey the concepts of “map and territory” and the “fundamental attribution error” to a broad audience in an engaging and accessible way. Since many people here focus on these subjects, I think it would be really valuable to get your feedback on what I’ve written.
For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively.
This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post.
Below the line is the draft post itself. After we get your suggestions, we will find an appropriate graphic to illustrate this article and post it on the Intentional Insights website. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!
_______________________________________________________________________________________________________________________
Where Do Our Mental Maps Lead Us Astray?
So imagine you are driving on autopilot, as we all do much of the time. Suddenly the car in front of you cuts you off quite unexpectedly. You slam your brakes and feel scared and indignant. Maybe you flash your lights or honk your horn at the other car. What’s your gut feeling about the other driver? I know my first reaction is that the driver is rude and obnoxious.
Now imagine a different situation. You’re driving on autopilot, minding your own business, and you suddenly realize you need to turn right at the next intersection. You quickly switch lanes and suddenly hear someone behind you honking their horn. You now realize that there was someone in your blind spot and you forgot to check it in the rush to switch lanes. So you cut them off pretty badly. Do you feel that you are a rude driver? The vast majority of us do not. After all, we did not deliberately cut that car off, we just failed to see the driver. Or let’s imagine another situation: say your friend hurt herself and you are rushing her to the emergency room. You are driving aggressively, cutting in front of others. Are you a rude driver? Not generally. You’re merely doing the right thing for the situation.
So why do we give ourselves a pass, while attributing an obnoxious status to others? Why does our gut always make us out to be the good guys, and other people the bad guys? Clearly, there is a disconnect between our gut reaction and reality here. It turns out that this pattern is not a coincidence.

Basically, our immediate gut reaction attributes the behavior of others to their personality and not to the situation in which the behavior occurs. The scientific name for this type of error in thinking and feeling is the fundamental attribution error, also called the correspondence bias. So if we see someone behaving rudely, we immediately and intuitively feel that this person IS rude. We don’t automatically stop to consider whether an unusual situation may cause someone to act this way. With the driver example, maybe the person who cut you off did not see you. Or maybe they were driving their friend to the emergency room. But that’s not what our automatic reaction tells us. On the other hand, we attribute our own behavior to the situation, and not our personality. Much of the time we feel like we have valid explanations for our actions.
Learning about the fundamental attribution error helped me quite a bit. I became less judgmental about others. I realized that the people around me were not nearly as bad as my gut feelings immediately and intuitively assumed. This decreased my stress levels, and I gained more peace and calm. Moreover, I became more humble. I realized that my intuitive self-evaluation is excessively positive, and that in reality I am not quite the good guy my gut reaction tells me I am. Additionally, I realized that those around me who are unaware of this thinking and feeling error are more judgmental of me than my intuition suggested. So I am striving to be more mindful and thoughtful about the impression I make on others.
The fundamental attribution error is one of many problems in our natural thinking and feeling patterns. It is certainly very helpful to learn about all of these errors, but it’s hard to focus on avoiding all of them in our daily life. A more effective strategy for evaluating reality more intentionally to have more clarity and thus gain greater agency is known as “map and territory.” This strategy involves recognizing the difference between the mental map of the world that we have in our heads and the reality of the actual world as it exists – the territory.
For myself, internalizing this concept has not been easy. It’s been painful to realize that my understanding of the world is by definition never perfect, as my map will never match the territory. At the same time, this realization was strangely freeing. It made me recognize that no one is perfect, and that I do not have to strive for perfection in my view of the world. Instead, what would most benefit me is to try to refine my map to make it more accurate. This more intentional approach made me more willing to admit to myself that though I intuitively and emotionally feel something is right, I may be mistaken. At the same time, the concept of map and territory makes me really optimistic, because it provides a constant opportunity to learn and improve my assessment of the situation.
Now, what are the strategies for most effectively learning this information, and internalizing the behaviors and mental patterns that can help you succeed? Well, educational psychology research illustrates that engaging with this information actively, personalizing it to your life, linking it to your goals, and deciding on a plan and specific next steps you will take are the best practices for this purpose. So take the time to answer the questions below to gain long-lasting benefit from reading this article:
- What do you think of the concept of map and territory?
- How can it be used to address the fundamental attribution error?
- Where can the notion of map and territory help you in your life?
- What challenges might arise in applying this concept, and how can these challenges be addressed?
- What plan can you make and what specific steps can you take to internalize these strategies?
Is this dark arts, and if so, is it justified?
I'd like the opinion of Less Wrongers on the extent to which it is appropriate to use Dark Arts as a means of promoting rationality.
Other aspiring rationalists in the Columbus, OH Less Wrong meetup and I have started a new nonprofit organization, Intentional Insights, and we're trying to optimize ways to convey rational thinking strategies widely and thus raise the sanity waterline. BTW, we also do some original research, as you can see in this Less Wrong article on "Agency and Life Domains," but our primary focus is promoting rational thinking widely, and all of our research is meant to serve that goal.
To promote rationality as widely as possible, we decided it's appropriate to speak the language of System 1, and use graphics, narrative, metaphors, and orientation toward pragmatic strategies to communicate about rationality to a broad audience. Some examples are our blog posts about gaining agency, about research-based ways to find purpose and meaning, about dual process theory and other blog posts, as well as content such as videos on evaluating reality and on finding meaning and purpose in life.
Our reasoning is that speaking the language of System 1 would help us to reach a broad audience who are currently not much engaged in rationality, but could become engaged if instrumental and epistemic rationality strategies are presented in such a way as to create cognitive ease. We think the ends of promoting rationality justify the means of using such light Dark Arts - although the methods we use do not convey 100% epistemic rationality, we believe the ends of spreading rationality are worthwhile, and that once broad audiences who engage with our content realize the benefits of rationality, they can be oriented to pursue more epistemic accuracy over time. However, some Less Wrongers disagreed with this method of promoting rationality, as you can see in some of the comments on this discussion post introducing the new nonprofit. Some commentators expressed the belief that it is not appropriate to use methods that speak to System 1.
So I wanted to bring up this issue for a broader discussion on Less Wrong, and get a variety of opinions. What are your thoughts about the utility of using light Dark Arts of the type I described above if the goal is to promote rationality - do the ends justify the means? How much Dark Arts, if any, is it appropriate to use to promote rationality?
Edit: After reading the comments, I see that this is not crossing into real Dark Arts territory in the traditional sense after all. I wasn't sure how LessWrong would perceive things, so thanks for your feedback!
Optimizing ways to convey rational thinking strategies to broad audience
What do you think of this post as a way to use graphics, narrative, metaphors, and orientation toward pragmatic strategies to communicate about dual process theory to a broad audience? It's part of the work of our new nonprofit organization, and we're trying to optimize ways to convey rational thinking strategies widely and thus raise the sanity waterline. So advice on how to improve this post, as well as our other posts, with an orientation toward a broad audience, would be helpful. Thanks, all!
Ethicality of Denying Agency
If your 5-year-old seems to have an unhealthy appetite for chocolate, you’d take measures to prevent them from consuming it. Any time they asked you to buy them some, you’d probably refuse, even if they begged. You might make sure that any chocolate in the house is well-hidden and out of their reach. You might even confiscate chocolate they already have, such as forcing them to throw out half their Halloween candy. You’d almost certainly trigger a temper tantrum and considerably worsen their mood. But no one would label you an unrelenting tyrant. Instead, you’d be labeled a good parent.
Your 5-year-old isn’t expected to have the capacity to understand the consequences of their actions, let alone the efficacy to carry out the actions they know are right. That’s why you’re a good parent when you force them to do the right thing, even against their explicit desires.
You know chocolate is a superstimulus and that 5-year-olds have underdeveloped mental executive functions. You have good reasons to believe that your child’s chocolate obsession isn’t caused by their agency, and instead caused by an obsolete evolutionary adaptation. But from your child’s perspective, desiring and eating chocolate is an exercise in agency. They’re just unaware of how their behaviors and desires are suboptimal. So by removing their ability to act upon their explicit desires, you’re denying their agency.
So far, denying agency doesn’t seem so bad. You have good reason to believe your child isn’t capable of acting rationally and you’re only helping them in the long run. But the ethicality gets murky when your assessment of their rationality is questionable.
Imagine you and your mother have an important flight to catch two hours from now. You realize that you have to leave for the airport now in order to make it on time. As you’re about to leave, you recall the two beers you recently consumed. But you feel the alcohol left in your system will barely affect your driving, if at all. The problem is that if your mother found out about your beer consumption, she’d refuse to be your passenger until you were completely sober, as she’s done in the past. You know this would cause you to miss your flight, because she can’t drive and there are no other means of transportation.
A close family member died in a drunk-driving accident several years ago and, ever since, she has overreacted to drinking-and-driving risks. You think her reaction is irrational and reveals inconsistent preferences. For example, one time she was content to be your passenger after you warned her you were sleep-deprived and your driving might be affected. Another time she refused to be your passenger after finding out you had one cocktail that hardly affected you. She’s generally a rational person, but given the recent incident and her past behavior, you deem her incapable of a calibrated reaction. With all this in mind, you contemplate the ethicality of denying her agency by not telling her about your beer drinking.
Similar to the scenario with your 5-year-old, your intention is to ultimately help the person whose agency you’re denying. But in the scenario with your mother, it’s less clear whether you have enough information or are rational enough yourself to assess your mother’s capacity to act within her preferences. Humans are notoriously good at self-deception and rationalizing their actions. Your motivation to catch your flight might be making you irrational about how much alcohol affects your driving. Or maybe the evidence you collected against her rationality is skewed by confirmation bias. If you’re wrong about your assessment, you’d be disrespecting her wishes.
I can modify the scenario to make its ethicality even murkier. Imagine your mother wasn’t catching the plane with you. Instead, you promised to drive her back to her retirement home before your flight. You don’t want to break your promise or miss your flight, so you contemplate not telling her about your beer consumption.
In this modified version, you’re not actually making your mother better off by denying her agency - you’re only benefiting yourself. You just believe her reaction to your beer consumption isn’t calibrated, and it would cause you to miss your flight. Even if you had plenty of evidence to back up your assessment of her rationality, would it be ethical to deny her agency when it’s only benefiting you?
What are some times you’ve denied someone’s agency? What are your justifications for doing so?
Military Rationalities and Irrationalities
In response to the question
"Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general."
(Kaj_Solata)
I posted that my military experience seems effectively designed to increase executive function. Some examples of this from myself and metastable are
- Uniforms: Not having to think about your wardrobe, ever, saves a lot of time, mental effort, and money. Steve Jobs and President Obama are known for also using uniforms specifically for this purpose.
- PT: Daily, routinized exercise, done in a way that very few people are deciding what comes next. Maximum use of daylight hours.
- Med Group and Force Support: Minimized high-risk projects outside of the workplace (paternalistic health care, insurance, and in many cases housing and continuing education).
After a moment's thought it occurred to me that there are some double-edged swords in Military Rationality as well, some of which lead to classic jokes like 'Military Intelligence is an oxymoron.'
Regulations- A select few 'experts' create policies which everyone else is required to follow at all times. Unfortunately these experts are never (never ever) encouraged to consider knock-on effects. Ugh.
Anybody else have insights on the military they want to share here? I feel a couple of good posts on increasing executive function might come out of a discussion on the rationalities and irrationalities of the armed forces.
Thoughts on the January CFAR workshop
So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I'll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn't particularly well-organized.
Feelings and other squishy things
The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain's comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing.
Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don't want to discount improvements in CFAR's curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit down or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other.
Main takeaways
Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn't fully processed and/or gotten drilled into my head and/or seen the implications of.
- Epistemic rationality doesn't have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it's quite valuable to understand what your actual motivations for doing things are.
- Introspection is unreliable. Consequently, you don't have direct access to information like your actual motivations for doing things. However, it's possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X.
- The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you're probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you'll probably be happier, and if you get really good, you can develop aikido-related superpowers.
- You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don't think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things.
- Emotions are data. Your emotional responses to stimuli give you information about what's going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don't want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn't sound trivial: you don't have direct access to information like what stimuli make you angry.)
- Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.
Here are some specific actions I am going to take / have already taken because of what I learned at the workshop.
- Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn't have this limitation.
- Start using a better GTD system. I was previously using RTM, but badly. I was using it exclusively from my iPhone, and when you add something to RTM from an iPhone the due date defaults to "today," whereas from a browser it defaults to "never." Since I had only ever used the iPhone app, I didn't even realize "never" was an option. As a result, due dates got attached to RTM items that didn't actually have due dates, and I became reluctant to add items that really didn't have due dates (e.g. "look at this interesting thing sometime"). That was bad because it meant RTM wasn't collecting a lot of things, and I stopped trusting my own due dates.
- Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don't want to break commitments to.
I'm also planning to take various actions that I'm not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation.
The arc word (TVTropes warning) of this workshop was "agentiness." ("Agentiness" is more funtacular than "agency.") The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty.
Miscellaneous
A distinguishing feature the people I met at the workshop seemed to have in common was the ability to go meta. This is not a skill which was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it's paired with or maybe preceded by meta training, whatever that looks like.
One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.
Overall
Overall, while it's too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops.
The last point reminded me of speculation from the recent LessWrong article Conspiracy Theories as Agency Fictions:
Before thinking about these points and debating them I strongly recommend you read the full article.