Plant Seeds of Rationality
After his wife died, Elzéard Bouffier decided to cultivate a forest in a desolate, treeless valley. He built small dams along the side of the nearby mountain, thus creating new streams that ran down into the valley. Then, he planted one seed at a time.
After four decades of steady work, the valley throbbed with life. You could hear the buzzing of bees and the tweeting of birds. Thousands of people moved to the valley to enjoy nature at its finest. The government assumed the regrowth was a strange natural phenomenon, and the valley's inhabitants were unaware that their happiness was due to the selfless deeds of one man.
This is The Man Who Planted Trees, a popular inspirational tale.
But it's not just a tale. Abdul Kareem cultivated a forest on a once-desolate stretch of 32 acres along India's West Coast, planting one seed at a time. It took him only twenty years.
Like trees in the ground, rationality does not grow in the mind overnight. Cultivating rationality requires care and persistence, and there are many obstacles. You probably won't bring someone from average (ir)rationality to technical rationality in a fortnight. But you can plant seeds.
You can politely ask rationalist questions when someone says something irrational. Don't forget to smile!
You can write letters to the editor of your local newspaper to correct faulty reasoning.
You can visit random blogs, find an error in reasoning, offer a polite correction, and link back to a few relevant Less Wrong posts.
One person planting seeds of rationality can make a difference, and we can do even better if we organize. An organization called Trees for the Future has helped thousands of families in thousands of villages to plant more than 50 million trees around the world. And when it comes to rationality, we can plant more seeds if we, for example, support the spread of critical thinking classes in schools.
Do you want to collaborate with others to help spread rationality on a mass scale?
You don't even need to figure out how to do it. Just contact leaders who already know what to do, and volunteer your time and energy.
Email the Foundation for Critical Thinking and say, "How can I help?" Email Louie Helm and sign up for the Singularity Institute Volunteer Network.
Change does not happen when people gather to talk about how much they suffer from akrasia. Change happens when lots of individuals organize to make change happen.
Making Reasoning Obviously Locally Correct
x = y
x^2 = x*y
x^2 - y^2 = x*y - y^2
(x+y)(x-y) = y(x-y)
x+y = y
y+y = y
2*y = y
2 = 1
The above is an incorrect "proof" that 2 = 1. Even for those who know where the flaw is, it might seem reasonable to react to the existence of this "proof" by distrusting mathematical reasoning, which might contain similar flaws leading to erroneous results. But done properly, mathematical reasoning does not look like this "proof". It is more explicit, making each step so obviously correct that an incorrect step cannot meet the standard. Let's take a look at what would happen when attempting to present this "proof" according to that virtue:
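One way the explicit presentation might go (the bracketed justifications are my sketch of the standard, following the notation above):

x = y [premise]
x^2 = x*y [multiply both sides by x]
x^2 - y^2 = x*y - y^2 [subtract y^2 from both sides]
(x+y)(x-y) = y(x-y) [factor each side]
x+y = y [divide both sides by (x-y), valid only if x-y is nonzero]

And here the attempt stalls: the premise x = y means x - y = 0, so the justification required for the division step is visibly false. Written explicitly, the flawed step cannot be presented as obviously correct, and the "proof" never reaches 2 = 1.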
The Limits of Curiosity
In principle, I agree with the notion that it is unforgivable not to want to know, not to want to improve your map to match the territory. However, even the most curious person in the world cannot maintain equal curiosity about all things, and even if they could, there are limits on time and energy. In general, the things that inspire curiosity are determined by your personal likes, dislikes, and biases, so it is worth considering carefully where these demarcations fall, lest you deprive yourself of useful information. This is particularly important when it comes to things that inspire not just lack of interest but aversion, or "anti-curiosity."
However, not all information is useful, and it can be worthwhile to encourage a bias that cuts you off from information of little value to you, so as to better allocate your time and energy. It might even be useful to fabricate an "I don't want to know" stance about a certain type of information for the same reason (for example, ceasing to watch television and denying curiosity about what is happening on your favorite shows), but I will not discuss or advocate that here, largely because it's all I can do to hold the line against new time wasters.
The difficulty and danger of this method is that it is best accomplished by not thinking about the things you don't want to be curious about, and that can lead to not even realizing you aren't curious about them, so important things may slip through the cracks. For example, I have never smoked a cigarette, and it requires no effort on my part to not be curious about what it is like. That is such a deeply buried aversion that I might never have consciously noticed that lack of curiosity if I had not been writing this article. In this case, lack of curiosity about smoking is beneficial, but it could just as easily have been something that would be useful for me to be curious about, and I might never have noticed.
Analyzing your own areas of anti-curiosity is extremely difficult, both because your brain rebels at thinking about things it habitually doesn't think about, and because you will likely find little rhyme or reason in which things you are anti-curious about. Questioning anything held deeply enough that you don't think about it is always uncomfortable.
Many such anti-curiosity regions are more a matter of personal preference than anything else. One of mine falls in the area of video games: I've never played them much, and I deliberately cultivate a lack of curiosity about them because I don't believe the enjoyment or value they might give me would outweigh the amount of my precious time they would likely take up if I started. However, I spend more time than perhaps I should reading fanfiction. There are probably people reading this who are just the opposite, and there probably isn't any real difference between the two positions.
There are also many such regions that result from not having much knowledge or skill in an area and, rather than rectifying the knowledge gap, developing a sense of superiority or disdain toward the area. One fairly common topic for this to occur around (at least for women) is the application of makeup, and it is one I had to overcome myself. I didn't know how to put on makeup well as a teenager, hadn't really tried, and looked down on the sorts of girls who came to class after an obvious half-hour beauty regimen. There were all sorts of plausible excuses for my disdain (women shouldn't make themselves into Barbies, intellect is more important, etc.), but the real root reason was that I couldn't do it myself. It took time to overcome that enough to realize the real benefits of having that knowledge (even if I still don't bother on a daily basis), but there *are* real benefits. At the very least, makeup is an expected part of formal or business attire for women in the US, and there are tangible benefits to following such social conventions regardless of how logical they are.
It is more difficult to overcome such an issue if it is rooted in lack of ability rather than lack of knowledge. I have long recognized intellectually the value of recognizing and responding appropriately to social cues, but it doesn't come easily to me, and my frustration often manifests itself in a feeling that I don't want to know. Recognizing that and overcoming it is an ongoing process.
Maintaining a balance on such things is difficult. I know that in areas where I am comfortable, I excel at optimization, but where I am uncomfortable I subscribe strongly to the "If it ain't broke, don't fix it" philosophy. Both approaches have their merits and their place; the challenge is staying aware of which one I am using, and why, so that I don't fall into a trap of willful ignorance.
Even when you have identified an area in which you should reverse course and cultivate curiosity, the battle is not over. You still have to overcome the hurdle of learning about the subject. However, I am not qualified to write an article on overcoming procrastination because I am not nearly successful enough at avoiding it.
Working hurts less than procrastinating, we fear the twinge of starting
When you procrastinate, you're probably not procrastinating because of the pain of working.
How do I know this? Because on a moment-to-moment basis, *being in the middle of doing the work is usually less painful than being in the middle of procrastinating*.
(Bolded because it's true, important, and nearly impossible to get your brain to remember - even though a few moments of reflection should convince you that it's true.)
So what is our brain flinching away from, if not the pain of doing the work?
I think it's flinching away from the pain of the decision to do the work - the momentary, immediate pain of (1) disengaging yourself from the (probably very small) flow of reinforcement that you're getting from reading a random unimportant Internet article, and (2) paying the energy cost for a prefrontal override to exert control of your own behavior and begin working.
Thanks to hyperbolic discounting (i.e., weighting values in inverse proportion to their temporal distance), the instant pain of disengaging from an Internet article and paying a prefrontal override cost can outweigh the slightly more distant pain (minutes in the future, rather than seconds) of continuing to procrastinate, which is, once again, usually more painful than being in the middle of doing the work.
I think that hyperbolic discounting is far more ubiquitous as a failure mode than I once realized, because it's not just for commensurate-seeming tradeoffs like smoking a cigarette in a minute versus dying of lung cancer later.
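To make the tradeoff concrete, here is a minimal sketch in Python. The 1/(1 + k*delay) form is the standard hyperbolic model; the pain magnitudes and the discount rate k are illustrative assumptions, not measurements from the post.

```python
# Hyperbolic discounting sketch: an immediate small pain can outweigh
# a slightly delayed larger pain. All numbers are illustrative.

def discounted(value, delay_seconds, k=1.0):
    """Hyperbolic discount: perceived magnitude falls off as 1/(1 + k*delay)."""
    return value / (1.0 + k * delay_seconds)

# Pain of disengaging plus the prefrontal override: small, but felt right now.
pain_of_starting = discounted(value=2.0, delay_seconds=0)

# Pain of continued procrastination: larger, but minutes away rather than seconds.
pain_of_procrastinating = discounted(value=10.0, delay_seconds=120)

print(pain_of_starting)         # 2.0   (undiminished: it is immediate)
print(pain_of_procrastinating)  # ~0.08 (heavily discounted by the delay)
```

Even though the procrastination is the larger pain in absolute terms, the discounted comparison favors the twinge-avoiding choice, which is exactly the flinch described above.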
Affective Death Spirals
Followup to: The Affect Heuristic, The Halo Effect
Many, many, many are the flaws in human reasoning which lead us to overestimate how well our beloved theory explains the facts. The phlogiston theory of chemistry could explain just about anything, so long as it didn't have to predict it in advance. And the more phenomena you use your favored theory to explain, the truer your favored theory seems—has it not been confirmed by these many observations? As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.
If you know anyone who believes that Belgium secretly controls the US banking system, or that they can use an invisible blue spirit force to detect available parking spaces, that's probably how they got started.
(Just keep an eye out, and you'll observe much that seems to confirm this theory...)
This positive feedback cycle of credulity and confirmation is indeed fearsome, and responsible for much error, both in science and in everyday life.
But it's nothing compared to the death spiral that begins with a charge of positive affect—a thought that feels really good.
A new political system that can save the world. A great leader, strong and noble and wise. An amazing tonic that can cure upset stomachs and cancer.
Heck, why not go for all three? A great cause needs a great leader. A great leader should be able to brew up a magical tonic or two.
Righting a Wrong Question
Followup to: How an Algorithm Feels from the Inside, Dissolving the Question, Wrong Questions
When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.
Compare:
- "Why do I have free will?"
- "Why do I think I have free will?"
The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will. Asking "Why do I have free will?" or "Do I have free will?" sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn't begin to see them with the naked eye. And you're asking "Why is X the case?" where X may not be coherent, let alone the case.
"Why do I think I have free will?", in contrast, is guaranteed answerable. You do, in fact, believe you have free will. This belief seems far more solid and graspable than the ephemerality of free will. And there is, in fact, some nice solid chain of cognitive cause and effect leading up to this belief.
If you've already outgrown free will, choose one of these substitutes:
- "Why does time move forward instead of backward?" versus "Why do I think time moves forward instead of backward?"
- "Why was I born as myself rather than someone else?" versus "Why do I think I was born as myself rather than someone else?"
- "Why am I conscious?" versus "Why do I think I'm conscious?"
- "Why does reality exist?" versus "Why do I think reality exists?"
Self-empathy as a source of "willpower"
tl;dr: Dynamic consistency is a better term for "willpower" because its meaning is robust to changes in how we think consistent behavior actually manages to happen. One can boost consistency by fostering interactions between mutually inconsistent sub-agents to help them better empathize with each other.
Despite the common use of the term, I don't think of my "willpower" as an expendable resource, and mostly it just doesn't feel like one. Let's imagine Bob, who is somewhat overweight, likes to eat cake, and wants to lose weight to be more generically attractive and healthy. Bob often plans not to eat cake, but changes his mind, and then regrets it, and then decides he should indulge himself sometimes, and then decides that's just an excuse-meme, etc. Economists and veteran LessWrong readers know this oscillation between value systems is called dynamic inconsistency (q.v. Wikipedia). We can think of Bob as oscillating between being two different idealized agents living in the same body: a WorthIt agent, and a NotWorthIt agent.
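As a sketch of the mechanics (the payoffs and discount rate are illustrative assumptions, and hyperbolic discounting is one standard model of this oscillation, not a claim about Bob's actual psychology), the same discounting rule can rank the options differently depending on how far away the choice is:

```python
# Dynamic inconsistency sketch: one agent, one discounting rule,
# two different rankings depending on temporal distance.
# All numbers are illustrative.

def discounted(value, delay_days, k=1.0):
    # Hyperbolic discount: perceived value falls off as 1/(1 + k*delay).
    return value / (1.0 + k * delay_days)

def preference(days_until_cake):
    cake = discounted(4.0, days_until_cake)          # small reward at the cake moment
    health = discounted(10.0, days_until_cake + 10)  # larger reward, 10 days later
    return "WorthIt" if cake > health else "NotWorthIt"

print(preference(7))  # NotWorthIt: planning a week ahead, Bob skips the cake
print(preference(0))  # WorthIt: with the cake in front of him, the ranking flips
```

No parameter changes between the two calls; only the vantage point moves, which is why Bob can sincerely hold both positions at different times.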
The feeling of NotWorthIt-Bob's (in)ability to control WorthIt-Bob is likely to be called "(lack of) willpower", at least by NotWorthIt-Bob, and maybe even by WorthIt-Bob. But I find the framing and language of "willpower" fairly unhelpful. Instead, I think NotWorthIt-Bob and WorthIt-Bob just aren't communicating well enough. They try to ignore each other's relevance, but if they could both be present at the same time and actually talk about it, like two people in a healthy relationship, maybe they'd figure something out. I'm talking about self-empathy here, which is the opposite of self-sympathy: relating to emotions of yours that you are not immediately feeling. Haven't you noticed you're better at convincing people to change their minds when you actually empathize with their position during the conversation? The same applies to convincing yourself.
Don't ask "Do I have willpower?", but "Am I a dynamically consistent team?"
Less Wrong Book Club and Study Group
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently somewhat tentative (between levels 0 and 1 to use a previous post's terms), and who are interested in developing deeper knowledge through deliberate practice.
Our intention is to form an online self-study group composed of peers, working with the assistance of a facilitator - but not necessarily of a teacher or of an expert in the topic. Some students may be somewhat more advanced along the path, and able to offer assistance to others.
Our first text will be E.T. Jaynes' Probability Theory: The Logic of Science, which can be found in PDF form (in a slightly less polished version than the book edition) here or here.
We will work through the text in sections, at a pace allowing thorough understanding: expect one new section every week, maybe every other week. A brief summary of the currently discussed section will be published as an update to this post, and simultaneously a comment will open the discussion with a few questions, or the statement of an exercise. Please use ROT13 whenever appropriate in your replies.
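(If ROT13 is new to you: it is a self-inverse letter-substitution cipher, convenient for hiding exercise answers from readers who haven't finished yet. Python's standard library handles it; the spoiler string below is made up for illustration.)

```python
import codecs

spoiler = "The posterior is 0.75"
hidden = codecs.encode(spoiler, "rot_13")  # ROT13 is its own inverse
print(hidden)                              # Gur cbfgrevbe vf 0.75
print(codecs.decode(hidden, "rot_13"))     # The posterior is 0.75
```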
A first comment below collects intentions to participate. Please reply to this comment only if you are genuinely interested in gaining a better understanding of Bayesian probability and willing to commit to spending a few hours per week reading through the assigned section or doing the exercises.
Trying to Try
"No! Try not! Do, or do not. There is no try."
—Yoda
Years ago, I thought this was yet another example of Deep Wisdom that is actually quite stupid. SUCCEED is not a primitive action. You can't just decide to win by choosing hard enough. There is never a plan that works with probability 1.
But Yoda was wiser than I first realized.
The first elementary technique of epistemology—it's not deep, but it's cheap—is to distinguish the quotation from the referent. Talking about snow is not the same as talking about "snow". When I use the word "snow", without quotes, I mean to talk about snow; and when I use the word ""snow"", with quotes, I mean to talk about the word "snow". You have to enter a special mode, the quotation mode, to talk about your beliefs. By default, we just talk about reality.
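A loose programming analogy, if it helps (the variable names and values are mine, purely illustrative):

```python
# Use-mention distinction: the thing versus the word for the thing.
snow = "frozen crystalline water"  # a stand-in for snow itself (the referent)
word = "snow"                      # the word "snow" (the quotation)

print(snow)  # talking about snow
print(word)  # talking about "snow"
```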
If someone says, "I'm going to flip that switch", then by default, they mean they're going to try to flip the switch. They're going to build a plan that promises to lead, by the consequences of its actions, to the goal-state of a flipped switch; and then execute that plan.
No plan succeeds with infinite certainty. So by default, when you talk about setting out to achieve a goal, you do not imply that your plan exactly and perfectly leads to only that possibility. But when you say, "I'm going to flip that switch", you are trying only to flip the switch—not trying to achieve a 97.2% probability of flipping the switch.
So what does it mean when someone says, "I'm going to try to flip that switch?"
Rationalization
Followup to: The Bottom Line, What Evidence Filtered Evidence?
In "The Bottom Line", I presented the dilemma of two boxes only one of which contains a diamond, with various signs and portents as evidence. I dichotomized the curious inquirer and the clever arguer. The curious inquirer writes down all the signs and portents, and processes them, and finally writes down "Therefore, I estimate an 85% probability that box B contains the diamond." The clever arguer works for the highest bidder, and begins by writing, "Therefore, box B contains the diamond", and then selects favorable signs and portents to list on the lines above.
The first procedure is rationality. The second procedure is generally known as "rationalization".
"Rationalization." What a curious term. I would call it a wrong word. You cannot "rationalize" what is not already rational. It is as if "lying" were called "truthization".