Understanding vipassana meditation
Related to: The Trouble With "Good"
Followed by: Vipassana Meditation: Developing Meta-Feeling Skills
I describe a way to understand vipassana meditation (a form of Buddhist meditation) using the concept of affective judgment[1]. Vipassana aims to break the habit of blindly making affective judgments about mental states, and to reverse the damage that habit has done in the past. This habit may be at the root of many problems described on LessWrong, and is likely involved in other mental issues. In the followup post I give details about how to actually practice vipassana.
Don't judge a skill by its specialists
tl;dr: The marginal benefit of learning a skill shouldn't be judged primarily by the performance of people who have practiced it for a long time. Unfortunately, the representativeness heuristic makes people susceptible to exactly this kind of poor judgment.
Beware the following kludgy argument, and warn others against it; I hear it often and have to dispel or refine it:
"Naively, learning «skill type» should help my performance in «domain». But people with «skill type» aren't significantly better at «domain», so learning it is unlikely to help me."
When an obvious mediating factor explains the specialists' poor performance, a skill otherwise judged "inapplicable" may instead be low-hanging fruit for improvement. But people too often toss such skills aside, using biased heuristics as cover for laziness and mental stagnation. Here are some parallel examples to give the general idea (they are just illustrative, and might be wrong):
Weak argument: "Gamers are awkward, so learning games won't help my social skills."
Mediating factor: Lack of practice with face-to-face interaction.
Ideal: Socialite acquires moves-ahead thinking and learns about signalling to help get a great charity off the ground.
Weak argument: "Physicists aren't good at sports, so physics won't help me improve my game."
Mediating factor: Lack of exercise.
Ideal: Athlete or coach learns basic physics and tweaks training to gain a leading edge.
Weak argument: "Mathematicians aren't romantically successful, so math won't help me with dating."
Mediating factor: Aversion to unstructured environments.
Ideal: Serial dater learns basic probability to combat cognitive biases in selecting partners.
Weak argument: "Psychologists are often depressed, so learning psychology won't help me fix my problems."
Mediating factor: Time spent with unhappy people.
Ideal: College student learns basic neuropsychology and restructures study/social routine to better accommodate unconscious brain functions.
Compartmentalization in epistemic and instrumental rationality
Related to: Humans are not automatically strategic, The mystery of the haunted rationalist, Striving to accept, Taking ideas seriously
I argue that many techniques for epistemic rationality, as taught on LW, amount to techniques for reducing compartmentalization. I argue further that when these same techniques are extended to a larger portion of the mind, they boost instrumental, as well as epistemic, rationality.
Imagine trying to design an intelligent mind.
One problem you’d face is designing its goal.
Every time you designed a goal-indicator, the mind would increase action patterns that hit that indicator[1]. Amongst these reinforced actions would be "wireheading patterns" that fooled the indicator but did not hit your intended goal. For example, if your creature gained reward from internal indicators of status, it would boost those indicators -- including by such methods as surrounding itself with people who agree with it, or convincing itself that it understood important matters others had missed. It would be hard-wired to act as though "believing makes it so".
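To make the wireheading failure concrete, here is a minimal toy sketch (my own illustration; the action names, numbers, and learning rule are all invented). The agent is rewarded on changes to an internal status indicator, so whatever inflates the indicator gets reinforced, whether or not true status rises:

```python
import random

# Toy sketch of wireheading: the designer cares about true status,
# but reward is computed from an internal *indicator* of status.
true_status = 0.0
indicator = 0.0

def do_real_work():
    """Genuinely raises status; the indicator tracks it."""
    global true_status, indicator
    true_status += 1.0
    indicator += 1.0

def surround_with_yes_men():
    """Inflates the indicator without raising true status."""
    global indicator
    indicator += 2.0

actions = [do_real_work, surround_with_yes_men]
weights = [1.0, 1.0]  # action propensities, reinforced by indicator gains

for _ in range(1000):
    i = random.choices([0, 1], weights=weights)[0]
    before = indicator
    actions[i]()
    reward = indicator - before  # reward is the indicator's change...
    weights[i] += 0.1 * reward   # ...so whatever fools it gets reinforced

print(f"work propensity: {weights[0]:.1f}, yes-men propensity: {weights[1]:.1f}")
print(f"indicator: {indicator:.0f}, true status: {true_status:.0f}")
```

Run it and the yes-men action dominates: the indicator soars while true status lags far behind, without any explicit "deceive the designer" step in the code.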
A second problem you’d face is propagating evidence. Whenever your creature encounters some new evidence E, you’ll want it to update its model of “events like E”. But how do you tell which events are “like E”? The soup of hypotheses, intuition-fragments, and other pieces of world-model is too large, and its processing too limited, to update each belief after each piece of evidence. Even absent wireheading-driven tendencies to keep rewarding beliefs isolated from threatening evidence, you’ll probably have trouble with accidental compartmentalization (where the creature doesn’t update relevant beliefs simply because your heuristics for what to update were imperfect).
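Accidental compartmentalization is just as easy to produce in a toy model. Here is a minimal sketch (the beliefs, tags, and credences are invented for illustration): updates only propagate to beliefs whose tags overlap the evidence's tags, because touching every belief is too expensive, so a logically relevant belief filed under different tags never gets updated:

```python
# Toy sketch of accidental compartmentalization: evidence propagates only
# to beliefs sharing a tag with it, as a cheap stand-in for "events like E".
beliefs = {
    "pens fall when dropped":        {"tags": {"everyday", "objects"}, "credence": 0.99},
    "astronauts walked on the moon": {"tags": {"space", "history"},    "credence": 0.95},
    "things fall on the moon":       {"tags": {"space", "physics"},    "credence": 0.50},
}

def update_on_evidence(evidence_tags, shift):
    """Cheap heuristic: only touch beliefs sharing a tag with the evidence."""
    for belief in beliefs.values():
        if belief["tags"] & evidence_tags:
            belief["credence"] = min(1.0, belief["credence"] + shift)

# Evidence filed under "everyday" never reaches the "space"-tagged belief,
# even though it is logically relevant to it.
update_on_evidence({"everyday"}, 0.01)
print(beliefs["things fall on the moon"]["credence"])  # still 0.50
```

No belief is actively hidden from any other; the gap falls out of an imperfect similarity heuristic.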
Evolution, AFAICT, faced just these problems. The result is a familiar set of rationality gaps.
Compartmentalization as a passive phenomenon
We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's "clicking", was due to a "failure to compartmentalize". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.
I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:
If a pen is dropped on a moon, will it:
A) Float away
B) Float where it is
C) Fall to the surface of the moon
Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.
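For concreteness (this aside is mine, not part of the articles): the pen's lightness is irrelevant, because gravitational acceleration doesn't depend on the falling object's mass. A quick back-of-the-envelope check:

```python
# Surface gravity is independent of the falling object's mass:
# a = G * M / r^2, so a feather and a pen accelerate identically.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.342e22   # mass of the Moon, kg
r_moon = 1.737e6    # mean radius of the Moon, m

a = G * M_moon / r_moon**2
print(f"lunar surface gravity = {a:.2f} m/s^2")  # about 1.62, roughly 1/6 of Earth's 9.81
```

The pull is weaker than on Earth, but any dropped object still falls.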
A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of them got the question correct, but what was interesting was that those who got it wrong were asked a follow-up question: "You've seen films of the APOLLO astronauts walking around on the Moon, why didn't they fall off?" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, "Because they were wearing heavy boots".
While these articles described totally unscientific surveys, it doesn't seem to me like this would be the result of an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, but then quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point of that? Rather, it seems to indicate that my "compartmentalization" was simply the lack of a connection, and that such connections are much harder to draw than we might assume.