For millennia, the practice of meditation has been deeply intertwined with many of the world's major and minor religious and spiritual traditions, as a technique aimed at everything from developing magical powers to communing with gods and demons. By contrast, during the last few decades in the West, enthusiasts have promoted meditation (along with a variety of its secularized offshoots) as a good way to cultivate relaxation, creativity, and psychological self-improvement in the context of our hurried and stressful lives. Because of this variegated cultural history, it's no surprise that many people see it either as an exercise that leads to irrationality and madness, or as a harmless but questionably-effective pop science fad---sometimes both at once!
Set against this backdrop, small and somewhat private groups with an interest in meditation have long gathered together in secret to discuss and learn. Not satisfied with the popular dogmas, they got down to figuring out, as best they could, whether meditation really leads to anything that could be called "enlightenment": by experimenting on themselves, comparing notes with others, seeing where it led them, and seeing whether it would repeatably lead others to the same point. Because their subject is taboo, they have labored in the shadows for a very long time; but the modern mass-adoption of the internet has allowed what they know to reach a widening audience. And while they fought for years to discover these things, you now have the opportunity to hear about them merely for the cost of your internet connection---for some of you that may be a blessing, but guard your minds so that it isn't also a curse.
Before I begin, there are three caveats:
1) The perspective I'm going to present is one most closely associated with Buddhism, and you may be inclined to ask "Is this a good description of what Buddhists believe?" or "Is this what Buddhism is really about?" or even shout "This doesn't sound like Buddhism to me!" The relation between this material and Buddhism is an interesting topic (and I'll discuss that in Part 2), but for now, I make no claims whatsoever. This material draws enormous inspiration from particular strains of Buddhism, and one may argue that it is a highly plausible interpretation of what Buddhism is 'really about,' but in the end it stands or falls by itself.
2) What is declassified on the internet is still taboo in many communities. If you walk into your local dharma group / meditation center / Buddhist sangha or what-have-you and start asking about enlightenment or talking concretely about your own meditation experiences and what you think they mean, you may not get the response you'd expect. Having warned you, my conscience will remain clear...even so, don't be a jerk, and please recognize that not everyone who appears to be interested in meditation wants to hear about these things.
3) What follows is my best attempt at writing this information up in a way that suits the LW community. No one besides me is to blame for any shortcomings it has.
Why meditate?
You may take up or have taken up meditation for all kinds of reasons. It may help you to relax, it may help you to think clearly, and it may even help you to fit in with your New-Agey friends and the alternative lifestyle crowd. But one of the best reasons to start meditating is so that you can stop being deluded.
Delusions come in many kinds, and the right medicine for one may be ineffectual for another. Some things called delusions are merely misinformation, and melt away in the light of evidence. Other types of delusions stem from mental illness and can be vanquished by therapy or medication. The common practices of rationalists are well-suited to eliminating delusions that spring from cognitive biases. The sane human mind is generally quite good at representing and then talking about these cases: you can call yourself out on these types of delusions, or failing that, someone else will call you out. If you disagree with their assessment, you at least can expect to understand what's at stake in the disagreement.
But there is another way to be deluded, in which you can't easily understand what it means to be deluded in that way. For the purpose of crafting a simple metaphor, think of beliefs, thoughts, various cognitive representations, etc. as tangible objects in the factory that is your mind, and think of the various cognitive transformations that your mind is capable of as industrial processes that take these objects as inputs and produce others as outputs. So, via one process, these objects can be looked at, via another their properties can be manipulated, or further objects can be generated as a function of the properties of the inputs; ultimately, all objects are either put to further use in-house as inputs to other processes, or marketed to consumers (= behaviors in the external world) at some point. Most processes are simple, but others are sophisticated (second-order) and can assess the ways that different inputs and outputs make the factory's gears grind differently, and adjust operations to compensate. If the outputs are built to spec, all's well; malformed outputs are either rejected on the market or gum up the works when used in-house, depending on what they are and what they're supposed to do.
There are lots of simple ways that factories can run badly: the processes are obsolete, there aren't enough doo-dads available when the machinery requires doo-dads to run, or someone puts sprockets in the chute clearly marked "COGS ONLY". But there are also systematic ways that production can be inefficient.
Suppose that some processes take objects and project their image, via a lens, onto a photosensitive site that controls the specifications of whatever that process outputs. If the lens is sufficiently good, there's no problem. If the lens has severe aberrations...well, it depends. Some processes may not be sensitive to the distortions that the lens imposes, so there is no practical effect. Other processes will output different objects than they otherwise would have due to the lens' distortion. Those malformed objects may be destined for the market, where consumers may or may not be sensitive to the malformation, or they may be inputs to other processes which are not sensitive to the malformation. But for those processes that ARE sensitive to it...if THEIR malformed outputs feed into processes that are also sensitive to it...and THEIR outputs do as well...there's a potential for some serious industrial mishaps.
How would you, the factory owner, assess whether such a problem exists? Perhaps there's a camera that feeds into a CCTV display in the main office, and you could have it point at the objects being generated, inspect them, and make an assessment. If you see that the objects are not built to spec, you can inspect the machinery, and, finding the junky lenses, replace them. Sounds good...unless...the camera was built in-house with a lens that also produces a distorted image. That's a more complicated problem.
If the camera's image looks distorted on the screen, you can always stop the production lines and take a look with your own two eyes, bypassing the camera and its problems.
Unfortunately, there is no homunculus perched on a chair somewhere in your brain, waiting to spring into action. In our metaphor, the camera's image is input for a second-order process, perhaps a rudimentary AI meant to regulate overall production, cobbled together by some sloppy but effective evolutionary process outside the factory. How likely is it for the AI to consider the possibility of a distorted camera lens? Suppose it's so unsophisticated that it does not even understand that the camera's output is a representation of anything, but assumes the output is direct access to the thing-in-itself? (Imagine that it does not even know that there is a camera, and is built in such a way that it receives the camera's output with the tag "WHAT'S GOING ON AT COORDINATES X,Y,Z" and nothing else.) If it has no primitive concept of data representing something, and the process by which it receives data from the camera is completely opaque to it, then it may be quite oblivious to the fact that there even is a problem. Even if it responds to natural language, you can type "MACHINES X AND Y ARE MALFUNCTIONING AND THE CAMERA BY WHICH YOU OBSERVE THEM IS MISLEADING YOU!!!!!1" on the terminal all day with no guarantee of making headway.
To fix the problem, the AI needs to be adaptive enough to conceptualize it in the first place. And depending on the idiosyncrasies of the evolutionary process that built it, and on the degree to which that process selects for AIs that happen to be good at factory control rather than something else, the ways by which it might come to recognize that there is a problem, and form the relevant concepts to deal with it, could well be limited.
Welcome to the human condition.
Building new concepts.
Here's a stylized story about how the AI might manage to figure out that some of the machines it watches over, along with the cameras by which it watches, have lenses that produce distorted images, and these images are leading to production problems.
Suppose there are two cameras it receives data from, and the AI, for whatever reason (perhaps an unintended consequence of something else it's doing) directs them both towards the same machine. Lo and behold, two sources of information tagged "WHAT'S GOING ON AT COORDINATES X,Y,Z" are not identical! How strange. Perhaps from this and some adaptive inference it figures out what representation is and that these data merely represent what's going on at coordinates X,Y,Z. If the camera lenses are only moderately distorting, the AI may point one camera at the other, match the image it sees to an image of a camera in its database, and by doing so, manage to peek into the black box that produces the data by which it monitors the factory. And perhaps now it has an inkling of an idea that, since production has been slower and more problematic than expected, something is wrong, despite the fact that all the data it has access to do not allow it to pinpoint any particular problem: because, as it now knows, the data could be inaccurate.
From here, there are various ways that the AI could discover that something is wrong with the camera lenses. If the distortions aren't uniform over the image the lens produces, it could rotate one camera, un-rotate the output, and see that this is not equivalent to the previous output from an un-rotated camera. Or, knowing the layout and dimensions of the factory, it could aim both cameras at the same location, transform the data from one camera so that in theory it would match the data from the other (given the known positions of the cameras as well as the machines on the factory floor being looked at), and yet find that they did not match. Now it can infer that at least one representation is inaccurate.
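The cross-check the AI performs can be caricatured in a few lines of code. This is a toy sketch of my own, not anything from the factory itself: the two "cameras," their distortion functions, and the scene values are all invented for illustration. The point is only that an observer with no access to the ground truth can still conclude, from a mismatch between two reports of the same scene, that at least one of its instruments distorts.

```python
# Toy illustration: two "cameras" observe the same ground truth through
# unknown (hypothetical) distortion functions. The AI sees only the reports,
# never the scene itself.

def camera_a(scene):
    # hypothetical distortion: slightly exaggerates every value
    return [x * 1.1 for x in scene]

def camera_b(scene):
    # hypothetical distortion: clips everything above 5
    return [min(x, 5) for x in scene]

# the true state at "coordinates X,Y,Z" -- unknown to the AI
scene = [1.0, 4.0, 9.0]

report_a = camera_a(scene)
report_b = camera_b(scene)

# Disagreement between two reports of the same scene proves that at least
# one camera is unreliable -- though not which one, nor in what way.
if report_a != report_b:
    print("At least one camera is unreliable.")
```

Note what the sketch cannot do: it localizes the existence of a distortion without identifying it, which is exactly the AI's predicament before it starts experimenting with lenses.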
Since the factory makes lenses in-house, to test the hypothesis that one of the cameras' lenses is faulty, the AI could replace one camera's lens with a different one (of unknown quality), and depending on how clever it is and how much it knows about optics, try to work out what the lens aberrations at issue here are. If the malfunctioning machines are the ones that produce lenses, there may be multiple rounds of producing lenses with different kinds or degrees of aberrations, inserting them in the cameras, inspecting the machines, modifying the machines, building new lenses based on the modifications...successively getting closer to the point where it has enough data from the various distorted images it's collected to produce a lens of sufficiently high quality to
- Accurately observe the previous defective camera lenses,
- Reflect on how those lenses led to faulty information about the machines and their outputs,
- Accurately observe the malformed outputs of the machines,
- Accurately observe the defective lenses inside of the machines,
- Discover the details by which the defective lenses are leading to malformed outputs,
- Deduce a lens design that will not lead to malformed outputs, and
- Build and install such lenses.
What's wrong with your mind.
Understanding that it's an oversimplification, the preceding metaphor is a very good one for describing how human minds work by default with respect to what meditation is good for. Some cognitive processes within the mind have defects that yield distorted outputs. And the second-order processes that evaluate how the first-order processes work are themselves defective, yielding more distorted outputs. All these distorted outputs are fed back into various cognitive processes, where they muck up effective operation in all kinds of ways.
If I tell you about the defects in your own mind, it is unlikely that you will understand (really understand) what I mean. The first-order processes may be messed up and I could describe that to you, but when you attempt to introspect on their status, the image of those processes that you see is itself distorted. Further, you may not have even developed a concept by which you could understand what the distortion is, or even what it would be like for the outputs of your first- and second-order cognitive processes to be distorted or not distorted in the way I mean. So we would be talking past each other.
This post was inspired by Skatche's. He writes, parenthetically,
"[...in] the Buddhist system...the unenlightened mind can't truly understand reality anyway, so you'd best just shut up and meditate."
This is a common impression people have. It's also more or less true. Not because enlightenment is some wild, indescribable altered state of consciousness which people mistake for a glimpse into 'reality,' but because the unenlightened mind probably can't even begin to conceptualize the problem, and definitely doesn't have the tools with which to understand it with sufficient clarity. 8-year-olds typically can't grok higher mathematics, not because mathematicians are tripping balls, but because 8-year-old minds don't have the tools to grasp what mathematicians talk about, and don't have the concepts to understand what it is that they're not grasping. C'est la vie.
I'm sure you want to hear what the big deal is anyway, so here you go. Your first-order cognitive processes that take experiences as objects are malfunctioning, and output bizarre things such as 'attachment,' 'craving,' and 'hatred.' The second-order cognitive process that monitors them is malfunctioning, and can't see what is so bizarre about those outputs ("aren't these built to spec?"). The same process, when monitoring experiences, outputs bizarre things such as 'Self' (and many variations on it). When it takes itself as an object to see its own inner workings, no output is produced, and a function call to the rational inference module asking what process outputted 'Self' yields a variety of confabulated answers (typically 'the senses' or 'rational deduction,' and claimed to be built to spec). When high-level cognitive processes take 'Self' as an object, the outputs are more bizarre objects: for example, 'existential angst,' 'Cartesian dualism,' and so on. From then on, the lives of these malformed objects are variegated: 'existential angst' as an input for the poetry generation process yields a product roundly rejected by consumers, 'attachment' and 'existential angst' as inputs for the life-goal planning process yields questionable long-term plans, and 'Cartesian dualism' as an input into the philosophy of mind process causes a blue screen of death.
All this happens without you batting an eye, and yet if you reflect in a very general and nonspecific way on whether all these malformed objects are helping or hurting your functioning, and helping or hurting your everyday behavior, you may be able to see that, at least in some ways, they're gumming up the works. But can you see what's wrong with them? Aren't they built to spec? Don't you need them in order to lead a normal life?
You may be quick to say that you have a perfectly good meaning in mind when you say 'Self.' Either 'Self' is a matter of definition and can be defined innocuously, or better yet, describes the behavior of biological systems in a useful and accurate way---carving reality at the joints. So it is not a delusion, and anyone who says otherwise is...well...deluded.
Well, bull. You have at least two concepts, Self and Self*. What you are describing, what carves reality at the joints, is Self*, an output of the rational thought process. Because your lens distorts, Self and Self* look indistinguishable to you. When you make a function call to ask what process outputs [Self or Self*, since they look the same to you], the answer you invariably get is 'rational thought.' "See," you think, "no delusion!" as you happily feed Self into the processes that generate attachment, craving, hatred, existential angst, etc. etc. from it, even when Self* is not an input that would produce those outputs.
The rationally-derived concept Self* that you use doesn't and couldn't play the role in your mental machinery that it seems to. When you were young, before you were mature enough to form the concept Self*, you had attachment, craving, and so on. Today, you still do. How likely is it that Self* is responsible for those things right now? When you feel, deep down in your bones, that you want something--sex, drugs, money, status, friendship, happiness, anything--what is the 'you' who appears to want it? Self*? Knowing what you know about human minds, human development and comparative neurophysiology, is the 'you' who appears to want it the kind of thing that is likely to be the output of a rational process?
Think about it. See if you can begin to form a new concept that better captures what could be going on in your mind.
This metaphor is just illustrative. If it doesn't make sense to you on some level, I know of no argument that will be able to change that. If, for example, you have the intuitive feeling that you are a homunculus experiencing the output of your brain and yet rationally know that that's not true, the tension between the two may be a starting point for you. Or if you've had experiences where your sense of self was altered in radical ways, you may be able to see that there's more to the way you normally conceive of the world in relation to you than first meets the eye.
But it isn't irrational for this not to make sense. If it doesn't make sense, you simply haven't built the right concepts and tools yet to allow it to make sense. Being ill-equipped is not a matter of irrationality. It's a simple problem that you can solve if you're motivated to.
Whichever case best describes you, I claim you can build the concepts and tools you need to understand this through meditation. If you're interested, you can do the experiment yourself, and see what happens.
How meditation works.
Meditation, at least meditation for the goal I've described, can be thought of as a series of attentional and perceptual exercises. Experience has shown that directing your attention and perception in particular ways will help you to begin to see the ways in which your cognitive machinery is distorted. As in the metaphor, you eventually need to build new lenses in order to get a handle on what's going on, but luckily, you don't need to know their specs or retrofit your cognitive machinery; if you do the exercises, neuroplasticity will handle those parts for you.
EDITED FOR CLARITY: There is a range of "attentional and perceptual exercises" (= meditation styles) that are effective, but it is important to note that not all are especially effective, and more importantly, a couple tend to work really well compared to the rest. Common kinds of meditation instructions, such as "relax, follow your breath, and cultivate equanimity towards whatever thoughts arise without getting involved with them", unfortunately do not have an especially good track record among typical meditators, at least with respect to attaining the kind of insight under discussion. Such instructions do seem to work very well for helping people to be relaxed and less overemotional, though. More details in Part 2.
Experience shows that doing the exercises will cause your mind to generate various new lenses with different aberrations (there are various ways to categorize how many different types), and as your mental processes adapt to the output that these aberrations engender, you gain more and more data with which you can piece together the ways in which these distorted outputs have misled you. When you have enough data, your mind is able to produce a lens that is strictly less distorting than everything that came before. Retrofitting everything with this new type of lens makes your life better, and it makes the exercises easier. As you continue the exercises and cycle through new lenses, eventually your mind is able to repeat the feat, and make a lens that is strictly less distorting than in the previous case. On and on.
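Staying inside the metaphor, the cycle just described can be caricatured as an iterative refinement loop. This sketch is purely illustrative and invented for this post: the numbers, the scalar "distortion" measure, and the halving rule are all made up. What it captures is only the structure of the claim: each cycle yields a lens strictly less distorting than any before it, without ever needing a perfect, zero-distortion lens to make progress.

```python
# Purely illustrative caricature of the "successively less distorting lenses"
# cycle. "Distortion" is an invented scalar; the halving rule is arbitrary.

def refine(distortion):
    # hypothetical: each cycle of practice halves the remaining distortion
    return distortion / 2

history = [1.0]  # start with the default, heavily distorting lens
for cycle in range(5):
    history.append(refine(history[-1]))

# every lens in the sequence is strictly less distorting than its predecessor
assert all(later < earlier for earlier, later in zip(history, history[1:]))
print(history)  # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

The monotone-improvement property, not the particular rate, is the point: partial enlightenment corresponds to the first strict improvement, not to reaching zero.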
The first time you generate and use a lens that is strictly less distorting, you are partially enlightened.
When you have generated and installed a lens that does not distort in ways that lead to attachment, you are fully enlightened.
These results do not depend on any effort to maintain, and they are not altered states of consciousness. The goal of this type of meditation is not to produce any particular mental state, but to fix your cognitive machinery. When it's fixed, it's fixed. Experience has shown that no maintenance is required.
Unlike what popular mythology says, this process need not take a lifetime, or half of a lifetime, and definitely doesn't require that you live on a mountaintop in the Himalayas. Bearing in mind that individual variation exists, contemporary methods can yield deep and powerful cognitive upgrades along these lines within a few years. Many people are able to reach what is considered to be the first partial stage of enlightenment within months, in the context of a dedicated and regular practice during their daily life, and this is not considered especially atypical.
Benefits.
The reasons you might pursue this kind of mental upgrade are individual---just as in every other case. I don't have THE REASON that this is important for you to do. But here is a selection that you as an individual might find compelling.
- Be happier; function better.
When you begin to cut off the automatic generation of attachment, craving, hatred, etc., those things get used less as inputs to other mental processes: your life will likely become a more fun, more carefree, and more worthwhile experience. As you begin to cut off the generation of the concept Self by second-order processes, it gets used less as an input to higher-level cognitive processes: you will think more clearly about existential issues of all kinds.
- Know what your goals would be if you were more insightful.
It's easy to think about what you want, and build a plan for your life around what you think you want. But your ability to know what you want is curtailed by the fact that you have delusions about what 'you' means. If you begin to get rid of the delusions by beginning to cut off the flow of Self into various processes, you will be in a better position to decide on how to live your life. Imagine you could get a pre-Friendly AI glimpse into CEV; might that change your current goals? What would a glimpse of your own, private extrapolated volition be worth to you? What would you do to get such a glimpse?
- Be more rational.
As you do the attentional and perceptual exercises involved in meditation, you develop a less and less distorted view of your own mental processes. This eventually allows you to fix any processes that are systematically malfunctioning due to the want of non-distorting components. But as a side effect, it also lets you see an enormous selection of what's going on in your mind: lots of things that you might not have previously noticed, or that you would have previously called 'subconscious,' may become effortlessly clear to you.
Suppose you are biased against non-degreed people, and one day, a high school dropout tells you something that you currently disbelieve. If the thought "he doesn't know anything, he has no education!" arose in your mind, you might not normally even notice it, or you might delusively attribute the thought to 'you' and then be invested in acting according to it and defending it (since it's 'yours,' or since 'you' thought it). As your mental processes snap into focus, it's much easier to see the thought, and regard it as 'a thought' rather than 'my thought' or 'my belief' or 'mental content I generated'. When your mind can't sweep it under the carpet and yet you have no special attachment to it, it is easy to face it explicitly and decide how to work with it. If you already have the motivation, accounting for and dealing with your own cognitive biases is likely to become much simpler and easier than before.
- Understand the origin of delusive intuitions.
One example. Many people have the intuition that they have free will, i.e. that they are homunculi controlling their bodies and minds in a way that is outside the normal laws of physics. Even those of us who know better may still have that feeling. Meditation can ultimately eliminate that feeling. Undercutting the intuition and seeing where that leaves the rational case for free will, from a first-person perspective, may be very informative for understanding other cases in which your intuitions are misleading you by corrupting your rational thought.
- Understand the limits of your own conceptual apparatus.
The space of potential minds is huge; the space of human-like minds is a tiny subset of it. You may believe that your human mind cannot really conceive of what other potential minds would be like if they were sufficiently different, but do you know that in your bones? The result of meditation is a mind that is well within the space of human-like minds...but you will not be able to imagine what having that kind of mind is like until you have it. That puts potential alien minds and AIs, or rather, your ability to imagine them with any sort of accuracy, into perspective.
Risks.
It is extremely important to realize that the process of replacing the lenses of your mental processes can lead to intense mental turmoil, potentially severe enough that it impacts your ability to function effectively for weeks, months, or even years. This does not happen to everyone, and it need not be severe when it does happen, but you should consider the degree to which you're committed to this before you start. I would recommend not starting at all unless you are willing to see it through and not give up just because it seems to have made things temporarily suck: experience has shown that giving up while things suck is a great way to make things suck for a long time. (And experience has shown that commitment helps to avoid this problem.)
It is also important to realize that this is an experiment in self-transformation. Past a certain point, there is no going back, and no undo button. As a matter of informed consent, you need to know that the style of meditation that leads to the goal I've described can and will change the functioning of your brain, permanently. Lots of people have found these changes worthwhile. That doesn't mean there's no need to think about what you're about to do before you do it.
More information forthcoming in Part 2. (Perhaps next week.)
Addendum.
I have made all kinds of claims in this post, some of which may be seen as wild, reckless, unfounded, unargued-for, and so on. Certainly I'm not doing very much hedging and qualification. The really remarkable thing that communities interested in this kind of human development have discovered is that people who work at meditation long enough will reliably and regularly say the same kinds of things, in the same order; and people who have stumbled onto the exercises that lead to this kind of development outside of these communities will also, reliably and regularly, say the same kinds of things (although some translation between cultural frameworks may have to go on first). Further, I have not known anyone to suffer from a deficit in rationality or in the ability to observe and assess themselves by practicing meditation in the way that leads to this kind of development. So my working hypothesis is:
- Certain styles of meditation lead to bona fide insight, and there is a consensus on what that insight is among people who meditate in those styles; anyone with the same cultural background (e.g. contemporary Westerners) who takes up meditation is likely to experience that insight and describe it in a way that is broadly similar to everyone else's description, whether or not they are primed to do so by the discourse of the communities of which they are members.
I hope that exposing readers of Less Wrong to this information will help me to confirm or deny this hypothesis. More importantly, I'm sharing this information because I hope that learning about it will ultimately help people to benefit personally from it, as I have.
Also, please note that my metaphor of a factory is just a metaphor, intended to be intuitive and helpful, not intended to be anything like a precise and thorough description of how minds work or how meditation changes how minds work.
Finally, this was written as a blog post, not a final draft of a formal article. Criticisms related to tone and style are especially welcomed. And apologies in advance for the length of the piece, as well as for any formatting issues it has (I have little experience with effective formatting for blogs).