
Are Intuitions Good Evidence? [Link]

3 XiXiDu 12 February 2011 02:57PM

Ethics leans especially heavily on appeals to intuition, with a whole school of ethicists (“intuitionists”) maintaining that a person can see the truth of general ethical principles not through reason, but because he “just sees without argument that they are and must be true.” Intuitions are also called upon to rebut ethical theories such as utilitarianism: maximizing overall utility would require you to kill one innocent person if, in so doing, you could harvest her organs and save five people in need of transplants. Such a conclusion is taken as a reductio ad absurdum, requiring utilitarianism to be either abandoned or radically revised – not because the conclusion is logically wrong, but because it strikes nearly everyone as intuitively wrong.

[...]

One central concern for the critics is that a single question can inspire totally different, and mutually contradictory, intuitions in different people. Personally, I’ve often been amazed at how completely I disagree with what a philosopher claims is “intuitively” the case. For example, I disagree with Moore’s intuition that it would be better for a beautiful planet to exist than an ugly one even if there were no one around to see it. I can’t understand what the words “better” and “worse,” let alone “beautiful” and “ugly,” could possibly mean outside the domain of the experiences of conscious beings. I know I’m not alone in my disagreement with Moore, yet I’ve also talked to other well-respected professional philosophers who claim to share his intuition.

Link: rationallyspeaking.blogspot.com/2011/01/are-intuitions-good-evidence.html

I think the article provides some interesting insights into philosophy. It is also food for thought when it comes to metaethics, the psychological diversity of mankind, and the tension between what is intuitively wrong and what is rationally right.

via Luke Muehlhauser

An Abortion Dialogue

10 gwern 12 February 2011 01:20AM

A few years ago, I wrote a little dialogue I imagined between 2 materialists, one of whom was for and one against abortion, centering on the personal identity question. I recently cleaned it up and added a number of references for the biological claims.

You can read it at An Abortion Dialogue.

Early feedback from #lesswrong is that it's a 'nicely enjoyable read' and 'quite good'. I hope everyone likes it, even if it doesn't exactly break new philosophical ground.

What does a calculator mean by "2"?

8 Wei_Dai 07 February 2011 02:49AM

I think my previous argument was at least partly wrong or confused, because I don't really understand what it means for a computation to mean something by a symbol. Here I'll back up and try to figure out what I mean by "mean" first.

Consider a couple of programs. The first one (A) is an arithmetic calculator. It takes a string as input, interprets it as a formula written in decimal notation, and outputs the result of computing that formula. For example, A("9+12") produces "21" as output. The second (B) is a substitution cipher calculator. It "encrypts" its input by substituting each character using a fixed mapping. It so happens that B("9+12") outputs "c6b3".
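To make this concrete, here is a minimal Python sketch of the two programs. The function names are mine, A handles only '+' for brevity, and B's substitution table is hypothetical, chosen just so that the "9+12" example comes out as described:

```python
def calculator_a(formula: str) -> str:
    """A: interpret the input as a decimal arithmetic formula and compute it.
    This sketch only handles '+', which is enough for the example."""
    return str(sum(int(term) for term in formula.split("+")))

# B's fixed character-substitution table (hypothetical; chosen only to
# reproduce the example output "c6b3" for the input "9+12").
SUBSTITUTION = {"9": "c", "+": "6", "1": "b", "2": "3"}

def calculator_b(text: str) -> str:
    """B: 'encrypt' the input by substituting each character via the fixed table."""
    return "".join(SUBSTITUTION.get(ch, ch) for ch in text)

print(calculator_a("9+12"))  # 21
print(calculator_b("9+12"))  # c6b3
```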

What do A and B mean by "2"? Intuitively it seems that by "2", A means the integer (i.e., abstract mathematical object) 2, while for B, "2" doesn't really mean anything; it's just a symbol that it blindly manipulates. But A also just produces its output by manipulating symbols, so why does it seem like it means something by "2"? I think it's because the way A manipulates the symbol "2" corresponds to how the integer 2 "works", whereas the way B manipulates "2" doesn't correspond to anything, except how it manipulates that symbol. We could perhaps say that by "2" B means "the way B manipulates the symbol '2'", but that doesn't seem to buy us anything.

(Similarly, by "+" A means the mathematical operation of addition, whereas B doesn't really mean anything by it. Note that this discussion assumes some version of mathematical platonism. A formalist would probably say that A also doesn't mean anything by "2" and "+" except how it manipulates those symbols, but that seems implausible to me.)

Going back to meta-ethics, I think a central mystery is what we mean by "right" when we're considering moral arguments (by which I don't mean Nesov's technical term "moral arguments", but arguments such as "total utilitarianism is wrong (i.e., not right) because it leads to the following conclusions ..., which are obviously wrong"). If human minds are computations (which I think they almost certainly are), then the way that a human mind processes such arguments can be viewed as an algorithm (which may differ from individual to individual). Suppose we could somehow abstract this algorithm away from the rest of the human, and consider it as, say, a program that when given an input string consisting of a list of moral arguments, thinks them over, comes to some conclusions, and outputs those conclusions in the form of a utility function.

If my understanding is correct, what this algorithm means by "right" depends on the details of how it works. Is it more like calculator A or B? It may be that the way we respond to moral arguments doesn't correspond to anything except how we respond to moral arguments: for example, if it's totally random, or depends in a chaotic fashion on trivial details of the wording or ordering of its input. This would be case B, where "right" can't really be said to mean anything, at least as far as the part of our minds that considers moral arguments is concerned. Or it may be case A, where the way we process "right" corresponds to some abstract mathematical object or some other kind of external object, in which case I think "right" can be said to mean that external object.

Since we don't know which is the case yet, I think we're forced to say that we don't currently know what "right" means.

[LINK] Levels of Ethics

1 WrongBot 07 February 2011 01:41AM

I've resumed blogging For Real This Time™, starting with an introductory overview of the distinction between metaethics and normative ethics.

Should I cross-post it to LessWrong? Should I link or cross-post future blogging about metaethics and other LW-relevant topics? Is it rubbish? Inquiring minds (mostly mine) need to know!

Another Argument Against Eliezer's Meta-Ethics

9 Wei_Dai 05 February 2011 12:54AM

I think I've found a better argument that Eliezer's meta-ethics is wrong. The advantage of this argument is that it doesn't depend on the specifics of Eliezer's notions of extrapolation or coherence.

Eliezer says that when he uses words like "moral", "right", and "should", he's referring to properties of a specific computation. That computation is essentially an idealized version of himself (e.g., with additional resources and safeguards). We can ask: does Idealized Eliezer (IE) make use of words like "moral", "right", and "should"? If so, what does IE mean by them? Does he mean the same things as Base Eliezer (BE)? None of the possible answers are satisfactory, which implies that Eliezer is probably wrong about what he means by those words.

1. IE does not make use of those words. But this is intuitively implausible.

2. IE makes use of those words and means the same things as BE. But this introduces a vicious circle. If IE tries to determine whether "Eliezer should save person X" is true, he will notice that it's true if he thinks it's true, leading to Löb-style problems.

3. IE's meanings for those words are different from BE's. But knowing that, BE ought to conclude that his meta-ethics is wrong and morality doesn't mean what he thinks it means.

Philosophy of Artificial Intelligence (links)

7 lukeprog 25 January 2011 03:39AM

Earlier, I provided an overview of formal epistemology, a field of philosophy highly relevant to the discussions on Less Wrong. Today I do the same for another branch of philosophy: the philosophy of artificial intelligence (here's another overview).

Some debate whether machines can have minds at all. The most famous argument against machines achieving general intelligence comes from Hubert Dreyfus. The most famous argument against the claim that an AI can have mental states is John Searle's Chinese Room argument, to which there are many replies. The argument comes in several variations. Most Less Wrongers have already concluded that yes, machines can have minds. Others debate whether machines can be conscious.

There is much debate on the significance of variations on the Turing Test. There is also lots of interplay between artificial intelligence work and philosophical logic. There is some debate over whether minds are multiply realizable, though most accept that they are. There is some literature on the problem of embodied cognition - human minds can only do certain things because of their long development; can these achievements be replicated in a machine written "from scratch"?

Of greater interest to me and perhaps most Less Wrongers is the ethics of artificial intelligence. Most of the work here so far is on the rights of robots. For Less Wrongers, the more pressing concern is that of creating AIs that behave ethically. (In 2009, robots programmed to cooperate evolved to lie to each other.) Perhaps the most pressing is the need to develop Friendly AI, but as far as I can find, no work on Good's intelligence explosion singularity idea has been published in a major peer-reviewed journal except for David Chalmers' "The Singularity: A Philosophical Analysis" (Journal of Consciousness Studies 17: 7-65). The next closest thing may be something like "On the Morality of Artificial Agents" by Floridi & Sanders.

Perhaps the best overview of the philosophy of artificial intelligence is chapter 26 of Russell & Norvig's Artificial Intelligence: A Modern Approach.

If reductionism is the hammer, what nails are out there?

14 AnnaSalamon 11 December 2010 01:58PM

EDIT: I'm moving this to the Discussion section because people seem to not like it (lack of upvotes) and to find the writing unclear.  I'd love writing advice, if anyone wants to offer some.

Related to: Dissolving the question, Explaining vs explaining away

I review parts of the reductionism sequence, in hopes of setting up for future reduction work.

So, you’ve been building up your reductionism muscles, on LW or elsewhere.  You’re no longer confused about a magical essence of free will; you understand how particular arrangements of atoms can make choices.  You’re no longer confused about a magical essence of personal identity; you understand where the feeling of “you” comes from, and how one could in principle create many copies of you that continued "your" experiences, and how the absence of an irreducible essence doesn’t reduce life’s meaning.

The natural next question is: what other phenomena can you reduce?  What topics are we currently confused about which may yield to the same tools?  And what tools, exactly, do we have for such reductions?

With the goal of paving the way for new reductions, then, let’s make a list of questions that persistently felt like questions about magical essences, including both questions that have been solved, and questions about which we are currently confused.  And let’s also list tools or strategies that assisted in their dissolution.  I made an attempt at such lists below; perhaps you can help me refine them?


Compatibilism in action

-4 rwallace 23 November 2010 05:58PM

A practical albeit fictional application of the philosophical conclusion that free will is compatible with determinism came up today in a discussion about a setting element from the role-playing game Exalted:

(5:31:44 PM) Nekira Sudacne: So during the primordial war, one Yozi got his fetch killed and he reincarnated as Sachervell, He Who Knows The Shape of Things To Come. And he reincarnated asleep. and he has remained asleep. And the other primordials do all in their power to keep him asleep. and he wants to be asleep.

For you see, for as long as he sleeps, he dreams only of the present. should he awaken, he will see the totality of existence, all things past and future exactly as they will happen. quantumly speaking he will lock the universe into a single shape. All things that happen will happen as he sees them happen and there will be no chance for anyone to change it. effectively nullifying chance for change. Even he cannot alter his vision for his vision takes into account all attempts to alter it.

And there's a big debate over whether or not this is a game-ending thing. Essentially, does predestination negate free will or not?

(5:32:17 PM) Nekira Sudacne: and this is important, because one of the requirements for Exaltation to function is free will. if Sachervell is able to negate free will, then Exaltations will cease to function

(5:32:44 PM) Nekira Sudacne: and maddeningly enough the game authors are also on the thread arguing because THEY don't agree where to go with it either :) 

(5:38:02 PM) rw271828: ah, well I happen to know the answer :-)

(5:39:23 PM) rw271828: one of the most important discoveries of 20th-century mathematics is that in general the behavior of a complex system cannot be predicted -- or rather, there is no easier way to predict it than to run it and see what happens. Note in particular:

(5:39:41 PM) rw271828: 1. This is a mathematical fact, so it applies in all possible universes, including Exalted

(5:40:01 PM) rw271828: 2. Humans and other sentient lifeforms are complex systems in the relevant sense

(5:41:33 PM) rw271828: so if you postulate an entity that can actually see the future (as opposed to just extrapolate what is likely to happen unless something intervenes), the only way to do that is for that entity to run a perfect simulation, a complete copy of the universe 

(5:42:50 PM) rw271828:  if you're willing to postulate that, well fine, continue the game, and just note that you are running it in the copy the entity is using to make the prediction - the people in the setting still have free will, it is their actions that determine the future, and thus the result of the prediction ^.^

(5:43:04 PM) Nekira Sudacne: Hah. nice one

PhilPapers survey results now include correlations

6 steven0461 09 November 2010 07:15PM

Now you can see how philosophical positions are correlated to each other and to some demographic variables:

http://philpapers.org/surveys/linear_most.pl

Bayesian Doomsday Argument

-5 DanielLC 17 October 2010 10:14PM

First, if you don't already know it, the frequentist Doomsday Argument:

There's some total number of humans who will ever live. There's a 95% chance that you don't fall within the first 5% of them. There have been about 60 to 120 billion people so far, so there's a 95% chance that the total will be less than 1.2 to 2.4 trillion.

I've modified it to be Bayesian.

First, find the priors:

Do you think it's possible that the total number of sentients that have ever lived or will ever live is less than a googolplex? I'm not asking if you're certain, or even if you think it's likely. Is it more likely than one in infinity? I think it is too. This means that the prior must be normalizable.

If we take P(T=n) ∝ 1/n, where T is the total number of people, it can't be normalized, as 1/1 + 1/2 + 1/3 + ... is an infinite sum. If it decreases faster, it can at least be normalized. As such, we can use 1/n as an upper limit.
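In symbols, this is just the familiar fact about the harmonic series (a restatement of the claim above, nothing new):

```latex
\sum_{n=1}^{\infty} \frac{1}{n} = \infty ,
\qquad \text{while} \qquad
\sum_{n=1}^{\infty} \frac{1}{n^{1+\epsilon}} < \infty \quad \text{for every } \epsilon > 0 ,
```

so any prior that falls off even slightly faster than 1/n can be normalized, and 1/n itself is the borderline case.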

Of course, that's just the limit of the upper tail, so maybe that's not a very good argument. Here's another one:

We're not so much dealing with lives as life-years. A year is a pretty arbitrary measurement, so we'd expect the distribution to look nearly the same if we measured in, say, days instead, and that kind of scale-invariance requires the 1/n distribution.

After that, define:

T = total number of people

U = your number in the birth order (so U = m means you are the m-th person)

P(T=n) ∝ 1/n
P(U=m|T=n) ∝ 1/n  (for n ≥ m)

By Bayes' theorem:
P(T=n|U=m) = P(U=m|T=n) * P(T=n) / P(U=m) ∝ 1/n^2
P(T>n|U=m) = ∫_n^∞ P(T=k|U=m) dk ∝ 1/n

Normalizing with P(T>m|U=m) = 1 (the total must be at least your own number) fixes the constant of proportionality at m, so:

P(T>n|U=m) = m/n

So, the probability of there being more than 1 trillion people in total, given that there have been 100 billion so far, is 1/10.
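As a sanity check, here is a short Python sketch (mine, not the post's) that builds this posterior on a truncated grid and recovers roughly the m/n behavior; the cutoff T_MAX and the scaled-down m are assumptions needed only to keep the computation finite:

```python
import numpy as np

m = 100                 # "you are the m-th person" (scaled down for speed)
T_MAX = 10**6           # hypothetical upper cutoff so the 1/n prior is normalizable

n = np.arange(m, T_MAX + 1, dtype=float)
prior = 1.0 / n                       # P(T = n) proportional to 1/n
likelihood = 1.0 / n                  # P(U = m | T = n) proportional to 1/n, for n >= m
posterior = prior * likelihood
posterior /= posterior.sum()          # normalized P(T = n | U = m)

n_query = 10 * m                      # a total ten times the number so far
print(posterior[n > n_query].sum())   # ~0.099, close to m / n_query = 1/10
```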

There are still a few issues with this. It assumes P(U=m|T=n) ∝ 1/n. This seems to make sense: if there are a million people, there's a one-in-a-million chance of being the 268,547th. But if there are also a trillion sentient animals, the chance of being the m-th person won't change much whether there are a million or a billion people. There are a few ways I can amend this.

First: a = number of sentient animals. P(U=m|T=n) ∝ 1/(a+n). This would make the end result P(T>n|U=m) = (m+a)/(n+a).

Second: Just replace every mention of people with sentients.

Third: Take this as a prediction of the number of sentients who aren't humans who have lived so far.

The first would work well if we can find the number of sentient animals without knowing how many humans there will be. Assuming we don't take the time to terraform every planet we come across, this should work okay.

The second would work well if we did terraform every planet we came across.

The third seems a bit weird. It gives a smaller answer than the other two, and a smaller answer than what you'd expect for animals alone. It does this because it also folds in a Doomsday Argument against animals being sentient. You can work that out separately: just say T is the total number of humans, and U is the total number of animals. Unfortunately, you have to know the total number of humans to work out how many animals are sentient, and vice versa. As such, the combined argument may be more useful. It won't tell you how many of the denizens of planets we colonise will be animals, but I don't think it's actually possible to tell that.

One more thing, you have more information. You have a lifetime of evidence, some of which can be used in these predictions. The lifetime of humanity isn't obvious. We might make it to the heat death of the universe, or we might just kill each other off in a nuclear or biological war in a few decades. We also might be annihilated by a paperclipper somewhere in between. As such, I don't think the evidence that way is very strong.

The evidence for animals is stronger. Emotions aren't exclusive to intelligent creatures, and it doesn't seem animals would have to be that intelligent to be sentient. Even so, how sure can you really be? This is much more subjective than the doomsday part, and the evidence against their sentience is staggering. I think so, anyway; how many animals are there at different levels of intelligence?

Also, there are the priors for the total human population so far. Estimates I've read vary between 60 and 120 billion. I don't think a factor of two really matters much for this discussion.

So, what can we use for these priors?

Another issue is that this is for all of space and time, not just Earth.

Consider that you're the mth person (or sentient) from the lineage of a given planet. l(m) is the number of planets with a lineage of at least m people. N is the total number of people ever, n is the number on the average planet, and p is the number of planets.

l(m)/N = l(m)/(n*p) = (l(m)/p)/n

l(m)/p is the portion of planets whose lineages made it this far. This increases with n, so it weakens my argument, but only to a limited extent; I'm not sure by how much, though. My instinct is that l(m)/p is 50% when m=n, but the mean is not the median. I'd expect a left skew, which would make l(m)/p much lower than that. Even so, if you placed it at 0.01%, this would mean that it's a thousand times less likely at that value. This argument still takes it down orders of magnitude from what you'd otherwise think, so that's not really that significant.

Also, a back-of-the-envelope calculation:

Assume, against all odds, there are a trillion times as many sentient animals as humans, and we happen to be the humans. Also, assume humans only increase their own numbers, and they're at the top percentile for the populations you'd expect. Also, assume 100 billion humans so far.

n = 1,000,000,000,000 * 100,000,000,000 * 100

n = 10^12 * 10^11 * 10^2

n = 10^25

Here's more what I'd expect:

Humanity eventually puts up a satellite to collect solar energy. Once they do one, they might as well do another, until they have a Dyson swarm. Assume 1% efficiency. Also, assume humans still use their whole bodies instead of being a brain in a vat. Finally, assume they get fed with 0.1% efficiency. And assume an 80-year lifetime.

n = solar luminosity * 1% / power of a human * 0.1% * lifetime of Sun / lifetime of human

n = 4 * 10^26 Watts * 0.01 / 100 Watts * 0.001 * 5,000,000,000 years / 80 years

n = 2.5 * 10^27

By the way, the value I used for power of a human is after the inefficiencies of digesting.
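For what it's worth, here is a quick Python check of the two figures above, using only the assumptions stated in the post:

```python
# Extreme "sentient animals" scenario: 1e12 animals per human, 1e11 humans so far,
# and the post's factor of 100 (its top-percentile assumption).
n_animals_case = 1e12 * 1e11 * 1e2
print(f"{n_animals_case:.1e}")    # 1.0e+25

# Dyson-swarm scenario, with the post's efficiencies and lifetimes.
solar_luminosity_watts = 4e26
human_power_watts = 100           # per the post, already net of digestive inefficiency
n_dyson_case = solar_luminosity_watts * 0.01 / human_power_watts * 0.001 * (5e9 / 80)
print(f"{n_dyson_case:.1e}")      # 2.5e+27
```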

Even with assumptions that extreme, we couldn't use this planet to its full potential. Granted, that requires mining pretty much the whole planet, but with a Dyson sphere you can do that in a week, or two years with the efficiency I gave.

It actually works out to about 150 tons of Earth per person. How much do you need to get the elements to make a person?

Incidentally, I rewrote the article, so don't be surprised if some of the comments don't make sense.

A Novice Buddhist's Humble Experiences

12 Will_Newsome 04 October 2010 10:40AM

This is an introduction and description of vipassana meditation [edit: actually, anapanasati, not vipassana as such] more than Buddhism. Nonetheless I hope it serves as some testament to the value of Buddhist thought outside of meditation.

One day I hope more people take up the mantle of the Buddhist Conspiracy, the Bayesanga, and preach the good word of Bayesian Buddhism for all to hear. Until then, though, I'd like to follow in the spirit of fellow Bayesian Buddhist Luke Grecki, and describe some of my personal experiences with anapanasati meditation in the hopes that they'll convince you to check it out.

Nearly everything I've learned about anapanasati/vipassana comes from this excellent guide. It's easy to read and it actually explains the reasoning behind all of the things you're asked to do in vipassana. I heavily encourage you to give it a look. Meditation without instruction didn't lead me anywhere: I spent hours letting my mind get tossed about while I tried in vain to think of nothing. Trying to think of nothing is not a good idea. Vipassana is the practice of mindfulness, and it is recommended that you focus on your breath (focusing on breath is sort of a form of vipassana, and sort of its own thing; I haven't quite figured it out yet). I chose that as my anchor for meditation as recommended. Since reading the above linked guide on meditation, I've meditated a mere 4 times, for a total of 100 minutes. I'm a total novice! So don't confuse my experiences for the wisdom of a venerable teacher. But I think that maybe since you, too, will be a novice, hearing a novice's experiences might be useful. A mere 100 minutes of practice, and I've had many insights that have helped me think more clearly about mindfulness, compassion, self-improvement, the nature of feedback cycles and cascades, relationships between the body and cognition, and other diverse subjects.

The first meditation session was for 10 minutes, the second for 40 minutes, the third for 10 minutes, and the fourth for 40 minutes again. Below are descriptions of the two 40-minute sessions. In the first, I experienced a state of jhana (the second jhana, to be precise; I'm about 70% confident), which was profoundly moving and awe-inspiring. In the second, my mind was a little too chatty to reach a jhana, but I did accidentally have a few insights that I think are important for me to have realized.

The below are very personal experiences, and I don't suspect that they're typical. But I hope that describing my experiences will inspire you to consider mindfulness meditation, or to continue with mindfulness meditation, even if your experiences end up being very different from mine. You might find that some of the 'physiological effects' I list are egregious, but I decided to leave them in, 'cuz they just might be relevant. For instance, I find that, quite surprisingly, my level of mindfulness seems to directly correlate with how numb various parts of my body are! Also, listing what parts of me were in pain at various points might alert future practitioners to what sorts of pain might be expected from sitting still for longer than thirty minutes. The most interesting observations will probably be in the 'insights' sections.


40 minutes, Evening/night, September 17, 2010.

Setting: First lying down on a bed with a pillow over my eyes, then sitting up on the bed on a pillow.

Physiological effects:

  • Before jhana:
  • I lay down on my bed with a pillow over my eyes. I think this is interesting, because many texts I've read emphasize the importance of sitting up straight. I don't think it is necessary. That said, they do seem to know what they're talking about, and I'm very new to this, so perhaps being able to enter a jhana from a position of lying down was something of a fluke.
  • I started concentrating on my breath.
  • My breath alternated between deep and slow and a more natural breath. As time went on and I became more comfortable, my breath became less slow and more normal.
  • I experienced numb facial muscles and random eye muscle flickers. I felt a strong sense of peace, compassion, and wellbeing.
  • The numbness and joy gave way to a full-out jhana experience after about 5 to 10 minutes of meditation.
  • During jhana:
  • Incredibly intense feeling of bliss, compassion, and peace. I involuntarily laughed out loud about five times. I think there must have been some kind of feedback loop going on here. I felt clearheaded.
  • Incredibly intense body high. My whole body was quivering, including especially my eyelids. It was a numbness-like feeling, though perhaps different in that it felt like quivering. It could be that my perception of the feeling had changed.
  • I sat up on a pillow.
  • The inside of my eyelids appeared entirely grey, where most of the time there are neon patterns on a black background. This was rather odd and the most obvious evidence that something really weird was going on with my perception.
  • I tried to sit in a half-lotus position. This was mildly painful, though the pain wasn't bad, if you take my meaning. I kept at it for about two to five minutes, after which I reverted to a normal cross-legged position.
  • I had a strong compulsion to sing out 108 'Om mani padme hum's, which I did, followed by 108 more, counting on my fingers.
  • I then got up and played a few blitz chess games online, still feeling the very strong effects of the meditation. Surprisingly, in the 3 games I played I was a tad subpar. I sorta expected to play amazingly well, though I wasn't sad when it turned out I was wrong. This might be a sign that my feelings of clearheadedness were not entirely justified, but the results aren't very indicative either way. By the third game the effects had mostly worn off, but I still felt very peaceful, compassionate, self-accepting, and joyful. The flittering quivering numbness and energy had mostly worn off.

Insights on breath:

  • I could feel the temperature difference of the air as it was inhaled and exhaled.
  • When I breathed heavily, inhalation was very slightly painful.
  • (A few others that I've forgotten.)
  • (I had the above insights before entering jhana. I think they helped achieve jhana.)

General insights:

  • Previously I'd heard that meditation could lead to feelings of profound bliss, compassion, and even a sort of very strong physical body high. I'd mostly discounted such reports on the grounds that 1) I've done some drugs and didn't expect the effects to be as strong as e.g. cannabis, and 2) it didn't seem clear how just focusing on your breath could cause significant physiological changes of the sort necessary to have such strong effects. After experiencing jhana, I can say I was wrong. However, I still do not understand the neurochemical mechanisms behind my experience, besides postulating the magical hypothesis of 'cascades'.
  • More generally, I realized more fully that the Buddhists really do have a lot of very good and very credible thoughts on mindfulness and rationality. I'd known this for awhile just by studying Buddhist texts and teachings, but feeling vipassana meditation working so strongly and obviously really made it sink in that Buddhism is very worth studying attentively.
  • Cascades and feedback loops in the mind are very, very strong. By becoming more mindful and more accepting, I allowed myself to become even more mindful and accepting, until the feedback loop led me to an incredible altered state. This led me to really believe that the mind is very messy and prone to accidentally allowing causation between two parameters when it'd probably be better to allow just one to push on the other, like happiness causing laughter and not the other way around. Nonetheless, I can use the messiness of my mind to my advantage by thinking the right kinds of thoughts. I got a better sense of this when I meditated again two weeks later.
  • I am naturally rather severely self-critical. Previously I'd considered this, if not a virtue, then at least a necessary evil and a good habit that I should keep: it keeps me from being excessively narcissistic, it reminds me of areas where I can improve, it keeps me from feeling too justified in a dispute, and it allows me to better understand faults others see in me. However, becoming so accepting of both my faults and others' during meditation led me to think that perhaps the disgust I feel for myself and others is a needless emotion, and that simply acknowledging areas of improvement without associating them with negative affect is a much better way to make myself a more awesome person and understand the plights of others. The whole time I'd thought that getting angry at myself was a necessary part of being self-critical, but after meditating I realized that anger isn't a necessary part of realizing faults, just like self-love isn't a necessary part of realizing strengths. Both are affect-laden thoughts where simple awareness will do better. I have a feeling that this insight generalizes to a lot of other problems.
  • If the Buddhist concept of Enlightenment is anything like a constant state of jhana (and this is somewhat implied by accounts of Gautama Buddha's path), then I can definitely see why people would want to aim for it, and I can see how it could be a very real, very effective, and very profound state of mind. It doesn't seem to me as if one has to postulate anything spiritual to think of Enlightenment as an amazing state of being that we should all aim for as rationalists. The magnanimity, compassion, competence, acceptance, and feeling of awesomeness created by the jhanas should be cultivated and drawn upon whenever possible.
  • Because of this, it is very worth researching ways to 'cheat' and induce jhana states without having to undergo careful meditation. Neurofeedback, isochronic beats, and transcranial magnetic stimulation all seem like potential paths towards easy Enlightenment. (The jhanas seem to allow strong clarity of mind where drugs do not; but it is possible that being on drugs as much as possible might also be an interesting path. I'd rather not go down it yet.) 'Course, we might still have to just do it the hard way.


40 minutes, Midnight, October 4, 2010.

Setting: Seated on a pillow on a blanket on the roof of my house in Tucson.

Physiological effects:

  • My left leg (quadriceps) was mildly sore throughout from running/sprinting two days before. At times it went mildly numb, though not painfully so. My left foot also went slightly numb at various points throughout the sitting.
  • My shoulders and facial muscles would tense moderately at various times near the beginning of the sitting and slightly near the end. This normally followed losing track of my breath. My breathing also got heavier and faster during these times. When I focused on my breath again, my shoulders and facial muscles dropped and relaxed, and my breath returned to normal rapidity/intensity.
  • After 10 minutes and at various points after, for roughly 15 seconds each, I could feel certain facial muscles go slightly numb, though not painfully so.
  • Roughly 15 to 20 minutes in (not sure), my left hand went somewhat numb for one to three minutes.
  • Roughly 20 minutes in, my left arm went very numb for roughly two minutes, though I didn't feel any pain. My arm felt 'tight'. The numbness went away rather rapidly, followed immediately by what felt like increased blood flow and thus warmth in the rest of my body.
  • Roughly 25 minutes in I felt mild pain in my lower left back. It mostly went away within a minute or two.
  • After the meditation was over (40 minutes) I stood up and stretched. I felt very peaceful and happy. At first I felt a tad dizzy but soon felt fine.

Insights on breath:

  • Breathing was faster and more intense when I stopped focusing on it and thought of other things. (Sometimes it was slower and more intense. I think intensity was the real key change.) When I refocused on my breath, it naturally became smoother and at a more normal pace.
  • Previously, I'd always thought that air went 'up' my nose when I breathed in. I suddenly realized that air actually entered my nose diagonally, and this whole time I'd thought I'd been breathing 'up' because of confirmation bias. All of a sudden it was obvious that I was breathing in diagonally. But moments later I realized I was actually mostly breathing 'up', and only a little diagonally: my new theory had also been subject to confirmation bias! So I settled on thinking that I did indeed breathe in 'up', but also a little diagonally.
  • I noticed that there are two types of breath. The first is very airy and goes through the top of your nose; it is the one that comes most naturally to me and I imagine most others. The second is throaty and maybe a little stuffy, and it seems as if less air is passing through. I tend to breathe the second way a little more naturally when I try to tuck my chin in against my neck; but I can still breathe in the more airy way as well when I do this, so your mileage may vary.

General insights:

  • Patterns of muscle contractions, patterns of thoughts, and patterns of breathing are all interrelated and can cause feedback loops. Being mindful of my thoughts helps me relax my muscles; relaxing my muscles helps my breathing be more natural; having a natural breath allows me to be more mindful; and so forth. This is good if I am diligent, but bad if I am not; I tend to gravitate towards whatever state I'm in. It takes effort to move between states of mind, but it seems that entropy and novel stimuli tend to push me toward patterns of thought that act as irritants. I believe it is eminently possible that I could cultivate a disposition such that entropy and novel stimuli tend to push me towards mindfulness, compassion, and awesomeness.
  • Confirmation bias is there even at the very low instinctual level of breathing. As soon as you come up with a theory, even direct sensory experience doesn't always change it when it's wrong.
  • Psychic irritants, as they are sometimes called, are constantly mucking around in your brain, causing low level stress, anxiety, guilt, and general discomfort. It seems likely that this was the natural state of the brain for thousands upon thousands of years. I find it very odd that with an hour of focused mindfulness -- all you do is pay wordless attention to your breath! -- you can make a naturally fuzzy and pained human mind into a pure and blissful meditative engine. The difference is striking. It is hard for me to imagine why living in the moment has such a profound effect on cognition.

I'd love for others to share their meditative experiences, or offer feedback for this post. I'm not sure if it should become a top-level post or not. But hopefully LW starts moving in a more Buddhist and effectiveness-oriented direction.

Taken out of original essay for being egregious: I've talked previously of how there seems to be a libertarian/technophile/futurist set of rationalists and a liberal/Buddhist/scientist set of rationalists, and each eyes the other's origin with a cocked eyebrow. Well, I'm from the LBS origin group, and I still think it's the better of the two. We're better at cooperating and we're more okay with praise. But we also seem to lack an unfortunate meme that I've seen in the LTF crowd: uncharitable misinterpretation of what the best ideas of Buddhism really are, even if not every practitioner or teacher is at the standard of the best philosophers of that tradition. Hofstadter made Zen cool, but other easier and probably more useful forms of Buddhism have been left unplundered. I think it has more to do with an instinctual negative reaction towards anything that seems vaguely spiritual or religious. And don't get me wrong, there's a lot of religion and spirituality in Buddhist countries, especially of the Mahayana sort. But the best texts in the Theravada tradition have very good, very deep, and very insightful epistemology and rationality in them, of the kind that wasn't to be found anywhere else in the world for hundreds upon hundreds more years, if at all.
