Sorting Out Sticky Brains

50 Alicorn 18 January 2010 04:18AM

tl;dr: Just because it doesn't seem like we should be able to have beliefs we acknowledge to be irrational doesn't mean we don't have them.  If this happens to you, here's a tool to help conceptualize and work around that phenomenon.

There's a general feeling that by the time you've acknowledged that some belief you hold is not based on rational evidence, it has already evaporated.  The very act of realizing it's not something you should believe makes it go away.  If that's your experience, I applaud your well-organized mind!  It's serving you well.  This is exactly as it should be.

If only we were all so lucky.

Brains are sticky things.  They will hang onto comfortable beliefs that don't make sense anymore, view the world through familiar filters that should have been discarded long ago, see significances and patterns and illusions even if they're known by the rest of the brain to be irrelevant.  Beliefs should be formed on the basis of sound evidence.  But that's not the only mechanism we have in our skulls to form them.  We're equipped to come by them in other ways, too.  It's been observed[1] that believing contradictions is only bad because it entails believing falsehoods.  If you can't get rid of one belief in a contradiction, and that's the false one, then believing a contradiction is the best you can do, because then at least you have the true belief too.

The mechanism I use to deal with this is to label my beliefs "official" and "unofficial".  My official beliefs have a second-order stamp of approval.  I believe them, and I believe that I should believe them.  Meanwhile, the "unofficial" beliefs are those I can't get rid of, or am not motivated to try really hard to get rid of because they aren't problematic enough to be worth the trouble.  They might or might not outright contradict an official belief, but regardless, I try not to act on them.


Living Luminously

60 Alicorn 17 March 2010 01:17AM

The following posts may be useful background material:  Sorting Out Sticky Brains; Mental Crystallography; Generalizing From One Example

I took the word "luminosity" from "Knowledge and Its Limits" by Timothy Williamson, although I'm using it in a different sense than he did.  (He referred to "being in a position to know" rather than actually knowing, and in his definition, he doesn't quite restrict himself to mental states and events.)  The original ordinary-language sense of "luminous" means "emitting light, especially self-generated light; easily comprehended; clear", which should put the titles into context.

Luminosity, as I'll use the term, is self-awareness.  A luminous mental state is one that you have and know that you have.  It could be an emotion, a belief or alief, a disposition, a quale, a memory - anything that might happen or be stored in your brain.  What's going on in your head?  What you come up with when you ponder that question - assuming, nontrivially, that you are accurate - is what's luminous to you.  Perhaps surprisingly, it's hard for a lot of people to tell.  Even if they can identify the occurrence of individual mental events, they have tremendous difficulty modeling their cognition over time, explaining why it unfolds as it does, or observing ways in which it's changed.  With sufficient luminosity, you can inspect your own experiences, opinions, and stored thoughts.  You can watch them interact, and discern patterns in how they do that.  This lets you predict what you'll think - and in turn, what you'll do - in the future under various possible circumstances.


Let There Be Light

39 Alicorn 17 March 2010 07:35PM

Sequence index: Living Luminously
Previously in sequence: You Are Likely To Be Eaten By A Grue
Next in sequence: The ABC's of Luminosity

You can start from psych studies, personality tests, and feedback from people you know when you're learning about yourself.  Then you can throw out the stuff that sounds off, keep what sounds good, and move on.

You may find your understanding of this post significantly improved if you read the first story from Seven Shiny Stories.

Where do you get your priors, when you start modeling yourself seriously instead of doing it by halfhearted intuition?

Well, one thing's for sure: not with the caliber of introspection you're most likely starting with.  If you've spent any time on this site at all, you know people are riddled with biases and mechanisms for self-deception that systematically confound us about who we are.  ("I'm splendid and brilliant!  The last five hundred times I did non-splendid non-brilliant things were outrageous flukes!")  Humans suck at most things, and obeying the edict "Know thyself!" is not a special case.

The outside view has gotten a bit of a bad rap, but I'm going to defend it - as a jumping-off point, anyway - when I fill our luminosity toolbox.  There's a major body of literature designed to figure out just what the hell happens inside our skulls: it's called psychology, and they have a rather impressive track record.  For instance, learning about heuristics and biases may let you detect them in action in yourself.  I can often tell when I'm about to be subject to the bystander effect ("There is someone sitting in the middle of the road.  Should I call 911?  I mean, she's sitting up and everything and there are non-alarmed people looking at her - but gosh, I probably don't look alarmed either..."), have made some progress in reducing the extent to which I generalize from one example ("How are you not all driven insane by the spatters of oil all over the stove?!"), and am suspicious when I think I might be above average in some way and have no hard data to back it up ("Now I can be confident that I am in fact good at this sort of problem: I answered all of these questions and most people can't, according to someone who has no motivation to lie!").  Now, even if you are a standard psych study subject, of course you aren't going to align with every psychological finding ever.  They don't even align perfectly with each other.  But - controlling for some huge, obvious factors, like if you have a mental illness - it's a good place to start.


Explaining vs. Explaining Away

46 Eliezer_Yudkowsky 17 March 2008 01:59AM

Followup to: Reductionism, Righting a Wrong Question

John Keats's Lamia (1819) surely deserves some kind of award for Most Famously Annoying Poetry:

                    ...Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel's wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine—
Unweave a rainbow...

My usual reply ends with the phrase:  "If we cannot learn to take joy in the merely real, our lives will be empty indeed."  I shall expand on that tomorrow.

Today I have a different point in mind.  Let's just take the lines:

Empty the haunted air, and gnomed mine—
Unweave a rainbow...

Apparently "the mere touch of cold philosophy", i.e., the truth, has destroyed:

  • Haunts in the air
  • Gnomes in the mine
  • Rainbows

Which calls to mind a rather different bit of verse:

One of these things
Is not like the others
One of these things
Doesn't belong


Mind Projection Fallacy

35 Eliezer_Yudkowsky 11 March 2008 12:29AM

Followup to: How an Algorithm Feels From Inside

In the dawn days of science fiction, alien invaders would occasionally kidnap a girl in a torn dress and carry her off for intended ravishing, as lovingly depicted on many ancient magazine covers.  Oddly enough, the aliens never go after men in torn shirts.

Would a non-humanoid alien, with a different evolutionary history and evolutionary psychology, sexually desire a human female?  It seems rather unlikely.  To put it mildly.

People don't make mistakes like that by deliberately reasoning:  "All possible minds are likely to be wired pretty much the same way, therefore a bug-eyed monster will find human females attractive."  Probably the artist did not even think to ask whether an alien perceives human females as attractive.  Instead, a human female in a torn dress is sexy—inherently so, as an intrinsic property.

They who went astray did not think about the alien's evolutionary history; they focused on the woman's torn dress.  If the dress were not torn, the woman would be less sexy; the alien monster doesn't enter into it.


Hand vs. Fingers

25 Eliezer_Yudkowsky 30 March 2008 12:36AM

Followup to: Reductionism, Explaining vs. Explaining Away, Fake Reductionism

Back to our original topic:  Reductionism, which (in case you've forgotten) is part of a sequence on the Mind Projection Fallacy.  There can be emotional problems in accepting reductionism, if you think that things have to be fundamental to be fun.  But this position commits us to never taking joy in anything more complicated than a quark, and so I prefer to reject it.

To review, the reductionist thesis is that we use multi-level models for computational reasons, but physical reality has only a single level.  If this doesn't sound familiar, please reread "Reductionism".


Today I'd like to pose the following conundrum:  When you pick up a cup of water, is it your hand that picks it up?

Most people, of course, go with the naive popular answer:  "Yes."

Recently, however, scientists have made a stunning discovery:  It's not your hand that holds the cup, it's actually your fingers, thumb, and palm.

Yes, I know!  I was shocked too.  But it seems that after scientists measured the forces exerted on the cup by each of your fingers, your thumb, and your palm, they found there was no force left over—so the force exerted by your hand must be zero.


Righting a Wrong Question

68 Eliezer_Yudkowsky 09 March 2008 01:00PM

Followup to: How an Algorithm Feels from the Inside, Dissolving the Question, Wrong Questions

When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.

Compare:

  • "Why do I have free will?"
  • "Why do I think I have free will?"

The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will.  Asking "Why do I have free will?" or "Do I have free will?" sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn't begin to see them with the naked eye.  And you're asking "Why is X the case?" where X may not be coherent, let alone the case.

"Why do I think I have free will?", in contrast, is guaranteed answerable.  You do, in fact, believe you have free will.  This belief seems far more solid and graspable than the ephemerality of free will.  And there is, in fact, some nice solid chain of cognitive cause and effect leading up to this belief.

If you've already outgrown free will, choose one of these substitutes:

  • "Why does time move forward instead of backward?" versus "Why do I think time moves forward instead of backward?"
  • "Why was I born as myself rather than someone else?" versus "Why do I think I was born as myself rather than someone else?"
  • "Why am I conscious?" versus "Why do I think I'm conscious?"
  • "Why does reality exist?" versus "Why do I think reality exists?"


Dissolving the Question

44 Eliezer_Yudkowsky 08 March 2008 03:17AM

Followup to: How an Algorithm Feels From the Inside, Feel the Meaning, Replace the Symbol with the Substance

"If a tree falls in the forest, but no one hears it, does it make a sound?"

I didn't answer that question.  I didn't pick a position, "Yes!" or "No!", and defend it.  Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network.  At the end, I hope, there was no question left—not even the feeling of a question.

Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct:  If you give them a question, they try to answer it.

Like, say, "Do we have free will?"

The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude:  "Yes, we must have free will," or "No, we cannot possibly have free will."

Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places.  So they try to define very precisely what they mean by "free will", and then ask again, "Do we have free will?  Yes or no?"

A philosopher wiser yet may suspect that the confusion about "free will" shows the notion itself is flawed.  So they pursue the Traditional Rationalist course:  They argue that "free will" is inherently self-contradictory, or meaningless because it has no testable consequences.  And then they publish these devastating observations in a prestigious philosophy journal.

But proving that you are confused may not make you feel any less confused.  Proving that a question is meaningless may not help you any more than answering it.


Why Our Kind Can't Cooperate

132 Eliezer_Yudkowsky 20 March 2009 08:37AM

Previously in series: Rationality Verification

From when I was still forced to attend, I remember our synagogue's annual fundraising appeal.  It was a simple enough format, if I recall correctly.  The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraiser was, and then the synagogue's members called out their pledges from their seats.

Straightforward, yes?

Let me tell you about a different annual fundraising appeal.  One that I ran, in fact; during the early years of a nonprofit organization that may not be named.  One difference was that the appeal was conducted over the Internet.  And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd.  (To point in the rough direction of an empirical cluster in personspace.  If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)

I crafted the fundraising appeal with care.  By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years.  The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal.  I sent it out to several mailing lists that covered most of our potential support base.

And almost immediately, people started posting to the mailing lists about why they weren't going to donate.  Some of them raised basic questions about the nonprofit's philosophy and mission.  Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them.  (They didn't volunteer to contact any of those sources themselves; they just had ideas for how we could do it.)

Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"

Hold on to that thought.

Because people were donating.  We started getting donations right away, via Paypal.  We even got congratulatory notes saying how the appeal had finally gotten them to start moving.  A donation of $111.11 was accompanied by a message saying, "I decided to give **** a little bit more.  One more hundred, one more ten, one more single, one more dime, and one more penny.  All may not be for one, but this one is trying to be for all."

But none of those donors posted their agreement to the mailing list.  Not one.


Wrong Questions

34 Eliezer_Yudkowsky 08 March 2008 05:11PM

Followup to: Dissolving the Question, Mysterious Answers to Mysterious Questions

Where the mind cuts against reality's grain, it generates wrong questions—questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.

One good cue that you're dealing with a "wrong question" is when you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question.  When it doesn't even seem possible to answer the question.

Take the Standard Definitional Dispute, for example, about the tree falling in a deserted forest.  Is there any way-the-world-could-be—any state of affairs—that corresponds to the word "sound" really meaning only acoustic vibrations, or really meaning only auditory experiences?

("Why, yes," says the one, "it is the state of affairs where 'sound' means acoustic vibrations."  So Taboo the word 'means', and 'represents', and all similar synonyms, and describe again:  How can the world be, what state of affairs, would make one side right, and the other side wrong?)

Or if that seems too easy, take free will:  What concrete state of affairs, whether in deterministic physics, or in physics with a dice-rolling random component, could ever correspond to having free will?

And if that seems too easy, then ask "Why does anything exist at all?", and then tell me what a satisfactory answer to that question would even look like.

