Related on OB: Priming and Contamination
Related on LW: When Truth Isn't Enough
When I was a kid, I wanted to be like Mr. Spock on Star Trek. He was smart, he could kick ass, and he usually saved the day while Kirk was too busy pontificating or womanizing.
And since Spock loved logic, I tried to learn something about it myself. But by the time I was 13 or 14 and had grasped the basics of Boolean algebra (from borrowed computer science textbooks) and propositional logic (through a game of "Wff'n'Proof" I picked up at a garage sale), I was getting a little dissatisfied with it.
Spock had made it seem like logic was some sort of "formidable" thing, with which you could do all kinds of awesomeness. But real logic didn't seem to work the same way.
I mean, sure, it was neat that you could apply all these algebraic transforms and dissect things in interesting ways, but none of it seemed to go anywhere.
Logic didn't say, "thou shalt perform this sequence of transformations and thereby produce an Answer". Instead, it said something more like, "do whatever you want, as long as it's well-formed"... and left the very real question of what it was you wanted as an exercise for the logician.
And it was at that point that I realized something that Spock hadn't mentioned (yet): that logic was only the beginning of wisdom, not the end.
Of course, I didn't phrase it exactly that way myself... but I did see that logic could only be used to check things... not to generate them. The ideas to be checked still had to come from somewhere.
But where?
When I was 17, in a college philosophy class, I learned another limitation of logic -- or more precisely, of the brains with which we do logic.
Because, although I'd already learned to work with formalisms -- i.e., meaningless symbols -- working with actual syllogisms about Socrates and mortals and whatnot turned out to be a good bit harder.
We were supposed to determine the validity of the syllogisms, but sometimes an invalid syllogism had a true conclusion, while a valid syllogism might have a false one. (For example, "all birds have wings; penguins have wings; therefore penguins are birds" is invalid despite its true conclusion, while "all birds can fly; penguins are birds; therefore penguins can fly" is perfectly valid despite its false one.) And until I learned to mentally substitute symbols like A and B for the included facts, I found my brain automatically jumping to the wrong conclusions about validity.
So "logic", then -- or rationality -- seemed to require three things to actually work:
- A way to generate possibly-useful ideas
- A way to check the logical validity -- not truth! -- of those ideas, and
- A way to test those ideas against experience.
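Just to make that division of labor concrete, here's a toy sketch of the three-part loop in Python. Everything in it -- the candidate rules, the data, the function names -- is invented purely for illustration; the point is only the shape: generating ideas, checking their validity, and testing them against experience are three separate jobs.

```python
# Toy sketch of the three-part loop: generate candidate ideas, check
# that they're well-formed, then test them against experience.
# The "ideas" here are just guesses at k in a rule y = k * x.

observations = [(1, 2), (2, 4), (3, 6)]    # "experience" to test against

def generate_candidates():
    """Part 1: produce possibly-useful ideas. Logic is silent on how."""
    return range(-5, 6)

def is_well_formed(k):
    """Part 2: check validity -- not truth! -- of the idea."""
    return isinstance(k, int)              # trivially satisfied here

def survives_experience(k):
    """Part 3: test the idea against observation."""
    return all(y == k * x for x, y in observations)

print([k for k in generate_candidates()
       if is_well_formed(k) and survives_experience(k)])   # -> [2]
```

Notice that the sketch simply assumes a candidate generator exists. Logic supplies parts 2 and 3; part 1 has to come from somewhere else.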
But it wasn't until my late thirties and early forties -- just in the last couple of years -- that I realized there was a fourth piece, implicit in the first.
And Spock, ironically enough, is the reason I found it so difficult to grasp that last, vital piece:
That to generate possibly-useful ideas in the first place, you must have some notion of what "useful" is!
And that for humans at least, "useful" can only be defined emotionally.
Sure, Spock was supposed to be immune to emotion -- even though in retrospect, everything he does is clearly motivated by emotion, whether it's his obvious love for Kirk, or his desire to be accepted as a "real" rationalis... er, Vulcan. (In other words, he disdains emotion merely because that's what he's supposed to do, not because he doesn't actually have any.)
And although this is all still fictional evidence, one might compare Spock's version of "unemotional" with the character of the undead assassin Kai, from a different science-fiction series.
Kai, played by Michael McManus, shows us a slightly more accurate version of what true emotionlessness might be like: complete and utter apathy.
Kai has no goals or cares of his own, frequently making such comments as "the dead do not want anything" and "the dead do not have opinions". He mostly does as he's asked, but he just doesn't care about anything one way or another.
(He'll sleep in his freezer or go on a killing spree, it's all the same to him, though he'll probably tell you the likely consequences of whatever action you see fit to request of him.)
And scientifically speaking, that's a lot closer to what you actually get, if you don't have any emotions.
Not a "formidable rationalist" and idealist, like Spock or Eliezer...
But an apathetic zombie, like Kai.
As Temple Grandin puts it (in her book, Animals in Translation):
Everyone uses emotion to make decisions. People with brain damage to their emotional systems have a hard time making any decision at all, and when they do make a decision it's usually bad.
She is, of course, summarizing Antonio Damasio's work in relation to the somatic marker hypothesis and decision coherence. From the linked article:
Somatic markers explain how goals can be efficiently prioritized by a cognitive system, without having to evaluate the propositional content of existing goals. After somatic markers are incorporated, what is compared by the deliberator is not the goal as such, but its emotional tag. [Emphasis added]
The biasing function of somatic markers explains how irrelevant information can be excluded from coherence considerations. With Damasio's thesis, choice activation can be seen as involving emotion at the most basic computational level. [Emphasis added]
...
This sketch shows how emotions help to prevent our decision calculations from becoming so complex and cumbersome that decisions would be impossible. Emotions function to reduce and limit our reasoning, and thereby make reasoning possible. [Emphasis added]
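To see why comparing tags beats evaluating content, here's a minimal sketch of the claim as I read it. The goals, tag values, and threshold below are all invented, and real deliberation is certainly not a four-line sort -- but the shape matches the quote: the deliberator prunes and prioritizes by a cached number, without ever re-opening what each goal actually means.

```python
# Illustrative only: deliberation over emotionally tagged goals.
# Each goal carries a cached "somatic marker" score; deliberation
# compares the scores, never the goals' propositional content.

goals = {
    "finish the report":   +0.6,   # invented tag values
    "answer that email":   +0.1,
    "reread old argument": -0.4,
    "check the stove":     +0.9,
}

THRESHOLD = 0.0   # markers at or below this are pruned before deliberation

def deliberate(goals):
    # Prune: negatively tagged goals never enter the comparison at all.
    live = {goal: tag for goal, tag in goals.items() if tag > THRESHOLD}
    # Prioritize: compare the tags, not the goal contents.
    return sorted(live, key=live.get, reverse=True)

print(deliberate(goals))
# -> ['check the stove', 'finish the report', 'answer that email']
```

The crucial design choice is what's *missing*: nothing ever inspects the goal strings themselves. That's the computational shortcut the somatic marker hypothesis is pointing at.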
Now, we can get into all sorts of arguments about what constitutes "emotion", exactly. I personally like the term "somatic marker", though, because it ties in nicely with concepts such as facial micro-expressions and gestural accessing cues. It also emphasizes the fact that an emotion doesn't actually need to be conscious or persistent in order to act as a decision influencer and a source of bias.
But I didn't find out about somatic markers or emotional decisions because I was trying to find out more about logic or rationalism. I was studying akrasia[1], and writing about it on my blog.
That is, I was trying to find out why I didn't always do what I "decided to do"... and what I could do to fix that.
And in the process, I discovered what somatic markers have to do with akrasia, and with motivated reasoning... long before I read any of the theories about the underlying machinery. (After all, until I knew what they did, I didn't know which papers would've been relevant. And in any case, I was looking for practice, not theory.)
Now, in future posts in this series, I'll tie somatic markers, affective synchrony, and Robin Hanson's "near/far" hypothesis together into something I call the "Akrasian Orchestra"... a fairly ambitious explanation of why/how we "don't do what we decide to", and for that matter, don't even think the way we decide to.
But for this post, I just want to start by introducing the idea of somatic markers in decision-making, and give a little preview of what that means for rationality.
Somatic markers are effectively a kind of cached thought. They are, in essence, the "tiny XML tags of the mind" that label things "good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.)
And it's important to understand that you cannot escape this labeling, even if you wanted to. (After all, the only reason you're able to want to, is because this labeling system exists!)
And it's not only strong emotions that do this: weak or momentary emotional responses will do just fine for tagging purposes. Even a momentary pairing of positive or negative words with nonsense syllables can carry over into the perceived taste of otherwise-identical sodas branded with made-up names using those syllables!
As you can see, this idea ties in rather nicely with things like priming and the IAT (Implicit Association Test): your brain is always, always, always tagging things for later retrieval.
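If it helps to picture that always-on tagging as a mechanism, here's a deliberately crude sketch. The nonsense syllables, learning rate, and taste numbers are all made up, and real affective conditioning is certainly not a one-line update rule -- the point is only that each brief pairing nudges a cached tag, which later biases perception:

```python
# Crude sketch of always-on affective tagging: every pairing of a
# stimulus with a valenced experience nudges its cached tag, and the
# tag later biases how an otherwise-identical item is perceived.

from collections import defaultdict

tags = defaultdict(float)     # cached affect per stimulus, starts neutral
LEARNING_RATE = 0.3           # invented value

def pair(stimulus, valence):
    """A brief pairing with a positive (+1) or negative (-1) word."""
    tags[stimulus] += LEARNING_RATE * (valence - tags[stimulus])

def perceived_taste(brand, objective_taste=5.0):
    """Identical sodas, but the cached tag shifts the experience."""
    return objective_taste + tags[brand]

# A few momentary pairings with positive vs. negative words:
for _ in range(3):
    pair("VOOZ", +1)   # nonsense syllable paired with "delightful"
    pair("KREB", -1)   # nonsense syllable paired with "rotten"

print(perceived_taste("VOOZ"))   # ~5.66: tastes better
print(perceived_taste("KREB"))   # ~4.34: tastes worse
```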
Not only that, but it's also frequently replaying these tags -- in somatic, body movement form -- as you think about things.
For example, let's say that you're working on an equation or a computer program... and you get that feeling that something's not quite right.
As I wrote the preceding sentence, my face twisted into a slight frown, my brow wrinkling slightly as well -- my somatic marker for that feeling of "not quite right-ness". And, if you actually recall a situation like that for yourself, you may feel it too.
Now, some people would claim that this marker isn't "really" an emotion: that they just "logically" or "rationally" decided that something wasn't right with the equation or program or spaceship or whatever.
But if we were to put those same people on a brain scanner and a polygraph, and observe what happens to their brain and body as they "logically" think through various possibilities, we would see somatic markers flying everywhere, as hypotheses are being considered and discarded.
It's simply that, while your conscious attention is focused on your logic, you have little interest in attending directly to the emotions that are guiding you. When you get the "information scent" of a good or a bad hypothesis, you simply direct your attention to either following the hypothesis, or discarding it and finding a replacement.
Then, when you stop reasoning, and experience the frustration or elation of your results (or lack thereof), you finally have attention to spare for the emotion itself... leading to the common illusion that emotion and reasoning don't mix. (When what actually doesn't mix, at least without practice, is reasoning and paying conscious attention to your emotions/somatic markers at the same time.)
Now, some somatic markers are shared by all humans, such as the universal facial expressions, or the salivation and mouth-pursing that happens when you recall (or imagine) eating something sour. Others may be more individual.
Some markers persist for longer periods than others -- that "not quite right" feeling might just flicker for a moment while you're recalling a situation, but persist until you find an answer, when it's a response to the actual situation.
But it's not even necessary for a somatic marker to be expressed, in order for it to influence your thinking, since emotional associations and speed of recall are tightly linked. In effect, recall is prioritized by emotional affect... meaning that your memories are sorted by what makes you feel better.
(Or what makes you feel less bad ... which is not the same thing, as we'll see later in this series!)
What this means is that all reasoning is in some sense "motivated", but it's not always consciously motivated, because your memories are pre-sorted for retrieval in an emotionally biased fashion.
In other words, the search engine of your mind...
Returns paid results first.
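Or, to make the analogy literal: imagine retrieval ranked with affect as the primary sort key and relevance only breaking ties. The memories and scores below are invented for illustration, but this is exactly the bias the analogy points at:

```python
# The "paid results first" analogy, made literal. Retrieval ranks
# memories by cached emotional affect before relevance, so the most
# comforting match surfaces first -- not the most relevant one.

memories = [
    # (memory,                            relevance, affect)
    ("that time the plan worked",         0.4,       +0.9),
    ("the failure mode I keep hitting",   0.9,       -0.8),
    ("a flattering near-miss",            0.6,       +0.5),
]

def recall(memories):
    # Affect is the primary sort key; relevance only breaks ties.
    return sorted(memories, key=lambda m: (m[2], m[1]), reverse=True)

for memory, relevance, affect in recall(memories):
    print(memory)
# -> "that time the plan worked" comes back first, even though
#    "the failure mode I keep hitting" is the most relevant.
```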
This means that, strictly speaking, you don't know your own motivations for thinking or acting as you do, unless you explicitly perform the necessary steps to examine them in the moment. Even if you previously believed yourself to have worked out those motivations, you cannot strictly know that your analysis still stands, since priming and other forms of conditioning can change those motivations on the fly.
This is the real reason it's important to make beliefs pay rent, and to ground your thinking as much as possible in "near" hypotheses: keeping your reasoning tied closely to physical reality represents the only possible "independent fact check" on your biased "search engine".
Okay, that's enough of the "emotional decisions are bad and scary" frame. Let's take the opposite side now:
Without emotions, we couldn't reason at all.
Spock's dirty little secret is that logic doesn't go anywhere, without emotion. Without emotion, you have no way to narrow down the field of "all possible hypotheses" to "potentially useful hypotheses" or "likely to be true" hypotheses...
Nor would you have any reason to do so in the first place!
Because the hidden meaning of the word "reason", is that it doesn't just mean logical, sensible, or rational...
It also means "purpose".
And you can't have a purpose, without an emotion.
If Spock didn't make me feel something good, I might never have studied logic. If stupid people hadn't made me feel something bad, I might never have looked up to Spock for being smart. If procrastination hadn't made me feel bad, I never would've studied it. If writing and finding answers to provocative questions didn't make me feel good, I never would've written as much as I have.
The truth is, we can't do anything -- be it good or bad -- without some emotion playing a key part.
And that fact itself, is neither good nor bad: it's just a fact.
And as Spock himself might say, it's "highly illogical" to worry about it.
No matter what your somatic markers might be telling you.
Footnotes:
[1] I actually didn't know I was studying "akrasia"... in fact, I'd never even heard the term until I saw it in a thread on LessWrong discussing my work. As far as I was concerned, I was working on "procrastination", or "willpower", or maybe even "self-help" or "productivity". But akrasia is a nice catch-all term, so I'll use it here.