The ideas are interesting, but I'm finding the use of italics and especially bold font somewhat distracting -- I feel like I'm being harangued. (Not as bad as all caps, but still.)
Strongly concur. I have the same reaction to pjeby's blog. I don't think it's only because of the bold; it's the writing style too, which consistently seems to me to be saying "I understand all this stuff, and you are stupid. So stupid I have to write in short sentences. And sentence fragments. Because otherwise ... you won't get it." And I find it off-putting. Very off-putting.
Which is a pity, because ...
... pjeby has some interesting things to say.
Very interesting post. If you can do even a fraction of what you say you will, it'll be a spectacular contribution. I already have your blog on my list of things I need to get around to reading, and it just moved up a few places on that list.
You're moving pretty quickly, though, and I have trouble following you in some areas. Maybe in the future, break large essays like this into a few blog posts, one for each sub-point.
"good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.)
I saw this, and felt a strong urge to walk to work where my laptop is and correct it.
Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way. When I see "rational...
Even Spock's most famous gesture, that single raised eyebrow of his, is an expression of puzzlement or condescension. Of course, he always claimed to have his emotions in check rather than wiped out.
I like the idea of Kai as a properly emotionless rationalist. A robot. My friend just called his newborn "Kai", but he's never seen Lexx.
I often figure that if you take emotion away from people you get Abulia rather than rationality anyway.
David Hume summed it up well: "Reason is, and ought only to be, the slave of the passions."
Eliezer tells us that "Rationalists should WIN". But you can easily substitute 'achieve whatever makes them happy' for 'win', once again reinforcing the importance of emotions. Our passions are ultimately what drive us; rationality is just taking the best available information into account when trying to achieve them.
If you are interested in akrasia, you must read George Ainslie's "Breakdown of Will", which gives an economic account of akrasia based on the strong empirical evidence for hyperbolic discounting and the idea of intertemporal bargaining. See picoeconomics.com
There are some good points in this post. However, you have constructed an unwieldy overloading of the word "emotion", forging it into the phlogiston of your theory. Taboo "emotion". When you describe the quite real operations performed by the human mind, consisting of assigning properties to things and priming to see some properties more easily or at all, you bless this description with the action of emotion-substance for some reason.
...Somatic markers are effectively a kind of cached thought. They are, in essence, the "tiny XML tags of the mind"...
I agree with the other posts. I had a distinctly negative somatic marker when I read the word 'emotion', and this discomfort made it impossible for me to carefully read the rest of the post. If I were required to (say, for work), then I would have to wait until the negative response attenuated -- usually it takes half a day or so to willfully erase a somatic marker.
I'm not familiar with the psychological literature on emotions, but it's a little counter-intuitive (I think my brain is tagging it as annoying) to use the word emotions to describe all of these different tags. Maybe the process of tagging something "morally obligatory" is indistinguishable from tagging something "happy" on an fMRI, but in common parlance and, I think, phenomenologically, the two are different. Different enough to justify using a word other than emotion (which traditionally refers to a much smaller set of experiences). It i...
Damasio's view of the brain is very interesting stuff. His book Descartes' Error is a fairly easy introduction to it.
This is my view of why the brain and reasoning work off usefulness and emotion.
Consider the gene's-eye view of the brain. You want to control what a very complex and changeable system does so that it will propagate more of you, so you find a way to hook what and how that system behaves into signals from the body, such as hunger, discomfort, and desire. Because you can directly control those signals, you can get it to do what you want. The genes do...
So: deep blue has emotions?!?
It seems like a definitional debate over what the term "emotion" means - without actually offering any definitions.
While I agree with the gist, I'm looking forward to a more detailed vision of emotions. This current post gives the false impression that emotions are neatly symmetrical and one-dimensional (good-bad). In reality there are multiple dimensions to emotions (desirable-undesirable, pleasurable-displeasing), and they're not clearly symmetrical. If fear is the symmetrical counterpart of desire, then what is disgust?
Emotions are action triggers and regulators that existed way before cognition did. We might mistakenly believe that they help our cognition by sorting stimuli in go...
For the embodiment of pure rationality, why not simply a computer? Everyone knows one, we can all see that you put whatever you want in on one end depending on your goals and values, and it very rationally obeys those commands to the letter, without taking initiatives. Well, it used to be that way, at least.
I enjoyed reading this, pjeby. It answered and tied together a lot of the things I'd wondered since I started reading about artificial intelligence. I won't spell out the relationship between your post and the issues (this will be long anyway...), but I'll list the connections I saw and what it brought to mind:
-How evolution's "shards of desire" translate into actions.
-What it would mean to, as evolutionary psychology attempts to do, "explain emotions by the behaviors they induce". And similarly, what feelings an animal would have t...
You mention that this article is part of a series. Are you still planning to write those other articles on LW?
Read Diane Duane's "Spock's World". It goes to great lengths to correct the error you're making.
Among other things, it suggests that the word usually translated as "suppression of emotion" actually means something closer to "passion's mastery", and that the Vulcan ethos is to recognize and compensate for emotions instead of, as many seem to believe, denying them.
Also, as awesome as Kai is, Data is clearly a better example of a functioning rational being without emotions. Data isn't lacking in preferences, goals, and motivati...
Kai has no goals or cares of his own, frequently making such comments as "the dead do not want anything", and "the dead do not have opinions". He mostly does as he's asked, but for the most part, he just doesn't care about anything one way or another.
The way Kai is described certainly matches what an unemotional and goalless yet powerfully rational creature would be. Yet somehow, the authors manage to slip in a remarkable amount of goal direction and 'caring'. We just can't help but assume that amoral, inhuman creatures would take on human characteristics if we socialised them enough.
Related on OB: Priming and Contamination
Related on LW: When Truth Isn't Enough
When I was a kid, I wanted to be like Mr. Spock on Star Trek. He was smart, he could kick ass, and he usually saved the day while Kirk was too busy pontificating or womanizing.
And since Spock loved logic, I tried to learn something about it myself. But by the time I was 13 or 14 and had grasped the basics of Boolean algebra (from borrowed computer science textbooks) and propositional logic (through a game of "Wff'n'Proof" I picked up at a garage sale), I began to get a little dissatisfied with it.
Spock had made it seem like logic was some sort of "formidable" thing, with which you could do all kinds of awesomeness. But real logic didn't seem to work the same way.
I mean, sure, it was neat that you could apply all these algebraic transforms and dissect things in interesting ways, but none of it seemed to go anywhere.
Logic didn't say, "thou shalt perform this sequence of transformations and thereby produce an Answer". Instead, it said something more like, "do whatever you want, as long as it's well-formed"... and left the very real question of what it was you wanted, as an exercise for the logician.
And it was at that point that I realized something that Spock hadn't mentioned (yet): that logic was only the beginning of wisdom, not the end.
Of course, I didn't phrase it exactly that way myself... but I did see that logic could only be used to check things... not to generate them. The ideas to be checked, still had to come from somewhere.
But where?
When I was 17, in college philosophy class, I learned another limitation of logic: or more precisely, of the brains with which we do logic.
Because, although I'd already learned to work with formalisms -- i.e., meaningless symbols -- working with actual syllogisms about Socrates and mortals and whatnot was actually a good bit harder.
We were supposed to determine the validity of the syllogisms, but sometimes an invalid syllogism had a true conclusion, while a valid syllogism might have a false one. And, until I learned to mentally substitute symbols like A and B for the included facts, I found my brain automatically jumping to the wrong conclusions about validity.
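The trick of substituting symbols for the facts can itself be mechanized. Here's a small truth-table checker (a hypothetical illustration in Python, not something from the original post): an argument form is valid exactly when no truth assignment makes all the premises true and the conclusion false, regardless of what the letters stand for.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument form is valid iff every truth assignment that
    makes all the premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: true premises, false conclusion
    return True

# Modus ponens (valid form): A -> B, A, therefore B
mp = is_valid(
    [lambda e: (not e["A"]) or e["B"],  # A -> B
     lambda e: e["A"]],                 # A
    lambda e: e["B"],                   # therefore B
    ["A", "B"],
)

# Affirming the consequent (invalid form): A -> B, B, therefore A
ac = is_valid(
    [lambda e: (not e["A"]) or e["B"],  # A -> B
     lambda e: e["B"]],                 # B
    lambda e: e["A"],                   # therefore A
    ["A", "B"],
)

print(mp, ac)  # True False
```

Modus ponens passes; affirming the consequent fails, even though a particular instance of it (with A and B both true) happens to have a true conclusion -- which is exactly the trap of judging validity by content instead of form.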
So "logic", then -- or rationality -- seemed to require three things to actually work:
But it wasn't until my late thirties and early forties -- just in the last couple of years -- that I realized a fourth piece, implicit in the first.
And Spock, ironically enough, is the reason I found it so difficult to grasp that last, vital piece:
That to generate possibly-useful ideas in the first place, you must have some notion of what "useful" is!
And that for humans at least, "useful" can only be defined emotionally.
Sure, Spock was supposed to be immune to emotion -- even though in retrospect, everything he does is clearly motivated by emotion, whether it's his obvious love for Kirk, or his desire to be accepted as a "real" rationalis... er, Vulcan. (In other words, he disdains emotion merely because that's what he's supposed to do, not because he doesn't actually have any.)
And although this is all still fictional evidence, one might compare Spock's version of "unemotional" with the character of the undead assassin Kai, from a different science-fiction series.
Kai, played by Michael McManus, shows us a slightly more accurate version of what true emotionlessness might be like: complete and utter apathy.
Kai has no goals or cares of his own, frequently making such comments as "the dead do not want anything", and "the dead do not have opinions". He mostly does as he's asked, but for the most part, he just doesn't care about anything one way or another.
(He'll sleep in his freezer or go on a killing spree, it's all the same to him, though he'll probably tell you the likely consequences of whatever action you see fit to request of him.)
And scientifically speaking, that's a lot closer to what you actually get, if you don't have any emotions.
Not a "formidable rationalist" and idealist, like Spock or Eliezer...
But an apathetic zombie, like Kai.
As Temple Grandin puts it (in her book, Animals In Translation):
She is, of course, summarizing Antonio Damasio's work in relation to the somatic marker hypothesis and decision coherence. From the linked article:
Now, we can get into all sorts of arguments about what constitutes "emotion", exactly. I personally like the term "somatic marker", though, because it ties in nicely with concepts such as facial micro-expressions and gestural accessing cues. It also emphasizes the fact that an emotion doesn't actually need to be conscious or persistent in order to act as a decision influencer and a source of bias.
But I didn't find out about somatic markers or emotional decisions because I was trying to find out more about logic or rationalism. I was studying akrasia¹, and writing about it on my blog.
That is, I was trying to find out why I didn't always do what I "decided to do"... and what I could do to fix that.
And in the process, I discovered what somatic markers have to do with akrasia, and with motivated reasoning... long before I read any of the theories about the underlying machinery. (After all, until I knew what they did, I didn't know which papers would've been relevant. And in any case, I was looking for practice, not theory.)
Now, in future posts in this series, I'll tie somatic markers, affective synchrony, and Robin Hanson's "near/far" hypothesis together into something I call the "Akrasian Orchestra"... a fairly ambitious explanation of why/how we "don't do what we decide to", and for that matter, don't even think the way we decide to.
But for this post, I just want to start by introducing the idea of somatic markers in decision-making, and give a little preview of what that means for rationality.
Somatic markers are effectively a kind of cached thought. They are, in essence, the "tiny XML tags of the mind", that label things "good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.)
And it's important to understand that you cannot escape this labeling, even if you wanted to. (After all, the only reason you're able to want to, is because this labeling system exists!)
See, it's not only strong emotions that do this: weak or momentary emotional responses will do just fine for tagging purposes. Even momentary pairing of positive or negative words with nonsense syllables can carry over into the perception of the taste of otherwise-identical sodas, branded with made-up names using those nonsense syllables!
As you can see, this idea ties in rather nicely with things like priming and the IAT: your brain is always, always, always tagging things for later retrieval.
Not only that, but it's also frequently replaying these tags -- in somatic, body movement form -- as you think about things.
For example, let's say that you're working on an equation or a computer program... and you get that feeling that something's not quite right.
As I wrote the preceding sentence, my face twisted into a slight frown, my brow wrinkling slightly as well -- my somatic marker for that feeling of "not quite right-ness". And, if you actually recall a situation like that for yourself, you may feel it too.
Now, some people would claim that this marker isn't "really" an emotion: that they just "logically" or "rationally" decided that something wasn't right with the equation or program or spaceship or whatever.
But if we were to put those same people on a brain scanner and a polygraph, and observe what happens to their brain and body as they "logically" think through various possibilities, we would see somatic markers flying everywhere, as hypotheses are being considered and discarded.
It's simply that, while your conscious attention is focused on your logic, you have little interest in attending directly to the emotions that are guiding you. When you get the "information scent" of a good or a bad hypothesis, you simply direct your attention to either following the hypothesis, or discarding it and finding a replacement.
Then, when you stop reasoning, and experience the frustration or elation of your results (or lack thereof), you finally have attention to spare for the emotion itself... leading to the common illusion that emotion and reasoning don't mix. (When what actually doesn't mix, at least without practice, is reasoning and paying conscious attention to your emotions/somatic markers at the same time.)
Now, some somatic markers are shared by all humans, such as the universal facial expressions, or the salivation and mouth-pursing that happens when you recall (or imagine) eating something sour. Others may be more individual.
Some markers persist for longer periods than others -- that "not quite right" feeling might just flicker for a moment while you're recalling a situation, but persist until you find an answer, when it's a response to the actual situation.
But it's not even necessary for a somatic marker to be expressed, in order for it to influence your thinking, since emotional associations and speed of recall are tightly linked. In effect, recall is prioritized by emotional affect... meaning that your memories are sorted by what makes you feel better.
(Or what makes you feel less bad ... which is not the same thing, as we'll see later in this series!)
What this means is that all reasoning is in some sense "motivated", but it's not always consciously motivated, because your memories are pre-sorted for retrieval in an emotionally biased fashion.
In other words, the search engine of your mind...
Returns paid results first.
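To make the metaphor concrete, here's a toy sketch (my illustration, not the author's model; the memory names, scores, and weight are all made up): retrieval ranked by relevance plus an "affect bonus", so that feel-good memories outrank more relevant but painful ones.

```python
# Toy sketch of affect-biased recall (hypothetical names and weights):
# memories are ranked by relevance PLUS a bonus for how good they feel,
# so the "search engine" surfaces flattering results first.

def recall(memories, relevance, affect, affect_weight=0.5):
    """Rank memories by relevance plus weighted emotional affect."""
    return sorted(
        memories,
        key=lambda m: relevance[m] + affect_weight * affect[m],
        reverse=True,
    )

affect = {                          # +1 feels good, -1 feels bad
    "time I won the debate": +1.0,
    "time I was wrong in public": -1.0,
    "neutral fact I read once": 0.0,
}
relevance = {                       # actual relevance to the query
    "time I won the debate": 0.4,
    "time I was wrong in public": 0.7,
    "neutral fact I read once": 0.5,
}

print(recall(list(affect), relevance, affect))
# -> the flattering memory ranks first; the most relevant (but
#    painful) memory sinks to the bottom
```

With the affect weight at 0.5, the most relevant memory (the embarrassing one) sorts last; set the weight to zero and the ranking becomes purely relevance-driven -- the "independent fact check" the post talks about next.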
This means that, strictly speaking, you don't know your own motivations for thinking or acting as you do, unless you explicitly perform the necessary steps to examine them in the moment. Even if you previously believed yourself to have worked out those motivations, you cannot strictly know that your analysis still stands, since priming and other forms of conditioning can change those motivations on the fly.
This is the real reason it's important to make beliefs pay rent, and to ground your thinking as much as possible in "near" hypotheses: keeping your reasoning tied closely to physical reality represents the only possible "independent fact check" on your biased "search engine".
Okay, that's enough of the "emotional decisions are bad and scary" frame. Let's take the opposite side now:
Without emotions, we couldn't reason at all.
Spock's dirty little secret is that logic doesn't go anywhere, without emotion. Without emotion, you have no way to narrow down the field of "all possible hypotheses" to "potentially useful hypotheses" or "likely to be true" hypotheses...
Nor would you have any reason to do so in the first place!
Because the hidden meaning of the word "reason", is that it doesn't just mean logical, sensible, or rational...
It also means "purpose".
And you can't have a purpose, without an emotion.
If Spock didn't make me feel something good, I might never have studied logic. If stupid people hadn't made me feel something bad, I might never have looked up to Spock for being smart. If procrastination hadn't made me feel bad, I never would've studied it. If writing and finding answers to provocative questions didn't make me feel good, I never would've written as much as I have.
The truth is, we can't do anything -- be it good or bad -- without some emotion playing a key part.
And that fact itself, is neither good nor bad: it's just a fact.
And as Spock himself might say, it's "highly illogical" to worry about it.
No matter what your somatic markers might be telling you.
Footnotes:
1. I actually didn't know I was studying "akrasia"... in fact, I'd never even heard the term until I saw it in a thread on LessWrong discussing my work. As far as I was concerned, I was working on "procrastination", or "willpower", or maybe even "self-help" or "productivity". But akrasia is a nice catch-all term, so I'll use it here.