Julia Galef's Skepticon IV talk, The Straw Vulcan, is the best intro-to-rationality talk for the general public I've ever seen. Share the link with everyone you know!

 

Update: Below is the transcript prepared by daenerys:

Emcee: You may recognize our next speaker from somewhere-- She was here earlier on our panel, she’s known for her work with the New York City Skeptics and their podcast “Rationally Speaking”, and she’s also the co-author of the rationality blog “Measure of Doubt” with her brother Jesse: Julia Galef!

[applause]

Julia: Hey, it’s really nice to be back. I’m so excited to be giving a talk at Skepticon. Last year was my first year and I got to moderate a panel, so this is an exciting new step for me. My talk today has been sort of organically growing over the last couple of years as I’ve become more and more involved in the skeptic and rationality movements, and I’ve gotten more and more practice, and learned many lessons the hard way about communicating ideas about rationality and critical thinking and skepticism to people.

The title of the talk is “The Straw Vulcan: Hollywood’s Illogical View of Logical Decision-Making”. So if there’s anyone in the audience who doesn’t recognize this face, this is Mr. Spock…Someone’s raising their hands, but it’s a Vulcan salute, so I don’t believe you don’t know him…So this is Mr. Spock; He’s one of the main characters on Star Trek and he’s the First Officer and the Science Officer on the Starship Enterprise. And his mother is human, but his father is Vulcan. 

The Vulcans are this race of aliens that are known for trying to live in strict adherence to the rules of reason and logic, and also for eschewing emotion. This is something I wasn’t clear on when I was remembering the show from my childhood, but it’s not that the Vulcans don’t have emotion, it’s just that over time they’ve developed very strict and successful ways of transcending and suppressing their emotions. So Spock, being half-Vulcan, has more lapses than a pure-blood Vulcan, but still, on the show Star Trek he is “The Logical Character” and that makes up a lot of the inter-character dynamics and the storylines on the show.

[2:30]

So, here’s Spock. Here are the Vulcans. And I asked this question: “Vulcans: Rational Aliens?” with a question mark because the brand of rationality that’s practiced by Spock and his fellow Vulcans isn’t actually rationality. And that’s what my talk is going to be about today.

This term, “Straw Vulcan”, I wish I could personally take credit for it, but I borrowed it from a website called TvTropes [audience cheers], Yes! TvTropes! Some of the highest levels of rationality that I can find on the internet, let alone on any other pop culture or television blog. I highly recommend you check it out.

So they coined the term “Straw Vulcan” to refer to the type of fictional character who is supposed to be “The Logical One” or “The Rational One”, but his brand of rationality is not real rationality. It’s sort of this caricature; this weak, gimpy caricature of rationality that…Well, essentially, you would think that if someone were super-rational that they’d be running circles around all the other characters, in the TV show or in the movie.

But it’s this sort of “fake” rationality, designed to demonstrate that the real success, the real route to glory and happiness and fulfillment, is all of these things that people consider to make us essentially human, like our passion, and our emotion, and our intuition, and yes, our irrationality. And since that’s the point of the character, his brand of rationality is sort of this woeful caricature, and that’s why it’s called “A Straw Vulcan”.

Because if you’re arguing against some viewpoint that you disagree with, and you caricature that viewpoint in as simplistic and exaggerated a way as possible, to make it easy for yourself to just knock it down and pretend that you’ve knocked that entire viewpoint down, that’s a “Straw Man”…So these are “Straw Vulcans”.

As I was saying, Spock and his fellow Straw Vulcans play this role in their respective TV shows and movies, of seeming like the character that should be able to save the day, but in practice the day normally gets saved by someone like this. [Kirk slide][laughter]

Yup! “I’m sorry I can’t hear you over the sound of how awesome I am” 

So my talk today is going to be about Straw Vulcan rationality and how it diverges from actual rationality. And I think this is an important subject because…It’s possible that many of you in the audience have some misconceptions about rationality that have been shaped by these Straw Vulcan characters that are so prevalent. And even if you haven’t it’s really useful to understand the concepts that are in people’s minds when you talk to them about rationality.

Because as I’ve learned the hard way again and again; Even if it’s so clear in your mind that rationality can make your life better, and can make the world better, if people are thinking of Straw Vulcan rationality you’re never going to have any impact on them. So it’s really useful to understand the differences between what you’re thinking of and what many other people are thinking of when they talk about rationality. 

First what I’m going to do is define what I mean by “rationality”. This is actual rationality. I’m just defining this here because I’m going to refer back to it throughout my talk, and I want you to know what I’m talking about. 

There are two concepts that we use rationality to refer to.  One of them is sometimes called “epistemic rationality”, and it’s the method of obtaining an accurate view of reality, essentially. So the method of reasoning, and collecting evidence about the world, and updating your beliefs so as to make them as true as possible, hewing as closely to what’s actually out there as possible.

The other sense of the word rationality that we use is “instrumental rationality”. This is the method of achieving your goals, whatever they are. They could be selfish goals; They could be altruistic goals. Whatever you care about and want to achieve, instrumental rationality is defined as the method most likely to help you achieve them.

And obviously they’re related. It helps to have an accurate view of reality if you want to achieve your goals, with very few exceptions… But I’m not going to talk about that right now, I just want to define the concepts for you.

This is the first principle of Straw Vulcan Rationality: Being rational means expecting other people to be rational too. This is the sort of thing that tends to trip up a Straw Vulcan, and I’m going to give you an example:

In this scene that’s about to take place, the starship’s shuttle has just crash-landed on this potentially hostile alien planet. Mr. Spock is in charge and he’s come up with this, in his mind, very rational plan that is going to help them escape the wrath of the potentially aggressive aliens: they’re going to display their superior force, and the aliens are going to see that, and they’re going to think rationally: “Oh, they have more force than we do, so it would be against our best interests to fight back, and therefore we won’t.” And this is what Spock does, and it goes awry, because the aliens are angered by the display of aggression and they strike back.

This scene is taking place between Spock and McCoy, who’s like Spock’s foil. He’s the very emotional, passion and intuition-driven doctor on the ship.

[7:45]

[video playing]

McCoy: Well, Mr. Spock, they didn’t stay frightened very long, did they? 

Spock: Most illogical reaction. When we demonstrated our superior weapons, they should have fled.

McCoy: You mean they should have respected us?

Spock: Of course!

McCoy: Mr. Spock, respect is a rational process. Did it ever occur to you that they might react emotionally, with anger?

Spock: Doctor, I’m not responsible for their unpredictability.

McCoy: They were perfectly predictable. To anyone with feeling. You might as well admit it, Mr. Spock. Your precious logic brought them down on us!

[end video]

[8:45]

Julia: So you see what happens when you try to be logical…People die!

Except of course, it’s exactly the opposite. This is irrationality, not rationality. Rationality is about having as accurate a view of the world as possible and also about achieving your goals. And clearly Spock has persistent evidence, accumulated again and again over time, that other people are not actually perfectly rational, and he’s just willfully neglecting that evidence; the exact opposite of epistemic rationality. Of course it also leads to the opposite of instrumental rationality too, because if people consistently behave the opposite of how you expect them to, you can’t possibly make decisions that are going to achieve your goals.

So this concept of rationality, or this particular tenet of Straw Vulcan Rationality can be found outside of Star Trek as well. I was sort of surprised by the prevalence of that, but I’ll give you an example: This was an article earlier this year in InfoWorld and basically the article is making the argument that one of the big problems with Google, and Microsoft, and Facebook, is that the engineers there don’t really understand that their customers don’t have the same worldview and values and preferences that they do. 

For example, if you remember the debacle that was Google Buzz; it was a huge privacy disaster, because it signed you up automatically, and then as soon as you were signed up, all of your close personal contacts, like the friends that you emailed the most, suddenly got broadcast publicly to all your other friends. So the author of this article was arguing that, “Well, people at Google don’t care that much about privacy, and so it didn’t occur to them that other people in the world would actually care about privacy.”

And there’s nothing wrong with that argument. That’s a fine point to make. Except, he titled the article: “Google’s biggest problem is that it’s too rational”, which is exactly the same problem as the last example. That is they’re too Straw Vulcan Rational, which is irrational.

This is another example from a friend of mine who’s also a skeptic writer and author of several really good books. This is Dan Gardner; he wrote Future Babble, and he’s spoken at the Northeast Conference on Science and Skepticism, which I also helped organize and moderate last year, and he’s great! He’s really smart. But on his blog I found this article that he wrote in which he was criticizing an economist who was making the argument that the best way to fight crime would be to impose harsher penalties, because that would be a deterrent and that would reduce crime, because people respond to incentives.

And Dan said, “Well, that would make sense except that the empirical evidence shows that crime rates don’t respond nearly as much to deterrent incentives as we think they do, and so this economist is failing to update his model of how he thinks people should behave based on how the evidence suggests they actually do behave.” Which again is fine, except that his conclusion was “Don’t Be Too Rational About Crime Policy.” So it’s exactly the same kind of thinking.

It’s sort of a semantic point, in that he’s defining rationality in this weird way, although I’m not disagreeing with his actual argument. But it’s this kind of thinking about rationality that can be detrimental in the long run.

 

This is the second principle of Straw Vulcan rationality: Being rational means you should never make a decision until you have all the information. 

I’ll give you an example. So I couldn’t find a clip of this, unfortunately, but this scene takes place in an episode called “The Immunity Syndrome” in season 2, and basically people on the Starship Enterprise are falling ill mysteriously in droves, and there’s this weird high-pitched sound that they’re experiencing that’s making them nauseated and Kirk and Spock see this big black blob on their screen and they don’t know what it is… It turns out it’s a giant space amoeba…Of course!

But at this point early in the episode they don’t really know much about it and so Kirk turns to Spock for input, for advice, for his opinion on what he thinks this thing is and what they should do. And Spock’s response is: “I have no analysis due to insufficient information…The computers contain nothing on this phenomenon. It is beyond our experience, and the new information is not yet significant.”

It’s great to be loath, to be hesitant, to make a decision based on small amounts of evidence that aren’t yet significant, if you have a reasonable amount of time. But there are snap judgments that need to be made all the time, and you have to decide between paying the cost of all of the additional information that you want (and that cost could be in time, or in money, or in risk, if waiting is forcing you to incur more risk) or just acting based on what you have at the moment.

The rational approach, what a rationalist wants to do, is to maximize his…essentially to make sure he has the best possible expected outcome. The way to do that is not to always wait until you have all the information, but to weigh the cost of the information against how much you think you’re going to gain from having it.

We all know this intuitively in other areas of life; Like you don’t want the best sandwich you can get, you want the best sandwich relative to how much you have to pay for it. So you’d be willing to spend an extra dollar in order to make your sandwich a lot better, but if you had to spend $300 to make your sandwich slightly better, that wouldn’t be worth it. You wouldn’t actually be optimizing if you paid those $300 to make your sandwich slightly better.
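To make that cost-benefit point concrete, here is a minimal sketch in Python of the kind of calculation being described: compare the expected value of deciding now against the expected value of deciding after gathering more information, minus what that information costs you. All of the numbers (accuracy with and without research, the payoff, and the cost of the research) are invented purely for illustration.

```python
# A toy value-of-information calculation (all numbers are hypothetical).
# You can act now with limited information, or pay a cost (time, money, risk)
# to gather more information and act later.

def expected_value(p_correct: float, value_if_correct: float,
                   value_if_wrong: float, cost: float = 0.0) -> float:
    """Expected outcome of a decision, minus whatever it cost to reach it."""
    return p_correct * value_if_correct + (1 - p_correct) * value_if_wrong - cost

# Option A: decide now. With what you already know, you guess right 70% of the time.
act_now = expected_value(p_correct=0.70, value_if_correct=100, value_if_wrong=0)

# Option B: research first. You'd guess right 90% of the time, but the research costs 30.
research_first = expected_value(p_correct=0.90, value_if_correct=100,
                                value_if_wrong=0, cost=30)

print(f"Act now:        {act_now:.0f}")        # 70
print(f"Research first: {research_first:.0f}")  # 60

# Here the extra information isn't worth its cost: 20 points of expected gain
# for 30 points of cost. Change the numbers and the answer flips, which is
# exactly why "always wait for all the information" isn't the rational rule.
```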

And again, this phenomenon, this interpretation of rationality, I found outside of Star Trek as well. Gerd Gigerenzer is a very well respected psychologist, but this is him describing how a rational actor would find a wife:

“He would have to look at the probabilities of various consequences of marrying each of them—whether the woman would still talk to him after they’re married, whether she’d take care of their children, whatever is important to him—and the utilities of each of these…After many years of research he’d probably find out that his final choice had already married another person who didn’t do these computations, and actually just fell in love with her.”

So Gerd Gigerenzer is a big critic of the idea of rational decision making, but as far as I can tell, one of the reasons he’s a critic is because this is how he defines rational decision making. Clearly this isn’t actual optimal decision making. Clearly someone who’s actually interested in maximizing their eventual outcome would take into account the fact that doing years of research would limit the number of women who would still be available and actually interested in dating you after all of that research was said and done.

 

This is Straw Vulcan Rationality Principle number 3: Being rational means never relying on intuition.

Here’s an example. This is Captain Kirk, this is in the original series, and he and Spock are playing a game of three-dimensional chess.

[16:00]

[video starts, but there’s no sound]

Julia as Kirk: Checkmate! (He said)

Julia as Spock: Your illogical approach to chess does have its advantages on occasion, Captain.

[end video]

[laughter and applause]

Julia: Um, let me just check my sound—Maximize my long-term expected outcome in this presentation by incurring a short-term cost. Well, we’ll hope that doesn’t happen again.

Anyway, so clearly an approach that causes you to win at chess cannot by any sensible definition be called an illogical way of playing chess. But from the perspective of Straw Vulcan Rationality it can, because anything intuition-based is illogical in Straw Vulcan rationality. 

Essentially there are two systems that people use to make decisions. They’re rather boringly called System 1 and System 2, but they’re more colloquially known as the intuitive system of reasoning and the deliberative system of reasoning. 

The intuitive system of reasoning is an older system; it allows us to make automatic judgments, to make judgments using shortcuts which are sometimes known as heuristics. They’re sort of useful rules of thumb for what’s going to work, that don’t always work. But they’re good enough most of the time. They don’t require a lot of cognitive processing ability or memory or time or attention.

And then System 2, the deliberative system of reasoning, is much more recently evolved. It takes a lot more cognitive resources, a lot more attention, but it allows us to do more abstract critical thinking. It allows us to construct models of what might happen when it’s something that hasn’t happened before, whereas say a System 1 approach would decide what to do based on how things happened in the past.

System 2 is much more useful when you can’t actually safely rely on precedent and you have to actually think: “What are the possible future scenarios, and what would likely happen if I behaved in a certain way in each of those scenarios?” That’s System 2.

 System 1 is more prone to bias. Eliezer Yudkowsky gave a great talk earlier this morning about some of the biases that we can fall prey to, especially when we’re engaging in System 1 reasoning. But that doesn’t mean that it’s always the wrong system to use. 

I’ll give you a couple examples of System 1 reasoning before I go any farther. So there’s a problem that logic teachers sometimes give to their students. It’s a very simple problem; They say a bat and a ball together add up to $1.10. The bat costs a dollar more than the ball. How much does the ball cost?

The intuitive System 1 answer to that question is 10 cents, because you look at $1.10, and you look at $1, and you take away the dollar and you get 10 cents. But if the ball were actually 10 cents and the bat were actually a dollar, then the bat would not cost a dollar more than the ball. So essentially that’s the kind of answer you get when you’re not really thinking about the problem, you’re just feeling around for…“Well, what do problems like this generally involve? Well, you generally take one thing away from another thing, so I dunno, do that.”
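Working it through deliberately: if the ball costs x, the bat costs x + $1.00, so 2x + $1.00 = $1.10 and the ball costs five cents, not ten. Here is a tiny sketch that just checks both answers against the stated constraints:

```python
# Bat-and-ball check: bat + ball = $1.10, and the bat costs $1.00 more than the ball.

def satisfies(ball: float) -> bool:
    bat = ball + 1.00                        # the bat costs a dollar more than the ball
    return abs((bat + ball) - 1.10) < 1e-9   # together they must come to $1.10

print(satisfies(0.10))  # False: the intuitive answer gives a $1.10 bat, $1.20 total
print(satisfies(0.05))  # True:  a 5-cent ball and a $1.05 bat add up to $1.10
```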

In fact, when this problem was given to a class at Princeton, 50% of them got the wrong answer. It just shows how quickly we reach for our System 1 answer and how rarely we feel the need to actually go back and check it in a deliberative fashion.

Another example of System 1 reasoning that I really like is…you may have heard of this classic social psychology experiment in which researchers sent someone to wait in line at a copy machine, and they asked the person ahead of them, “Excuse me, do you mind if I cut in line?” And maybe about 50% or 40% of them agreed to let the experimenters’ plant cut ahead of them.

But then the experimenters redid the study, and this time instead of saying “Can I cut in front of you?”, they said “Can I cut in front of you, because I need to make copies?” Then the agreement rate went up to like 99%. Something really high.

And there’s literally….Like of course they need to make copies! That’s the only reason they would have to cut in line to a copy machine. Except, because the request was phrased in terms of giving a reason, our System 1 reasoning kicks in and we go “Oh, they have a reason! So, sure! You have a reason.”

[21:00]

System 1 and System 2 have their pros and cons in different contexts. System 1 is especially good when you have a short time span, and a limited amount of resources and attention to devote to a problem. It’s also good when you know that you have experience and memory that’s relevant to the question, but it’s not that easily accessible; like you’ve had a lot of experiences of things like this problem, but our memories aren’t stored in this easy list where we can sort according to key words and find the mean of the number of items in our memory base. So you have information in there, and really the only way to access it sometimes is to rely on your intuition. It’s also helpful when there are important factors that go into a decision that are hard to quantify.

There are a number of recent studies which have been exploring when System 1 reasoning is successful, and it tends to be successful when people are making purchasing decisions or other decisions about their personal life. And there are a lot of factors involved; there are dozens of factors relevant to what car you buy that you could consider, but a lot of what makes you happy with your purchase or your choice is just your personal liking of the car. And that’s not the sort of thing that’s easy to quantify.

When people try to think about using their System 2 reasoning they don’t really know how to quantify their liking of the car, and so when they rely on System 2 they often tend to just look at the mileage, and the cost, and all these other things, which are important but they don’t really get at that emotional preference about the car. So that kind of information can be helpfully drawn out by System 1 reasoning. 

Also if you’re an expert in a field, say chess for example, you can easily beat someone who’s using careful deliberative reasoning, just based on all of your experience; you’ve built up this incredible pattern-recognition ability with chess. So a chess master can just walk past a chess board, glance at it, and say “Oh, white’s going to checkmate black in three moves.” Or chess masters can play many different chess games at once and win them all. And obviously they don’t have the cognitive resources to devote to each game fully, but their automatic pattern-recognition system that they’ve built up over thousands and thousands of chess games works just well enough.

Intuition is less reliable in cases where the kinds of heuristics or the kinds of biases that Eliezer spoke about earlier are relevant, or when you have a good reason to believe that your intuition is based on something that isn’t relevant to the task at hand. So you might rely on your intuition to judge how likely it is that work in artificial intelligence will lead to some sort of global disaster. But you also have to think about the fact that your intuition in this case is probably shaped by fiction. There are a lot more stories about robot apocalypses and AI explosions that took over the world than there are stories about AI going in a nice, boring, pleasant way. So being able to recognize where your intuition comes from can help you decide when it’s a good guide in a particular context.

System 2 is better when you have more resources and more time. It’s also good, as I mentioned, in new and unprecedented situations, new and unprecedented decision-making contexts, where you can’t just rely on patterns of what’s worked in the past. So a problem like global warming, or a problem like other existential risks that face our world, our species, the potential of a nuclear war…We don’t really have precedent to draw on, so it’s hard to think that we can rely on our intuition to tell us what’s going to happen or what we should do. And System 2 tends to be worse when there are many, many factors to consider and we don’t have the cognitive ability to consider them all fairly.

But the main takeaway of the System 1/System 2 comparison is that both systems have their strengths and weaknesses. And rationality is about trying to find the truest path to an accurate picture of reality, and it’s about trying to find what actually maximizes your own happiness or whatever goal you have. So what you do is you don’t rely on one or the other blindly. You decide: Based on this context, which method is going to be the most likely one to get me to what I want? The truth, or whatever other goals I have. 

And I think that a lot of the times when you hear people say that it’s possible to be too rational, what they’re really talking about is that it’s possible to use System 2 deliberative reasoning in contexts where it’s inappropriate, or to use it poorly.

Here’s a real-life example: This is a headline article that came out earlier this year. If you can’t read it, it says “Is the Teen Brain Too Rational?” And the argument of the article (it was actually about a study) was that teenagers, when they’re deciding whether to take some risks, like doing drugs or driving above the speed limit, often do what is technically System 2 reasoning, so they’ll think about the pros and cons, and think about what the risks are likely to be.

But the reason they do it anyway is that they’re really bad at this System 2 reasoning. They weigh the risks and the benefits poorly, and that’s why they end up doing stupid things. So the conclusion I would draw from that is: teens are bad at System 2 reasoning. The conclusion the author drew from that is that teens are too rational.

Another example: I found this quote when I was Googling around for examples to use in this talk, and I found what I thought was a perfect quote illustrating this principle that I’m trying to describe to you:

“It is therefore equally unbalanced to be mostly “intuitive” (i.e. ignoring that one’s first impression can be wrong), or too rational (i.e. ignoring one’s hunches as surely misguided)”

Here I would say if you ignore your hunches blindly and assume they’re misguided, then you’re not being rational, you’re being irrational. And so I was happily copying down the quote, before having looked at the author. Then I checked to see who the author of the post was, and it’s the co-host of my podcast, “Rationally Speaking”, Massimo Pigliucci, whom I am very fond of, and am probably going to get in trouble with now. But I couldn’t pass up this perfect example, and that’s just how committed I am to teaching you guys about true rationality: I will brave the wrath of that Italian man there.

 

So Straw Vulcan Rationality Principle number four: Being rational means not having emotions. 

And this is something I want to focus on a lot, because I think the portrayal of rationality and emotions by Spock’s version, by the Straw Vulcan version of rationality, is definitely confused, is definitely wrong. But I think the truth is nuanced and complicated, so I want to draw this one out a little bit more.

But first, a clip

[video]

Julia: Oh! Spock thinks the captain’s dead.

Spock: Doctor, I shall be resigning my commission immediately of course, so I would appreciate your making the final arrangements.

McCoy: Spock, I…

Spock: Doctor, please. Let me finish. There can be no excuse for the crime of which I am guilty. I intend to offer no defense. Furthermore, I will order Mr. Scott to take immediate command of this vessel.

Kirk: (walking up from behind) Don’t you think you better check with me, first?

Spock: Captain! Jim! (Big smile, then regains control)….I’m…pleased…to see you again, Captain. You seem…uninjured.

[end video]

[29:15]

Julia: So he almost slipped up there, but he caught himself just in time. Hopefully none of the other Vulcans found out about it. 

This is essentially the Spock model of how emotions and rationality relate to each other: You have a goal, and use rationality, unencumbered by emotion, to figure out what action to take to achieve that goal. Then emotion can get in the way and screw up this process if you’re not really careful. This is the Spock model. And it’s not wrong per se. Emotions can clearly, and frequently do, screw up attempts at rational decision making. 

I’m sure you all have anecdotal examples just like I do, but to throw some out there; if you’re really angry it can be hard to recognize the clear truth that lashing out at the person you’re angry at is probably not going to be a good idea for you in the long run. Or if you’re in love it can be hard to recognize the ways in which you are completely incompatible with this other person and that you’re going to be really unhappy with this person in the long run if you stay with them. Or if you’re disgusted and irritated by hippies, it can be hard to objectively evaluate arguments that you associate with hippies, like say criticisms of capitalism. 

These are just a few examples. These are just anecdotal examples, but there’s plenty of experimental research out there that demonstrates that people’s rational decision making abilities suffer when they’re in states of heightened emotion. For example when people are anxious they over-estimate risks by a lot. When people are depressed, they under-estimate how much they are going to enjoy some future activity that’s proposed to them. 

And then there’s a series of really interesting studies by a couple of psychologists, Whitson and Galinsky, that demonstrate that when people are feeling threatened or vulnerable, or like they don’t have control, they tend to be much more superstitious; they perceive patterns where there are no patterns; they’re more likely to believe conspiracy theories; they’re more likely to see patterns in companies’ financial data that aren’t actually there; and they’re more likely to invest, to put their own money down, based on these non-existent patterns that they thought they saw.

So Spock is not actually wrong. The problem with this model is that it is just incomplete. And the reason it’s incomplete is that “Goal” box. Where does that “Goal” box come from? It’s not handed down to us from on high. It’s not sort of written into the fabric of the universe. The only real reason that you have goals is because you have emotions-- because you care about some outcomes of the world more than others; because you feel positively about some potential outcomes and negatively about other potential outcomes.

If you really didn't care about any potential state of the world more or less than any other potential state of the world, it wouldn't matter how skilled your reasoning abilities were, you'd never have reason to do anything. Essentially you’d just look like this... “Meh!” I mean even rationality for its own sake isn’t really coherent without some emotion, because if you want to do rationality, if you want to be rational, it’s because you care more about having the truth than you do about being ignorant.

 Emotions are clearly necessary for forming the goals, rationality is simply lame without them. But there’s also some interesting evidence that emotions are important for making the decisions themselves.

There’s a psychologist named Antonio Damasio who studies patients with brain damage to a certain part of their brain…the ventromedial prefrontal cortex…I can’t remember the exact name, but essentially it’s the part of the brain that’s crucial for reacting emotionally to one’s thoughts.

The patients who suffered from this injury were perfectly undamaged in other ways. They could perform just as well on tasks of visual perception, and language processing, and probabilistic reasoning, and all these other forms of deliberative reasoning and other senses. But their lives very quickly fell apart after this injury, because when they were making decisions they couldn’t actually simulate viscerally what the value was to them of the different options. So their jobs fell apart, their interpersonal relations fell apart, and also a lot of them became incredibly indecisive.

Damasio tells the story of one patient of his who, when he was leaving the doctor’s office, was given the choice of a pen or a wallet... some cheap little wallet, whichever you want… And the patient sat there for about twenty minutes trying to decide. Finally he picked the wallet, but when he went home he left a message on the doctor’s voicemail saying “I changed my mind. Can I come back tomorrow and take the pen instead of the wallet?”

And the problem is that the way we make decisions is we sort of query our brains to see how we feel about the different options, and if you can’t feel, then you just don’t know what to do. So it seems like there’s a strong case that emotions are essential for this ideal decision-making process, not just in forming your goals, but in actually weighing your different options in the context for a specific decision.

This is the first revision I would make to the model of Straw Vulcan decision-making. And this is sort of the standard model for ideal decision-making as say economics formulates it. You have your values. (Economics doesn’t particularly care what they are.) But the way economics formulates a rational actor is someone who acts in such a way as to maximize their chances of getting what they value, whatever that is.
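To make that formulation concrete, here is a minimal sketch in Python of the economist’s rational-actor picture being described: the actor’s values (the utilities) are simply taken as given, and “rational” just means picking the action with the highest expected utility. All of the actions, probabilities, and numbers are invented for illustration; nothing here comes from the talk itself.

```python
# A toy "rational actor" in the economist's sense: values are taken as given,
# and the actor simply picks the action whose outcomes it expects to like most.
# Every action, probability, and utility below is made up for illustration.

actions = {
    # action: list of (probability, utility by the actor's own lights) pairs
    "stay home":   [(1.0, 2.0)],
    "go to party": [(0.6, 9.0), (0.4, -3.0)],   # fun if it goes well, draining if not
    "work late":   [(0.9, 4.0), (0.1, 0.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for name, outcomes in actions.items():
    print(f"{name:12s} expected utility = {expected_utility(outcomes):.1f}")
print("choose:", best)   # -> choose: go to party (4.2 beats 3.6 and 2.0)
```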

And again, that’s a pretty good model. It’s not a bad simplification of what’s going on. But the thing about this model is that it takes your emotional desire as a given. It just says: “Given what you desire, what’s the best way to get it?” And we don’t have to take our desires as a given. In fact, I think this is where rationality comes back into the equation. We can actually use rationality to think about our instinctual emotional desires, and as a consequence of them, the things that we value: our goals. And think about what makes sense rationally. 

It’s a little bit of a controversial statement. Some psychologists and philosophers would say that emotions and desires can’t be rational or irrational; you just want what you want. And certainly they can’t be rational or irrational in quite the same way that beliefs can be. Some philosophers might argue about this, but I would say that you can’t be wrong about what you want.

But I think there’s still a strong case to be made for some emotions being irrational, and if you think back to the two definitions of rationality that I gave you earlier; There was epistemic rationality which was about making your beliefs about the world as true as possible, and there was instrumental rationality, which was about maximizing your chances of getting what you want, whatever that is. So I think it makes sense to talk about emotions as being epistemically irrational, if they’re implicitly, at their core, based on a false model of the world. 

And this happens all the time. For example, you might be angry at your husband for not asking how this presentation at work went. It was a really important presentation, and you can’t believe he didn’t ask you. And that anger is predicated on the assumption, whether conscious or not, that he should have known that was important. That he should have known that this was an important presentation to you. But if you actually take a step back and think about it, it could be that no, you never actually ever gave him any indication that this was important, and that you were worried about it. So then that would make that emotion irrational, because it’s based on a false model of reality.

Or, for example, you might feel guilty about something even though, when you consciously think about it, you would have to acknowledge that you did nothing to cause it and that there was nothing you could have done to prevent it. So I would be inclined to call that guilt also epistemically irrational.

Or, for example, people might feel depressed, and that depression is predicated on the assumption that there’s nothing they could do to better their situation. Sometimes that might be true, but a lot of the time it’s not. I would call that also an irrational emotion, because you may have some false beliefs about your capability of improving whatever the problem is. That’s epistemic irrationality.

Emotions can clearly be instrumentally irrational if they’re making you worse off. If something like jealousy, or spite, or rage, or envy is unpleasant to you, isn’t actually inspiring you to make any positive changes in your life, and is perhaps causing rifts with people you care about and making you less happy that way, then I’d say it’s pretty clearly preventing you from achieving your goals.

Emotions can be instrumentally and epistemically irrational. Using rationality is what helps us recognize that and shape our goals based not on what our automatic emotional desires are, but on what our rationality-filtered emotional desires are. I put several emotional desires here, because another role that rationality plays in this ideal decision process is recognizing when you have conflicting desires and weighing them against each other; deciding which is more important to you, whether one of them can be changed, etc., etc.

For example you might value being the kind of person who tells the truth, but you also might value being the kind of person that’s liked by people. So you have to have some kind of way of weighing those two desires against each other before you decide what your goal in a particular situation actually is. 

This would be my next update to the Straw Vulcan model of emotion and rationality. But you can actually improve it a little bit more too. You can change your emotions using rationality. This is not that easy and it can often take some time. But it’s definitely something that we know how to do, at least in limited ways. 

For example there’s a field of psychotherapy called cognitive therapy, sometimes combined with behavioral techniques and called cognitive behavioral therapy. Their motto, if you can call it that, is “Changing the way you think can change the way you feel.” They have all of these techniques and exercises you can do to get over depression and anger, or anxiety, and other instrumentally and often epistemically irrational emotions.

Here’s an example: This is a cognitive therapy worksheet. A “Thought Record” is one of the most common exercises that cognitive therapy has their patients do, and it’s common sense, essentially. It’s about writing down and noting your thoughts when your emotions start to run away with you, or run away with themselves. And then stopping, asking “What is the evidence that supports this thought that I have?”

I’m sorry, to back up…Noting the thoughts that are underlying the emotions that you’re feeling. So I was talking about these implicit assumptions about the world that your emotions are based on. It gets you to make those explicit, and then question whether you actually have good evidence for believing them. 

This sort of process, plus lots of other exercises that psychotherapists do with their patients and that people can even do at home by themselves from a book, is by far the most empirically validated and well-tested form of psychotherapy. In fact, some would say it’s the only one that’s really supported by evidence so far.

Even if you’re not doing an official campaign of cognitive therapy to change your emotions, and by way of your emotions, your desires and goals, there’s still plenty of informal ways to rationally change your emotions and make yourself better off. 

For example, in the short term you could recognize when you feel that first spark of anger, and decide whether or not you want to fuel that anger by dwelling on what that person has done in the past that angered you, and imagining what they were thinking about you at that moment. Or you could decide to try to dampen the flames of your burgeoning anger by instead thinking about times that you’ve screwed up in the past, or thinking about things that that person has done for you that were actually kind. So you actually have a lot more conscious control, if you choose to take it, over which direction your emotions push you in than I think a lot of people realize.

[41:00]

In the longer term, you can even change what you value. It’s hard, and it does tend to take a while, but let’s say that you wish you were a more compassionate person. And you have conflicting desires. One of your desires is to lie on the couch every night, and another one of your desires is to be the kind of person that other people will look up to. So you want to bring those conflicting desires into harmony with each other.

You can actually, to some extent, make yourself more compassionate over time if that’s what you want to do. You can choose to expose yourself to material, to images, and to descriptions of suffering refugees, and you can consciously decide to imagine that it’s you in that situation. Or that it’s your friends and family, and you can do the thought experiment of asking yourself what the difference is between these people suffering, and you or your friends and family suffering, and you can bring the emotions about by thinking of the situations rationally.

This is essentially my rough final model of the relationships between emotions and rationality in ideal decision-making, as distinguished from the Straw Vulcan model of the relationships between emotions and rationality.

Here’s Straw Vulcan Rationality Principle number 5: Being rational means valuing only quantifiable things—like money, efficiency, or productivity.

[video]

[43:45]

McCoy: There’s just one thing, Mr. Spock. You can’t tell me that when you first saw Jim alive, you weren’t on the verge of giving us an emotional scene that would have brought the house down.

Spock: Merely my quite logical relief that Starfleet had not lost a highly proficient captain.

Kirk: Yes. I understand.

Spock: Thank you, Captain.

McCoy: Of course, Mr. Spock. Your reaction was quite logical.

Spock: Thank you, Doctor. (starts walking away)

McCoy: In a pig’s eye.

Kirk: Come on, Spock. Let’s go mind the store.

[end video]

Julia: So it’s not acceptable, in Straw Vulcan Rationality world, to feel happiness because your best friend and Captain is actually alive instead of dead. But it is acceptable to feel relief, as Spock did say, I suppose, because a proficient worker in your Starfleet is alive, and can therefore do more proficient work. That kind of thing is rationally justifiable from the Straw Vulcan model of how rationality works.

Here’s another example: This is from an episode called “This Side of Paradise”, and in this episode they’re visiting a planet where there are these flowers that release spores, where if you inhale them, you suddenly get really emotional. This woman, who has a crush on Spock, makes sure to position him in front of one of the flowers when it opens and releases its spores. So all of a sudden, he’s actually romantic and emotional. This is Kirk and the crew trying to get in touch with him while he’s out frolicking in the meadow with his lady love. 

[45:30]

[video]

Kirk: (on communication device) Spock?

Spock: That one looks like a dragon. See the tail, and dorsal spines.

Lady: I’ve never seen a dragon.

Spock: I have. On Berengaria VII. But I’ve never stopped to look at clouds before, or rainbows. Do you know, I can tell you exactly why one appears in the sky? But considering its beauty has always been out of the question.

[end video]

Julia: So this model of rationality, in which the only things to value are quantifiable things that don’t have to do with love or joy or beauty… I’ve been trying to figure out where this came from. One of my theories is that it comes from the way economists talk about rationality, where a rational actor maximizes his expected monetary gain.

This is a convenient proxy, because in a lot of ways money can proxy for happiness, because whatever it is that you want that you think is going to make you happy you can often buy with money, or things that are making you unhappy you can often get rid of with money. It’s obviously not a perfect model, but it’s good enough in a way that economists use money as a proxy for utility or happiness sometimes, when they’re modeling how a rational actor should make decisions. 

But no economist in their right mind would tell you that money is inherently valuable or useful. You can’t do anything with it. It can only be useful, valuable, worth caring about for what it can do for you.

All of these things in the Straw Vulcan method of rationality, which they consider acceptable things to value, make no sense as values in and of themselves. It makes no sense to value productivity in and of itself if you are not allowed to get happy over someone you care about being alive instead of dead. It doesn’t make sense at all to care about productivity or efficiency. The only way that could possibly be useful to you is in getting you more outcomes like the one where your best friend is alive instead of dead. So if you don’t value that, then don’t bother.

This is one more example from real life, if you can consider an internet message board “real life”. I found a discussion where people were arguing whether or not it was possible to be too rational, and one of them said, “Well, sure it is!”, and one of them said, “Well, give me an example,” and he said “Well fine, I will.” 

His example was two guys driving in a car, and one of them says “Oh, well we need to get from here to there, so let’s take this road.” And the second guy says, “No, but that road has all this beautiful scenery, and it has this historical site which is really exciting, and it might have a UFO on it.” And the first guy says, “No, we have to take this first road because it is .2 miles shorter, and we will save .015 liters of gas.”

And that was this message board commenter’s model of how a rational actor would think about things. So I don’t actually know if that kind of thinking is what created Straw Vulcans in TV and the movies, or whether Straw Vulcans are what created people’s thinking about what rationality is, or it’s probably some combination of the two. But it’s definitely a widespread conception of what rationality consists of.

 I myself had a conversation with a friend of mine a couple years back when I was first starting to get excited about rationality, and read about it, and study it. She said, “Oh it’s interesting that you’re interested in this because I’m trying to be less rational.” 

It took me a while to get to the bottom of what she actually meant by that. But it turns out what she meant was that she was trying to enjoy life more. And she thought that rationality was about valuing money, and getting a good job, and being productive and efficient. And she just wanted to relax and enjoy sunsets, and take it easy. To express that she said that she wanted to be less rational.

Here’s one more clip of Spock and Kirk after they left that planet. Basically, sorry for the spoiler, guys, but Kirk finds a way to cure Spock of his newfound emotions, and bring him back on board as the emotionless Vulcan he always was.

[50:00]

[video]

Kirk: We haven’t heard much from you about Omicron Ceti III, Mr. Spock.

Spock: I have little to say about it, Captain. Except that, for the first time in my life, I was happy.

[end video]

Julia: I know, awww. So I want to end on this because I think the main takeaway from all of this that I want to leave you guys with is:

If you think you’re acting rationally, but you consistently keep getting the wrong answer, and you consistently keep ending up worse off than you could be….Then the conclusion that you should draw from that is not that rationality is bad. It’s that you’re being bad at rationality. In other words, you’re doing it wrong! Thank you!

[applause]

 

 

36 comments

Thanks for the link, this is awesome! It looks like there are going to be a LOT of great videos coming out of Skepticon.

Do you guys want it transcribed?

Yes.

Update: I should add that I will personally benefit from a transcription of the video, as I plan to quote from it regularly.

[anonymous]:

It looks like there are going to be a LOT of great videos coming out of Skepticon.

Agreed, all of the talks I have watched so far have been excellent. I hope Eliezer's is posted soon, it looks promising.

[anonymous]:

While EY's love affair with leather vests continues, that seems quite the contrast compared to his singularity summit appearance. Has he been taking fashion advice from someone?

[anonymous]:

It may just be the lighting, but it looks far too tight on him.

In the talk, rationality is positioned as something that decides which of System 1 or System 2 should be used in a particular situation. But that's straw System 2! What's actually happening is that your System 2 is smart enough to get input from System 1 and update on it appropriately (or conversely, as in taking ideas seriously on an intuitive level).

Great point. That's true in many cases, such as when you're trying to decide what school to go to, and you make the decision deliberatively but take into account the data from your intuitive reactions to the schools.

But in other cases, such as chess-playing, aren't you mainly just deciding based on your System 1 judgments? (Admittedly I'm no chess player; that's just my impression of how it works.)

I agree you need to use System 2 for your meta-judgment about which system to use in a particular context, but once you've made that meta-judgment, I think there are some cases in which you make the actual judgment based on System 1.

Am I correctly understanding your point?

To the moderate theist who says he or she believes some things based on science/rationality/reason/etc. and some based on faith, I reply that the algorithm that sorts claims between categories is responsible for all evaluations. This means that when he or she only selects reasonable religious claims to be subject to reason, reason is responsible for none of the conclusions, and faith is responsible for all of them.

In the same way, apparently pure System 1 judgments are best thought of as a special case of System 2 judgments so long as System 2 decided how to make them.

I think implicit in almost any decision to use System 1 judgments is that if System 2 sees an explicit failure of them, one will not execute the System 1 judgment.

I take issue with this section [31:57]:

The only reason that you have goals is because you have emotions, because you care about some outcomes of the world's more than others, because you feel positively about some potential outcomes and negatively about other potential outcomes. If you really didn't care about any potential state of the world more or less than other potential state of the world, it wouldn't matter how skilled your reasoning abilities were, you'd never have reason to do anything. [...] Emotions are clearly necessary for forming the goals, rationality is simply lame without them. [...] I would say that you can't be wrong about what you want.

This walks a fine line between naturalistic fallacy, taking goals outside System 2's domain, declaring reflection on own goals or improvement on System 1's recommendations impossible, acceptance of emotion's recommendations a sacred fact of mysterious origin; and giving a descriptive account of human cognition, similar in this non-normative role to evolutionary psychology. (Incidentally, this is the kind of thinking (or framing of thoughts) that leads people to declare that AIs must necessarily have emotions, or they'd just sit there, doing nothing.)

This is partially retracted later, based on ideas of distinction between terminal and instrumental goals; and rejection of emotion that's known to be caused by a false belief [38:26]:

Emotions can be instrumentally and epistemically irrational, and using rationality is what helps us recognize that and shape our goals based not on what our automatic emotional desires are, but on what our rationality-filtered emotional desires are.

This still leaves the status quo justification in place, with personal emotion being the source of purpose with some exceptions.

I disagree. Not all rationality deals with AI and the Singularity. I know the stuff I prefer reading about doesn't, and I thought it was pretty obvious that this talk doesn't as well. So yes, perhaps some super-intelligence wouldn't need emotions to have goals, but humans do. And it is implied that Julia is talking about human rationality in her speech.

It's like when you first learn physics, and they say "Think of the world this way. It's not actually this way, but for what you need to learn about right now, that's the best model to use"

Especially in an introductory course you can't get anywhere if you are so bogged down with trying to explain all the exceptions to the rules. Sometimes you've got to generalize.

The goal of preparing a good explanation is not served best by making these particular mistaken claims.

This is the kind of thinking (or framing of thoughts) that leads people to declare that AIs must necessarily have emotions, or they'd just sit there, doing nothing.

I disagree. Not all rationality deals with AI and the Singularity.

It's not clear what you disagree with, but the point was that a way of thinking about goals led to a wrong conclusion. The AI example was an attempt to show the assertion is wrong, but there is no telling what mischief a wrong thought will ultimately lead to.

The point was not that talks on human rationality should deal with AI or a singularity.

I don't necessarily agree with Vladimir_Nesov's comment; it depends on the interpretation of an ambiguous quote he cited, and I haven't heard this speech.

Thanks for pointing out. I was very unclear there.

Julia says that generally speaking, emotions are where goals come from.

She never explicitly says for humans, but I think that that is implied by the fact that she doesn't mention AI, etc.

If I understand Vladimir's (that's my dad's name, by the way!) argument, he is saying that he disagrees that goals have to come from emotions (especially in non-humans), and he dislikes this statement because then people assume that AI's must necessarily have emotions in order to function.

Now that you mention it, I guess I do agree with his facts (goals don't necessarily come from emotion in non-humans, and if people thought that then they would think AI's need emotion), but I do disagree that that needed to be a point in the talk. So what I disagree with is the fact that he takes issue with the talk for not including information that, while true, wasn't needed in the talk.

tl;dr- Vladimir says "you didn't include the non-human exception to the rule in the talk". I say "the non-human exception may be true, but isn't needed in the talk"

If I understand Vladimir's (that's my dad's name, by the way!) argument, he is saying that he disagrees that goals have to come from emotions (especially in non-humans), and he dislikes this statement because then people assume that AI's must necessarily have emotions in order to function.

This isn't my point, or even particularly important for my point (edited to clarify the incidental nature of my AI remark). Goals seem to be indeed significantly determined by emotions in humans. But this is not a defining property of something being a goal, and even in humans not a necessary way of implementing goals. You can do something despite not wanting to; a belief would have explanatory power in that case, since the drives that allow you to be driven by beliefs are present unconditionally, whether you have beliefs driving you to action against other emotions or not.

The heuristic of taking cues from your emotions that aren't known to be instrumentally or epistemically problematic is just that, a heuristic. It shouldn't be assigned the status of a fundamental principle that defines what a "goal" is, which is the connotational impression I got from the talk, and which is what I object to, leaving aside Julia's actual philosophical position on the question.

Goals seem to be indeed significantly determined by emotions in humans. But this is not a defining property of something being a goal, and even in humans not a necessary way of implementing goals.

I don't think she implies that emotions are necessary for implementing a goal - that was the point of mentioning a rationality "filter," which can aid in accurately translating emotional desires into practical goals that best fulfill those desires, and then in translating practical goals into effective actions.

Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word "emotion."

I don't think she implies that emotions are necessary for implementing a goal

That phrase was primarily in reply to daenerys, not Julia.

Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word "emotion."

What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive or decision theoretic or normative reasons for actions. See this post.

Upvoted for the clarification. Thanks!

What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive reasons for actions.

"I don't want to die," for example, is obviously both an emotional preference and the result of the natural evolution of the brain. That the brain is an evolved organ isn't disputed here.

Upvoting everyone. This was a really useful conversation, and I'm pretty sure I was wrong, so I definitely learned something. The evolutionary drives example was much more useful to me than the AI example. Thanks!

(Though I am still of the opinion that the talk itself was great without that info; since it's an introduction to the topic, I don't expect it to be able to cover everything.)

There are explanations of different kinds that hold simultaneously. An explanation of the wrong kind (for example, an evolutionary explanation) that is merely similar (because of shared reasons) to the relevant explanation of the right kind (in this case "goals", a normative or at least cognitive explanation) can still be used to get correct answers, as a heuristic (evolutionary psychology has a bit of predictive power about human behavior and even goals). This makes it even easier to confuse them: instead of serving as a rule of thumb, a source of knowledge, the explanation of the wrong kind takes on a role that doesn't belong to it and becomes a definition of the thing being sought. For example, "maximizing inclusive fitness" can come to be believed to be an actual human goal.

Kudos to Julia for not only introducing a solid take on the relationship between reasoning and emotion, but also for doing so in a way that had the audience eating out of her hand. Of all the Skepticon talks that dealt with rationality, I think this was received the most enthusiastically.

She handled the impromptu voice-over brilliantly, too! I nearly strangled the sound guy.

Emotions are clearly necessary for forming the goals, rationality is simply lame without them.

What does this mean?

a) Emotions are logically necessary for forming goals, rational beings are incapacitated without emotions.
b) Emotions are logically necessary for forming goals, rational beings are incapacitated without goals.
c) Emotions are logically necessary for forming goals, rationality has no normative value to a rational being without emotions.
d) Emotions are logically necessary for forming goals, rationality has no normative value to a rational being without goals.
e) Emotions are necessary for forming goals among humans, rational humans are incapacitated without emotions.
f) Emotions are necessary for forming goals among humans, rational humans are incapacitated without goals.
g) Emotions are necessary for forming goals among humans, rationality has no normative value to humans without emotions.
h) Emotions are necessary for forming goals among humans, rationality has no normative value to humans without goals.
i) (Other.)

Good question. My intended meaning was closest to (h). (Although isn't (g) pretty much equivalent?)

Yay! Word of God on the issue! (Warning: TvTropes). Good to know I wasn't too far off-base.

I can see how g and h can be considered equivalent given the implication emotions -> goals. In fact, I would assume that would also make a and b pretty much equivalent, as well as c and d, e and f, etc.

Incidentally, the filmmaker didn't capture my slide with the diagram of the revised model of rationality and emotions in ideal human* decision-making, so I've uploaded it.

The Straw Vulcan model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-00-pm.png

My revised model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-14-pm.png

*I realize now that I need this modifier, at least on Less Wrong!
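
For readers who can't view the slides, here is a minimal toy sketch, not taken from the talk or the slides, of how the two models might differ, assuming the revised model works roughly as described in this thread (emotions supply the desires; rationality filters them into goals and actions). All names and numbers below are hypothetical.

```python
# A purely illustrative contrast between the two models discussed above.
# Nothing here is from the talk or the slides; every name and number is made up.

def straw_vulcan_decision(options):
    """Straw Vulcan model: ignore emotions entirely and 'just be logical'.
    With no desires to rank outcomes, there is nothing to optimize."""
    return options[0]  # an arbitrary pick dressed up as 'logic'

def revised_decision(options, desirability):
    """Revised model (as discussed in this thread): emotions supply the
    desires/values; rationality filters them and picks the option that
    best satisfies them."""
    return max(options, key=desirability)

if __name__ == "__main__":
    options = ["stay home", "see friends", "work late"]
    # Hypothetical emotional valuations of each option.
    desirability = {"stay home": 0.2, "see friends": 0.9, "work late": 0.4}.get
    print(straw_vulcan_decision(options))           # 'stay home' (arbitrary)
    print(revised_decision(options, desirability))  # 'see friends'
```

The only point of the contrast is that the "filter" needs something to filter; strip out the emotional valuations and the "logical" decider has nothing left to optimize.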

If emotions are necessary but not sufficient for forming goals among humans, the claim might be that rationality has no normative value to humans without goals, without addressing rationality's normative value to humans who have emotions but no goals.

If you see them as equivalent, this implies that you believe emotions are necessary and sufficient for forming goals among humans.

As much as this might be true for humans, it would be strange to say that, after goals are formed, the loss of emotion in a person would obviate all their already-formed non-emotional goals. So it's not just that you're discussing the human case and not the AI case; you're discussing the typical human.

From the context of her talk I have a high confidence that the "them" at the end of her sentence refers to emotions, not goals. Therefore I would reject translations b, d, f, and h.

I would also reject a as far too sweeping a claim for the level of her talk.

Also from the context of her talk I would say that the "normative value" translations are much more likely than the "incapacitated" translations. My confidence in this is much lower than my confidence in my first assertion though.

That leaves us with c, g, and other. I've already argued that her talk is implicitly about human rationality, leaving us with g, or other.

Can't think of a better option, so my personal opinion is g.

Excellent!

My favorite insight from the talk: When people say "you're being too rational," they mean "you're using too much System 2 thinking."

The problem is that they are ignoring the data produced by System 1 thinking, which they wouldn't, had they used a bit more (well-informed) System 2 thinking (deliberate rationality), or trained System 1 to recognize better when it has a chance of directly producing useful judgments.
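
As a loose illustration of that point (not from the comment above): "well-informed System 2 thinking" can treat the System 1 judgment as one more piece of evidence rather than discarding it. The weights and scores in this sketch are entirely hypothetical.

```python
# Toy illustration only: folding a System 1 'gut' judgment into a
# System 2 deliberation instead of throwing it away.
# All scores and weights are made up for the example.

def system1_gut_score(option):
    """Fast, cached judgment (e.g. 'this person seems trustworthy')."""
    return {"candidate_a": 0.8, "candidate_b": 0.3}[option]

def system2_analysis_score(option):
    """Slow, explicit reasoning from listed pros and cons."""
    return {"candidate_a": 0.5, "candidate_b": 0.6}[option]

def deliberate(option, gut_weight=0.4):
    """Well-informed System 2: combine both signals rather than
    pretending the intuitive data doesn't exist."""
    return (gut_weight * system1_gut_score(option)
            + (1 - gut_weight) * system2_analysis_score(option))

if __name__ == "__main__":
    for option in ("candidate_a", "candidate_b"):
        print(option, round(deliberate(option), 2))
```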

I agree that this is the most common form of the mistake, but I think people would say "you're being too rational" even if you did incorporate data from System 1 thinking. I suspect what triggers the "you're too rational" response is the duration of one's System 2 thinking: Spending more than the normal amount of time doing System 2 thinking is what sets off the Spock alarm. Relevant xkcd.

You can also come to weird conclusions, and people would say things like "This isn't what normal people do [in this situation]!" But taking entirely too long to decide is actually an error, as Julia pointed out.

(I see that I missed your point.)

Thanks! I will probably link to this talk.

I am wondering, though: if I were to try to explain the material presented in the video, what would be the pros and cons of using "intuition" and "deductive reasoning" instead of "system 1" and "system 2" with a lay person?

The most obvious pro of using "intuition" and "deductive reasoning" would be that it would be a smaller cognitive leap for the audience to follow. For example, when you say "System 1" the audience has to translate: System 1 -> intuition -> one of the two types of reasoning.

Just saying "intuition" removes the need for this extra step.

A possible pro of using "system 1" and "system 2" is that it might allow the audience to distance themselves from any emotional reactions they might have to the ideas of intuition and deduction.

Yup, I went through the same reasoning myself -- I decided on "system 1" and "system 2" for their neutral tone, and also because they're Stanovich's preferred terms.

A "skim" of the talk - https://skimmablevideos.herokuapp.com/skims/show/54b60aa632b49802009b98b3 .

I broke it down into sections and subsections with associated descriptions, so it's easy to skim.