
Comment author: Viliam 18 January 2017 03:36:46PM *  0 points [-]

There is a group (not CFAR) that allegedly uses the following tactics:

1) They teach their students (among other things) that consistency is good, and compartmentalization is bad and stupid.
2) They make the students admit explicitly that the seminar was useful for them.
3) They make the students admit explicitly that one of their important desires is to help their friends.
...and then...
4) They create strong pressure on the students to tell all their friends about the seminar, and to make them sign up for one.

The official reasoning is that if you want to be consistent, and if you want good things to happen to your friends, and if the seminar is a good thing... then logically you should want to make your friends attend the seminar. And if you want to make your friends attend the seminar, you should immediately take an action that increases the probability of that, especially if all it takes is to pick up your phone and make a few calls!

If there is anything stopping you, then you are inconsistent -- which means stupid! -- and you have failed at the essential lesson that was taught to you during the previous hours -- which means you will keep failing at life, because you are a compartmentalizing loser, and you can't stop being one even after the whole process was explained to you in great detail, and you even paid a lot of money to learn this lesson! Come on, don't throw away everything, pick up the damned phone and start calling; it is not that difficult, and your first experience with overcoming compartmentalization will feel really great afterwards, trust me!

So, what exactly is wrong with this reasoning?

First, when someone says "A implies B", that doesn't mean you need to immediately jump up and start doing B. There is still the possibility that A is false, and the possibility that "A implies B" is actually a lie. Or maybe "A implies B" holds only in some situations, or only with a certain probability. Probabilistic thinking and paying attention to detail are not the opposite of consistency.

Second, just because something is good does not mean it is the best available option. Maybe you should spend some time thinking about even better options.

Third, there is a difference between trying to be consistent and believing in your own infallibility. You are allowed to have probabilistic beliefs, and to admit openly that those beliefs are probabilistic. You can believe that A is true with probability 80% while admitting the possibility that A is false. That is not the opposite of consistency. Furthermore, you are allowed to take an outside view and admit that with some probability you are wrong. That is especially important when calculating the expected utility of actions that strongly depend on whether you are right or wrong.

Fourth, the most important consistency is internal. Just because you are internally consistent, it doesn't mean you have to explain all your beliefs truthfully and meaningfully to everyone, especially to people who are obviously trying to manipulate you.

...but if you learned about the concept of consistency just a few minutes ago, you probably don't realize all this.

Comment author: Qiaochu_Yuan 19 January 2017 04:56:12AM 0 points [-]

I would describe the problem as a combination of privileging the hypothesis and privileging the question. First, even granted that you want to both be consistent and help your friends, it's not clear that telling them about the seminar is the most helpful thing you can do for your friends; there are lots of other hypotheses you could try generating if you were given the time. Second, there are lots of other things you might want and do something about wanting, and someone's privileging the question by bringing these particular things to your attention in this particular way.

This objection applies pretty much verbatim to most things strangers might try to persuade you to do, e.g. donate money to their charity.

Comment author: Qiaochu_Yuan 12 January 2017 07:27:01AM *  1 point [-]

Really not a fan of the title; my objection is basically the same as moridinamael's. The thing you're pointing out as a mistake is not rationality but a particular bad move in a social game, namely stating certain kinds of unpopular opinions.

One possible steelman of the point I think you're making can be found in Paul Christiano's If we can't lie to others, we will lie to ourselves. In any case, the way I get around this is by not engaging, even slightly publicly, in conversations where I might even have an opportunity to state my least popular opinions.

Comment author: Qiaochu_Yuan 12 January 2017 06:51:18AM *  5 points [-]

It's very annoying trying to have this conversation without downvotes. Anyway, here are some sentences.

  1. This is not quite the St. Petersburg paradox; in the St. Petersburg setup, you don't get to choose when to quit, and the confusion is about how to evaluate an opportunity which apparently has infinite expected value. In this setup the option "always continue playing" has infinite expected value, but even if you toss it out there are still countably many options left, namely "quit playing after N victories," each of which has higher expected value than the last, and it's still unclear how to pick between them.

  2. Utility not being linear in money is a red herring here; you can just replace money with utility in the problem directly, as long as your utility function is unbounded. One resolution is to argue that this sort of phenomenon suggests that utility functions ought to be bounded. (One way of concretizing what it means to have an unbounded utility function: you have an unbounded utility function if and only if there is a sequence of outcomes each of which is at least "twice as good" as the previous, in the sense that you would prefer a 50% chance of the better outcome and a 50% chance of some fixed outcome to a 100% chance of the worse outcome. A brief formalization of this condition appears after this list.)

  3. Thinking about your possible strategies before you start playing this game, there are infinitely many: for every nonnegative integer N, you can choose to stop playing after N rounds, or you can choose to never stop playing. Each of these strategies is more valuable than the previous one, and the last strategy has infinite expected value (see the numerical sketch after this list). If you state the question in terms of utilities, that means there's some sense in which the naive expected utility maximizer is doing the right thing, if it has an unbounded utility function.

  4. On the other hand, the foundational principled argument for taking expected utility maximization seriously as an (arguably toy) model of good decision-making is the vNM theorem, and in the setup of the vNM theorem lotteries (probability distributions over outcomes) always have finite expected utility, because 1) the utility function always takes finite values (an infinite value would violate the continuity axiom), and 2) lotteries are only ever over finitely many possible states of the world. In this setup, without a finite bound on the total number of rounds, the possible states of the world are given by possible sequences of coin flips, of which there are uncountably many, and the lottery over them you need to consider to decide how good it would be to never stop playing involves all of them. So, you can either reject the setup because the vNM theorem doesn't apply to it, or reject the vNM theorem because you want to understand decision making over infinitely many possible outcomes; in the latter case there's no reason a priori to talk about expected utility maximization. (This point also applies to the St. Petersburg paradox.)

  5. If you want to understand decision making over infinitely many possible outcomes, you run into a much more basic problem which has nothing to do with expected values: suppose I offer you a sequence of possible outcomes, each of which is strictly more valuable than the previous one (and this can happen even with a bounded utility function as long as it takes infinitely many values, although, again, there's no reason a priori to talk about expected utility maximization in this setting). Which one do you pick?
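
Since the thread doesn't restate the game itself, here is a minimal numerical sketch of points 1 and 3 under a stand-in assumption of mine: each round either triples your current winnings with probability 1/2 or wipes them out. Every "stop after N wins" commitment then has higher expected value than the previous one, even though the probability of ever collecting anything shrinks toward zero.

```python
# Stand-in game (assumption, not from the original post): each round either
# triples your current winnings with probability 1/2 or ends the game with nothing.
# Committing in advance to "stop after n wins" pays start * multiplier**n with
# probability win_prob**n, and nothing otherwise.

def expected_value_stop_after(n_wins, start=1.0, multiplier=3.0, win_prob=0.5):
    """Expected payoff of the strategy 'quit after n_wins successful rounds'."""
    return start * (win_prob * multiplier) ** n_wins

for n in range(6):
    ev = expected_value_stop_after(n)
    p_collect = 0.5 ** n
    print(f"stop after {n} wins: EV = {ev:.3f}, P(collect anything) = {p_collect:.3f}")

# EV grows like 1.5**n without bound, so no finite stopping point is best,
# which is exactly the "each strategy beats the previous one" problem above.
```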
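
And to make the "twice as good" condition from point 2 explicit, here is a brief formalization of my own, writing u for the utility function, z for the fixed outcome, and x_0, x_1, x_2, ... for the sequence of outcomes, and assuming the first outcome is strictly preferred to the fixed one:

```latex
% "x_{n+1} is at least twice as good as x_n": a 50/50 lottery between x_{n+1}
% and the fixed outcome z is weakly preferred to receiving x_n for certain.
\[
  \tfrac{1}{2}\,u(x_{n+1}) + \tfrac{1}{2}\,u(z) \;\ge\; u(x_n)
  \quad\Longleftrightarrow\quad
  u(x_{n+1}) - u(z) \;\ge\; 2\bigl(u(x_n) - u(z)\bigr).
\]
% Iterating, with u(x_0) > u(z):
\[
  u(x_n) - u(z) \;\ge\; 2^{n}\bigl(u(x_0) - u(z)\bigr) \;\to\; \infty,
\]
% so such a sequence exists exactly when u is unbounded above.
```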

Comment author: gwern 05 January 2017 09:23:24PM 8 points [-]

Isn't this just the St Petersburg paradox?

Comment author: Qiaochu_Yuan 12 January 2017 06:15:42AM *  1 point [-]

No. In the St. Petersburg setup you don't get to choose when to quit, you only get to choose whether to play the game or not. In this game you can remove the option for the player to just keep playing, and force the player to pick a point after which to quit, and there's still something weird going on there.

Comment author: JonahSinick 21 December 2016 04:26:16AM *  3 points [-]

Glad you liked it :-).

> So I'd be interested to hear a little more info on methodology - what programming language(s) you used, how you generated the graphs, etc.

I used R for this analysis. Some resources that you might find relevant:

> And depending on how far back most of this data was collected, plausibly most of the Berkeley respondents were high school or college students (UC Berkeley alone has over 35,000 students), since for awhile that was the main demographic of Facebook users, and probably for awhile longer that was the main demographic of Facebook users willing to take personality tests.

Douglas_Knight is correct – the average age of users is quite low, at ~26 years old both for the high conscientiousness cities and the low conscientiousness cities.

Comment author: Qiaochu_Yuan 23 December 2016 07:16:28PM 1 point [-]

Thanks for the links!

Comment author: Douglas_Knight 20 December 2016 10:11:57PM *  2 points [-]

Actually, two of your complaints cancel out. You should expect that the population living in Berkeley has a very young personality, but if all the data is from college students, then there's nothing special about Berkeley (except that it is large and thus small effects are statistically significant — but the claim is that it has a large effect).

I think you are correct that the data is all college students (or at least fairly young people). I believe this because the cities being discussed are the hometown, not the current residence, which is the kind of thing you'd do with college students. In any event, studying hometown controls for the age demographics of Berkeley. But Jonah should have explicitly controlled for age.

Added: poking around the website I don't see a clear answer to how old the data is. Most of it seems to have been collected by 2011, but I'm not sure because there are lots of variations. Each big5 score is labeled with the date taken.

Comment author: Qiaochu_Yuan 22 December 2016 06:41:16PM 0 points [-]

> I think you are correct that the data is all college students (or at least fairly young people). I believe this because the cities being discussed are the hometown, not the current residence, which is the kind of thing you'd do with college students. In any event, studying hometown controls for the age demographics of Berkeley. But Jonah should have explicitly controlled for age.

Good point, I missed this.

Comment author: Qiaochu_Yuan 20 December 2016 09:08:15PM *  2 points [-]

Thanks for writing this! I really think people should be doing this (applying well-known algorithms to interesting datasets and seeing what happens) a lot more often overall, and it's on my list of skills I'd really like to learn personally. So I'd be interested to hear a little more info on methodology - what programming language(s) you used, how you generated the graphs, etc.

I'm pretty skeptical of making any connections to the Bay Area rationalist community based on Berkeley's conscientiousness score (which I think is interesting but not for this reason). There are 100,000 people living in Berkeley, and most of them aren't rationalists. And depending on how far back most of this data was collected, plausibly most of the Berkeley respondents were high school or college students (UC Berkeley alone has over 35,000 students), since for awhile that was the main demographic of Facebook users, and probably for awhile longer that was the main demographic of Facebook users willing to take personality tests. (Edit: But see Douglas_Knight's comment below.) In general I'd think more about selection effects like this before drawing any conclusions.

Comment author: LawrenceC 20 December 2016 08:44:01PM 1 point [-]

I think they're equivalent in a sense, but that bucket diagrams are still useful. A bucket can also occur when you conflate multiple causal nodes. So in the first example, the kid might not even have a conscious idea that there are three distinct causal nodes ("spelled oshun wrong", "I can't write", "I can't be a writer"), but instead treats them as a single node. If you're able to catch the flinch, introspect, and notice that there are actually three nodes, you're already a big part of the way there.

Comment author: Qiaochu_Yuan 20 December 2016 08:52:05PM 5 points [-]

The bucket diagrams are too coarse, I think; they don't keep track of what's causing what and in what direction. That makes it harder to know what causal aliefs to inspect. And when you ask yourself questions like "what would be bad about knowing X?" you usually already get the answer in the form of a causal alief: "because then Y." So the information's already there; why not encode it in your diagram?
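
As a toy illustration of that difference (mine, not from the post), here is how the three nodes from the example upthread look when stored as one undifferentiated bucket versus as a small directed causal diagram:

```python
# Toy illustration (not from the post): the same beliefs stored as one
# undifferentiated "bucket" versus as a small causal diagram with directed edges.

bucket = {"spelled oshun wrong / I can't write / I can't be a writer": True}

# The causal diagram keeps the nodes separate and records which alief points where,
# so "what would be bad about knowing X?" can be asked of each edge individually.
causal_diagram = {
    "nodes": ["spelled oshun wrong", "I can't write", "I can't be a writer"],
    "edges": [
        ("spelled oshun wrong", "I can't write"),   # "because then Y"-style alief
        ("I can't write", "I can't be a writer"),
    ],
}
```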

Comment author: Benquo 20 December 2016 05:25:51PM 2 points [-]

Claim 3: Irrelevant nitpicks are an important problem in comment sections on sites such as LessWrong.

If you want to discuss this claim, I encourage you to do it as a reply to this comment.

Comment author: Qiaochu_Yuan 20 December 2016 08:45:40PM 5 points [-]

I think it is somewhat important (it affects incentives for writers to write), but I think it's a symptom of something very important: namely, that such comments are motivated by a desire to win an argument rather than by a desire to find out what's true. If you want to win an argument you nitpick the weakest part of a post; if you want to find out what's true you update on the strongest part.

I think it would be amazing if LW could in fact become a community of people primarily motivated by a desire to find out what's true (appropriately tempered by a desire to actually do something with all those truths), but it's unclear how realistic this desire is. Another approach more in line with the kind of stuff Paul's been thinking about is to think about mechanisms for getting good comments out of people who just want to win arguments.

Comment author: HungryHippo 20 December 2016 05:55:18PM *  3 points [-]

Very interesting article!

I'm incidentally re-reading "Feeling Good" and parts of it deal with situations exactly like the ones Oshun-Kid is in.

From Chapter 6 ("Verbal Judo: How to talk back when you're under the fire of criticism"), I quote:

Here’s how it works. When another person criticizes you, certain negative thoughts are automatically triggered in your head. Your emotional reaction will be created by these thoughts and not by what the other person says. The thoughts which upset you will invariably contain the same types of mental errors described in Chapter 3: overgeneralization, all-or-nothing thinking, the mental filter, labeling, etc. For example, let’s take a look at Art’s thoughts. His panic was the result of his catastrophic interpretation: “This criticism shows how worthless I am.” What mental errors is he making? In the first place, Art is jumping to conclusions when he arbitrarily concludes the patient’s criticism is valid and reasonable. This may or may not be the case. Furthermore, he is exaggerating the importance of whatever he actually said to the patient that may have been undiplomatic (magnification), and he is assuming he could do nothing to correct any errors in his behavior (the fortune teller error). He unrealistically predicted he would be rejected and ruined professionally because he would repeat endlessly whatever error he made with this one patient (overgeneralization). He focused exclusively on his error (the mental filter) and over-looked his numerous other therapeutic successes (disqualifying or overlooking the positive). He identified with his erroneous behavior and concluded he was a “worthless and insensitive human being” (labeling). The first step in overcoming your fear of criticism concerns your own mental processes: Learn to identify the negative thoughts you have when you are being criticized. It will be most helpful to write them down using the double-column technique described in the two previous chapters. This will enable you to analyze your thoughts and recognize where your thinking is illogical or wrong. Finally, write down rational responses that are more reasonable and less upsetting.

And quoting your article:

(You might take a moment, right now, to name the cognitive ritual the kid in the story should do (if only she knew the ritual). Or to name what you think you'd do if you found yourself in the kid's situation -- and how you would notice that you were at risk of a "buckets error".)

I would encourage Oshun-Kid to cultivate the following habit:

  1. Notice when you feel certain (negative) emotions. (E.g. anxiety, sadness, fear, frustration, boredom, stress, feeling depressed or self-critical, etc.) Recognizing these (sometimes fleeting) moments is a skill that you get better at as you practice.
  2. Try putting down in words (write it down!) why you feel that emotion in this situation. This too, you will get better at as you practice. These are your Automatic Thoughts. E.g. "I'm always late!".
  3. Identify the cognitive distortions present in your automatic thought. E.g. Overgeneralization, all-or-nothing thinking, catastrophizing, etc.
  4. Write down a Rational Response that is absolutely true (don't try to deceive yourself --- it doesn't work!) and also less upsetting. E.g.: I'm not literally always late! I'm sometimes late and sometimes on time. If I'm going to beat myself up for the times I'm late, I might as well feel good about myself for the times I'm on time. Etc.

Write steps 2, 3, and 4 in three columns, adding a new row each time you notice a negative emotion.

I'm actually surprised that Cognitive Biases are focused on to a greater degree than Cognitive Distortions are in the rational community (based on google-phrase search on site:lesswrong.com), especially when Kahneman writes more or less in Thinking: Fast and Slow that being aware of cognitive biases has not made him that much better at countering them (IIRC) while CBT techniques are regularly used in therapy sessions to alleviate depression, anxiety, etc. Sometimes as effectively as in a single session.

I also have some objections as to how the teacher behaves. I think the teacher would be more effective if he said stuff like: "Wow! I really like the story! You must have worked really hard to make it! Tell me how you worked at it: did you think up the story first and then write it down, or did you think it up as you were writing it, or did you do it a different way? Do you think there are authors who do it a different way from you or in a similar way to you? Do you think it's possible to become a better writer, just like a runner becomes a faster runner or like a basketball player becomes better at basketball? How would you go about doing that to become a better author? If a basketball player makes a mistake in a game, does it always make him a bad basketball player? Do the best players always do everything perfectly, or do they sometimes make mistakes? Should you expect of yourself to always be a perfect author, or is it okay for you to sometimes make mistakes? What can you do if you discover a mistake in your writing? Is it useful to sometimes search through your writings to find mistakes you can fix? Etc."

Edit: I personally find that when tutoring someone and you notice in real time that they are making a mistake or are just about to make a mistake, it's more effective to correct them in the form of a question rather than outright saying "that's wrong" or "that's incorrect" or similar.

E.g.:

The pupil says: "... and then I multiply nine by eight and get fifty-four ..." Here, I wouldn't say "that's a mistake." I would rather say "hmm... is that the case?" or "is that so?" or "wait a second, what did you say that was again?" or "hold on, can you repeat that for me?". It's a bit difficult for me to translate my question-phrases from Norwegian to English, because a lot of the effect is in the tone of voice. My theory for why this works is that when you say "that's wrong" or similar, you are more likely to express disapproval of the student's actions or of the student herself (and the student is more likely to read that emotion into you whether or not you express it), whereas when you put it in the form of a question, the emotions you express are more of the form: mild surprise, puzzlement, uncertainty, curiosity, interest, etc., which are not directly rejecting or disapproving emotions on your part and therefore don't make the student feel bad.

After you do this a couple of times, the student becomes aware that every time you put a question to them, they are expected to double check that something is correct and to justify their conclusion.

Comment author: Qiaochu_Yuan 20 December 2016 08:37:00PM 7 points [-]

> I'm actually surprised that Cognitive Biases are focused on to a greater degree than Cognitive Distortions are in the rational community (based on google-phrase search on site:lesswrong.com), especially when Kahneman writes more or less in Thinking: Fast and Slow that being aware of cognitive biases has not made him that much better at countering them (IIRC) while CBT techniques are regularly used in therapy sessions to alleviate depression, anxiety, etc. Sometimes as effectively as in a single session.

The concept of cognitive biases is sort of like training wheels; I continue teaching people about them (at SPARC, say) as a first step on the path to getting them to recognize that they can question the outputs of their brain processes. At first, it helps make things feel a lot less woo to be able to point to a bunch of studies clearly confirming that some cognitive bias exists. And once you've internalized that things like cognitive biases exist, I think it's a lot easier to move on to other, more helpful things, at least for a certain kind of person (like me; this is the path I took historically).
