
Checklist of Rationality Habits

Post author: AnnaSalamon 07 November 2012 09:19PM
As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical, and more into fine-grained habits.

Below is the checklist of rationality habits we have been using in the minicamps' opening session.  It was co-written by Eliezer, myself, and a number of others at CFAR.  As mentioned below, the goal is not to assess how "rational" you are, but, rather, to develop a personal shopping list of habits to consider developing.  We generated it by asking ourselves, not what rationality content it's useful to understand, but what rationality-related actions (or thinking habits) it's useful to actually do.

I hope you find it useful; I certainly have.  Comments and suggestions are most welcome; it remains a work in progress. (It's also available as a pdf.) 

---

This checklist is meant for your personal use so you can have a wish-list of rationality habits, and so that you can see if you're acquiring good habits over the next year—it's not meant to be a way to get a 'how rational are you?' score, but, rather, a way to notice specific habits you might want to develop.  For each item, you might ask yourself: did you last use this habit...
  • Never
  • Today/yesterday
  • Last week
  • Last month
  • Last year
  • Before the last year

  1. Reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination.
    1. When I see something odd - something that doesn't fit with what I'd ordinarily expect, given my other beliefs - I successfully notice, promote it to conscious attention, and think "I notice that I am confused" or some equivalent thereof. (Example: You think that your flight is scheduled to depart on Thursday. On Tuesday, you get an email from Travelocity advising you to prepare for your flight “tomorrow”, which seems wrong. Do you successfully raise this anomaly to the level of conscious attention? Based on the experience of an actual LWer who failed to notice confusion at this point and missed their plane flight.)

    2. When somebody says something that isn't quite clear enough for me to visualize, I notice this and ask for examples. (Recent example from Eliezer: A mathematics student said they were studying "stacks". I asked for an example of a stack. They said that the integers could form a stack. I asked for an example of something that was not a stack.) (Recent example from Anna: Cat said that her boyfriend was very competitive. I asked her for an example of "very competitive." She said that when he’s driving and the person next to him revs their engine, he must be the one to leave the intersection first—and when he’s the passenger he gets mad at the driver when they don’t react similarly.)


    3. I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode. (Recent example from Anna: Noticed myself explaining to myself why outsourcing my clothes shopping does make sense, rather than evaluating whether to do it.)


    4. I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration. (Recent example from Anna: I have a failure mode where, when I feel socially uncomfortable, I try to make others feel mistaken so that I will feel less vulnerable. Pulling this thought into words required repeated conscious effort, as my mind kept wanting to just drop the subject.)


    5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")


  2. Questioning and analyzing beliefs (after they come to your attention).
    1. I notice when I'm not being curious. (Recent example from Anna: Whenever someone criticizes me, I usually find myself thinking defensively at first, and have to visualize the world in which the criticism is true, and the world in which it's false, to convince myself that I actually want to know. For example, someone criticized us for providing inadequate prior info on what statistics we'd gather for the Rationality Minicamp; and I had to visualize the consequences of [explaining to myself, internally, why I couldn’t have done any better given everything else I had to do], vs. the possible consequences of [visualizing how it might've been done better, so as to update my action-patterns for next time], to snap my brain out of defensive-mode and into should-we-do-that-differently mode.)


    2. I look for the actual, historical causes of my beliefs, emotions, and habits; and when doing so, I can suppress my mind's search for justifications, or set aside justifications that weren't the actual, historical causes of my thoughts. (Recent example from Anna: When it turned out that we couldn't rent the Minicamp location I thought I was going to get, I found lots and lots of reasons to blame the person who was supposed to get it; but realized that most of my emotion came from the fear of being blamed myself for a cost overrun.)


    3. I try to think of a concrete example that I can use to follow abstract arguments or proof steps. (Classic example: Richard Feynman being disturbed that Brazilian physics students didn't know that a "material with an index" meant a material such as water. If someone talks about a proof over all integers, do you try it with the number 17? If your thoughts are circling around your roommate being messy, do you try checking your reasoning against the specifics of a particular occasion when they were messy?)


    4. When I'm trying to distinguish between two (or more) hypotheses using a piece of evidence, I visualize the world where hypothesis #1 holds, and try to consider the prior probability I'd have assigned to the evidence in that world, then visualize the world where hypothesis #2 holds; and see if the evidence seems more likely or more specifically predicted in one world than the other. (Historical example: During the Amanda Knox murder case, after many hours of police interrogation, Amanda Knox turned some cartwheels in her cell. The prosecutor argued that she was celebrating the murder. Would you, confronted with this argument, try to come up with a way to make the same evidence fit her innocence? Or would you first try visualizing an innocent detainee, then a guilty detainee, to ask with what frequency you think such people turn cartwheels during detention, to see if the likelihoods were skewed in one direction or the other?)
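The "visualize both worlds" habit above amounts to comparing likelihoods. Here is a toy sketch of that comparison in Python; every number in it is invented purely for illustration, not a real frequency estimate:

```python
# Toy sketch: compare how probable the same evidence is under two
# hypotheses. All numbers here are invented for illustration.

def likelihood_ratio(p_evidence_given_h1, p_evidence_given_h2):
    """How strongly the evidence favors hypothesis 1 over hypothesis 2."""
    return p_evidence_given_h1 / p_evidence_given_h2

# Guess: detainees turn cartwheels at roughly the same (low) rate
# whether guilty or innocent.
p_cartwheels_if_guilty = 0.01
p_cartwheels_if_innocent = 0.01

lr = likelihood_ratio(p_cartwheels_if_guilty, p_cartwheels_if_innocent)
print(lr)  # 1.0: the evidence doesn't favor either hypothesis
```

If the two guessed frequencies come out about equal, the evidence is not evidence for either side, no matter how vivid the prosecutor's story is.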


    5. I try to consciously assess prior probabilities and compare them to the apparent strength of evidence. (Recent example from Eliezer: Used it in a conversation about apparent evidence for parapsychology, saying that for this I wanted p < 0.0001, like they use in physics, rather than p < 0.05, before I started paying attention at all.)
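The point about parapsychology can be sketched in odds form: posterior odds equal prior odds times the Bayes factor of the evidence. The prior odds and Bayes factors below are made-up numbers, chosen only to show why a tiny prior demands very strong evidence:

```python
# Toy sketch: posterior odds = prior odds * Bayes factor.
# All the numbers below are invented for illustration.

def posterior_odds(prior_odds, bayes_factor):
    return prior_odds * bayes_factor

prior_odds_psi = 1e-8  # hypothetical prior odds assigned to a psi effect

weak = posterior_odds(prior_odds_psi, 20)        # roughly what p < 0.05 can buy
strong = posterior_odds(prior_odds_psi, 10_000)  # far stronger evidence

print(weak)    # ~2e-7: belief barely moves
print(strong)  # ~1e-4: still improbable, but now worth attention
```

With prior odds that low, a p < 0.05 result leaves the hypothesis essentially where it started, which is why much stricter thresholds are the ones worth paying attention to.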


    6. When I encounter evidence that's insufficient to make me "change my mind" (substantially change beliefs/policies), but is still more likely to occur in world X than world Y, I try to update my probabilities at least a little. (Recent example from Anna: Realized I should somewhat update my beliefs about being a good driver after someone else knocked off my side mirror, even though it was legally and probably actually their fault—even so, the accident is still more likely to occur in worlds where my bad-driver parameter is higher.)
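A small update like the bad-driver one can be sketched with a single application of Bayes' rule. The probabilities below are invented, with H standing for a hypothesis like "my driving is worse than I think":

```python
# Toy sketch: one small Bayesian update on weak evidence.
# The probabilities are invented for illustration.

def bayes_update(p_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

prior = 0.2               # prior P(H)
p_accident_if_h = 0.10    # an accident is slightly likelier if H is true
p_accident_if_not_h = 0.05

posterior = bayes_update(prior, p_accident_if_h, p_accident_if_not_h)
print(round(posterior, 3))  # 0.333: a nudge upward, not a reversal
```

The evidence is twice as likely under H, so the belief shifts noticeably but modestly; that is what "update at least a little" looks like numerically.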


  3. Handling inner conflicts; when different parts of you are pulling in different directions, you want different things that seem incompatible; responses to stress.
    1. I notice when I and my brain seem to believe different things (a belief-vs-anticipation divergence), and when this happens I pause and ask which of us is right. (Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.)


    2. When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))


    3. When facing a difficult decision, I check which considerations are consequentialist - which considerations are actually about future consequences. (Recent example from Eliezer: I bought a $1400 mattress in my quest for sleep, over the Internet hence much cheaper than the mattress I tried in the store, but non-returnable. When the new mattress didn't seem to work too well once I actually tried sleeping nights on it, this was making me reluctant to spend even more money trying another mattress. I reminded myself that the $1400 was a sunk cost rather than a future consequence, and didn't change the importance and scope of future better sleep at stake (occurring once per day and a large effect size each day).)


  4. What you do when you find your thoughts, or an argument, going in circles or not getting anywhere.
    1. I try to find a concrete prediction that the different beliefs, or different people, definitely disagree about, just to make sure the disagreement is real/empirical. (Recent example from Michael Smith: Someone was worried that rationality training might be "fake", and I asked if they could think of a particular prediction they'd make about the results of running the rationality units, that was different from mine, given that it was "fake".)


    2. I try to come up with an experimental test, whose possible results would either satisfy me (if it's an internal argument) or that my friends can agree on (if it's a group discussion). (This is how we settled the running argument over what to call the Center for Applied Rationality—Julia went out and tested alternate names on around 120 people.)


    3. If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".) (Recent example from Anna: Advised someone to stop spending so much time wondering if they or other people were justified; was told that they were trying to do the right thing; and asked them to taboo the word 'trying' and talk about how their thought-patterns were actually behaving.)


  5. Noticing and flagging behaviors (habits, strategies) for review and revision.
    1. I consciously think about information-value when deciding whether to try something new, or investigate something that I'm doubtful about. (Recent example from Eliezer: Ordering a $20 exercise ball to see if sitting on it would improve my alertness and/or back muscle strain.) (Non-recent example from Eliezer: After several months of procrastination, and due to Anna nagging me about the value of information, finally trying out what happens when I write with a paired partner; and finding that my writing productivity went up by a factor of four, literally, measured in words per day.)


    2. I quantify consequences—how often, how long, how intense. (Recent example from Anna: When we had Julia take on the task of figuring out the Center's name, I worried that a certain person would be offended by not being in control of the loop, and had to consciously evaluate how improbable this was, how little he'd probably be offended, and how short the offense would probably last, to get my brain to stop worrying.) (Plus 3 real cases we've observed in the last year: Someone switching careers is afraid of what a parent will think, and has to consciously evaluate how much emotional pain the parent will experience, for how long before they acclimate, to realize that this shouldn't be a dominant consideration.)


  6. Revising strategies, forming new habits, implementing new behavior patterns.
    1. I notice when something is negatively reinforcing a behavior I want to repeat. (Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.)


    2. I talk to my friends or deliberately use other social commitment mechanisms on myself. (Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had some juice left over when work was done. I looked at Michael Smith and jokingly said, "But if I don't drink this now, it will have been wasted!" to prevent the sunk cost fallacy.) (Example from Eliezer: When I was having trouble getting to sleep, I (a) talked to Anna about the dumb reasoning my brain was using for staying up later, and (b) set up a system with Luke where I put a '+' in my daily work log every night I showered by my target time for getting to sleep on schedule, and a '−' every time I didn't.)


    3. To establish a new habit, I reward my inner pigeon for executing the habit. (Example from Eliezer: Multiple observers reported a long-term increase in my warmth / niceness several months after... 3 repeats of 4-hour writing sessions during which, in passing, I was rewarded with an M&M (and smiles) each time I complimented someone, i.e., remembered to say out loud a nice thing I thought.) (Recent example from Anna: Yesterday I rewarded myself using a smile and happy gesture for noticing that I was doing a string of low-priority tasks without doing the metacognition for putting the top priorities on top. Noticing a mistake is a good habit, which I’ve been training myself to reward, instead of just feeling bad.)


    4. I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)


    5. I use the outside view on myself. (Recent example from Anna: I like to call my parents once per week, but hadn't done it in a couple of weeks. My brain said, "I shouldn't call now because I'm busy today." My other brain replied, "Outside view, is this really an unusually busy day and will we actually be less busy tomorrow?")

Comments (178)

Comment author: Kaj_Sotala 07 November 2012 01:26:39PM 27 points

Very nice list! I feel like this one in particular is one of the most important ones:

I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)

To give my own example: I try to be vegetarian, but occasionally the temptation of meat gets the better of me. At some point I realized that whenever I walked past a certain hamburger place - which was something that I typically did on each working day - there was a high risk of me succumbing. Obvious solution: modify my daily routine to take a slightly longer route which avoided any hamburger places. Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

Comment author: Swimmer963 07 November 2012 03:51:42PM 10 points

Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

My personal example: arranging to go exercise on the way to or from somewhere else will drastically increase the probability that I'll actually go. There's a pool a 5 minute bike ride from my house, which is also on the way home from most of the places I would be biking from. Even though the extra 10 minutes round trip is pretty negligible (and counts as exercise itself), I'm probably 2x as likely to go if I have my swim stuff with me and stop off on the way home. The effect is even more drastic for my taekwondo class: it's a 45 minute bike ride from home and about a 15 minute bike ride from the campus where I have most of my classes. Even if I finish class at 3:30 pm and taekwondo is at 7 pm, it still makes more sense for me to stay on campus for the interim–if I do, there's nearly 100% likelihood that I'll make it to taekwondo, but if I go home and get comfy, that drops to less than 50%.

Comment author: RomeoStevens 07 November 2012 10:58:58PM 6 points

For me this was the biggest insight that dramatically improved my ability to form habits. I don't actually decide things most of the time. Agency is something that only occurs intermittently. Therefore I use my agency on changing what sorts of things I am surrounded by rather than on the tasks themselves. This works because the default state is to simply be the average of what I am surrounded by.

Cliche example: not having junk food in the house improves my diet by making it take additional work to go out and get it.

Comment author: incariol 11 November 2012 12:55:37AM 1 point

Another example: as I don't feel like getting in a relationship for the foreseeable future, I try to avoid circumstances with lots of pretty girls around, e.g. not going to certain parties, taking walks in those parts of the forest where I don't expect to meet any, and in general, trying to convince other parts of my brain that the only girl I could possibly be with exists somewhere in the distant future or not at all (if she can't do a spell or two and talk to dragons, she won't do ;-)).

It also helps being focused on math, programming and abstract philosophy - and spending time on LW, it seems. :)

Comment author: army1987 15 November 2012 05:24:17PM 5 points

I don't think you'd be likely to find yourself in a relationship despite not wanting to by going to parties with lots of pretty girls around, let alone by walking on a street where girls also walk rather than through a forest. And not developing social skills may make things much harder should you ever decide to try and get into a relationship later in your life.

Comment author: DaFranker 15 November 2012 05:33:38PM 0 points

Aha, but the clever arguer could respond that you could be likely to find yourself wanting to despite not wanting to want to be in a relationship, and thus that avoidance is a twice-effective method of willpower conservation!

Of course, that the above be true and applicable to this case is unlikely. If you're to end up wanting it, and that you'll end up wanting it enough to compensate for the opportunity costs regarding other things you might want incurred by eventual willpower expenses or time spent "succumbing" and attempting to get into a relationship, then I think it trivially follows that you should already have updated towards the more reflectively coherent behavior that seems to give higher expected utility. After all, we want to win.

Comment author: apotheon 15 November 2012 05:54:33PM 1 point

It's the "Lead me not into temptation, but deliver me from weevils!" tactic. Well . . . maybe not weevils, but not evil either, in this case.

Your objection to the ultimate utility of avoidance doesn't seem to take the desire to avoid distraction and wasted time even when successfully resisting the biological urges toward relationship-establishing behavior into account. Even if you (for some nonspecific definition of "you") simply find yourself waylaid for a few minutes by a pretty girl, but ultimately ready to move on, the time spent not only in those few moments but also in thinking about it later on may prove a distraction from other things, regardless of whether you allow yourself to get caught up enough to actively pursue a relationship with her.

Comment author: DaFranker 15 November 2012 06:13:28PM 0 points

Well, yeah, my objection does take it into account, but I was being unfair in my implicit assumptions because I didn't think it likely that anyone here would object.

If you're to end up wanting it, and that you'll end up wanting it enough to compensate for the opportunity costs regarding other things (...)

Basically, this is where I lumped an implicit: "For most humans, the desire and expected benefits of successfully entering a relationship are much greater in terms of evolved values than the opportunity costs incurred, and it is reasonable to expect that the gains obtained from this would free up enough mental resources to actually make faster, rather than slower, progress on other goals of interest in the case of well-motivated individuals with above-average instrumental rationality."

However, estimating the costs you mentioned for humans-on-average is difficult for me, due to lack of data. Picture me as wearing a "typical mind fallacy warning!" badge on this particular issue.

Comment author: incariol 16 November 2012 12:35:38PM 0 points

Well, it has happened to me before - girls really can be pretty insistent. :) But this is not actually what concerns me - it's the distraction/wasted time induced by pretty-girl-contact events, as apotheon explained.

Comment author: inblankets 24 February 2013 04:26:33PM 3 points

I disagree with the commenters below-- I think you're fairly likely to find yourself wanting to be in a relationship if you're not careful. I'm a female, and I don't want to get married or have kids. Unfortunately, I'm 24, and some part of me/the body is really trying to marry me off and give me baybehs. So I try not to take in too much media that normalizes this vs. normalizing my goals, I don't babysit, and I am open about my intent so as not to attract invitations.

Comment author: aelephant 09 November 2012 11:31:53PM 1 point

Set Future You up for success, rather than failure.

Edit: Thought of a personal example. I know that if I scratch my head, my head will become more itchy. It is a vicious cycle. If I cut my nails short, it seems to help. In the moment, I might not want to cut my nails because there is no immediate value. But it is, in a sense, "modifying my environment" so that in the future I'll be less likely to fall into the itchy-head trap.

Comment author: JenniferRM 07 November 2012 05:14:03PM 14 points

Awesome list. I'm interested in the way there are 24 questions that are grouped into 6 overarching categories. Do they empirically cluster like this in actual humans? It would be fascinating to get a few hundred responses to each question and do dimensional analysis to see if there is a small number of common core issues that can be communicated and/or adjusted more efficiently :-)

Comment author: Shmidley 08 November 2012 08:00:12PM 13 points

I'd like to add "noticing when you don't know something." When someone asks you a question, it's surprisingly tempting to try to be helpful and offer them an answer even when you don't have the necessary knowledge to provide an accurate answer. It can be easy to infer what the truth might be and offer that as an answer, without explaining that you're just guessing and don't actually know. (Example: I recently purchased a new television and my co-worker asked me what sort of Parental Controls it offered. I immediately started providing him an answer I had inferred from limited knowledge, and it took me a moment to realize I didn't actually know what I was talking about and instead tell him, "I don't know.")

This is essentially the problem of confabulation mentioned here; in this case it's a confabulation of knowledge about the world, as opposed to confabulating knowledge about the self. In terms of the map/territory analogy, this would be a situation where someone asks you a question about a specific area of your map, and you choose to answer as if that section of your map is perfectly clear to you, even when you know that it's blurry. Don't treat a blurry map as if it were clear!

Comment author: aelephant 09 November 2012 11:21:48PM 3 points

Good one. I try to be very conservative with my language & preface everything I say with something that implies an amount of uncertainty.

There might be cultural differences. In China people will give you directions on the street even if they have no idea. I have yet to have someone reply to a request for help with "I don't know".

It seems like an Ego protection thing to me & it isn't helpful.

Comment author: John_Maxwell_IV 13 November 2012 06:21:18AM 2 points

I like your comment, but one problem is that telling people you don't know stuff projects low status. I think most people, including me, really know very little, but if you're honest about this all the time then this can contribute to persistent low status. (I tried the "don't care about status" thing for a while, but being near the bottom of the social totem pole just doesn't seem to work for me psychologically. So lately I've decided to optimize for status everywhere at least somewhat.)

Comment author: army1987 13 November 2012 01:15:15PM 2 points

I like your comment, but one problem is that telling people you don't know stuff projects low status.

That only happens if it's credible, otherwise it's taken as counter-signalling. When I say I don't know much about something, people generally realize I'm just holding myself to a high standard and don't genuinely believe I know less than the typical person; the problem is that they also think that when I actually don't know shit about something (in the sense the typical person would use that phrase). Conversely, showing off knowledge can come across as arrogant in certain situations.

I tried the "don't care about status" thing for a while

Even if you don't care about status, I'd say that what X (e.g. “I don't know”) actually means in English is what English speakers actually mean when they say X, regardless of etymology (huh, it sounds tautological when put this way, doesn't it?), and if you're aware of this and use X to mean something else you're lying (unless your interlocutor knows you mean something else).

Comment author: handoflixue 14 November 2012 01:09:02AM 0 points

"telling people you don't know stuff projects low status"

If it's a random stranger, I don't care about status. If it's a friend or a fellow "geek", it's probably a high status signal to send. That pretty much leaves work as the only area I'd potentially run into this, and I've found "I don't know; but I can find out!" works wonders (part of this is that at work, I'm presumably expected to actually know these things)

I've found "I don't know, but isn't it fun to find out!" is a fairly successful tactic, but I'm also deliberately aiming to attract geeks and people who like that answer in my life :)

Comment author: army1987 14 November 2012 02:08:35PM 4 points

“A physicist is someone who answers all questions with ‘I don't know, but I can find out.’” -- Someone (possibly Nicola Cabibbo, quoting from my memory)

Comment author: wedrifid 14 November 2012 01:22:38AM 3 points

If it's a friend or a fellow "geek", it's probably a high status signal to send.

Rarely. It is often a useful signal to send but seldom high status.

Comment author: handoflixue 14 November 2012 09:29:22PM 2 points

I don't really understand the reply. Are you saying it's rarely high status even within my social circles? Or are you saying that my social circles are unusual? To the former, all I can say is that we apparently have very different experiences. To the latter... well, duh, that's WHY I specified that it was specific to THOSE groups...

Comment author: wedrifid 14 November 2012 10:15:04PM 2 points

Are you saying it's rarely high status even within my social circles?

I am saying that is more likely that you are inflating the phrase "high status" to include things that are somewhat low status but overall socially rewarding than that your subculture is stretched quite that far in that (unsustainable) direction.

Comment author: handoflixue 14 November 2012 10:41:54PM 0 points

How would "I don't know" being high status be unsustainable?

For that matter, what distinction are you drawing between high status and socially rewarding?

Comment author: wedrifid 15 November 2012 12:16:32AM 4 points

For that matter, what distinction are you drawing between high status and socially rewarding?

Yes, "high status" being the inflated does seem to be the crux of the matter.

Socially rewarding behaviors that, ceteris paribus, are low status:

  • Saying "please" or "thankyou".
  • Listening to what someone is saying. Even more if you deign to comprehend and accept their point.
  • Saluting.
  • Doing what someone asks.
  • Using careful expression to ensure you don't offend people.
Comment author: handoflixue 16 November 2012 07:33:17PM 0 points

My general experience has been that "I don't know, but I'll find out", said to someone currently equal or lower status than me, clearly but mildly correlates with most of the low status behavior you mentioned. I'm not as sure how it affects people higher status than me, since I don't have as many of those relationships / data points.

So I continue my assertion that, yes, it's high status, not merely socially rewarding. I still suspect this is a weird and unusual set of experiences, and probably has to do with how I position "I don't know" relative to others.

Comment author: DaFranker 14 November 2012 10:05:39PM 0 points

In some circles, perceived signal usefulness is a causal factor towards the signal's status-level.

To unbox the above: In some groups I've been with, sending compressed signals that everyone in the group understands is a high-status signal, regardless of whether it's a "low-status" or "high-status" signal in other environments.

"Hey, I have an idea but I'm not quite sure how to go about putting it in practice" is a very low status signal in meatspace for all meatspaces I've been in except one, but a very high status signal in e.g. certain online hacking communities.

Likewise for the case at hand, there are places where "I don't know" can even be the highest status signal. For the most memorable example, I've once visited a church where the people at the top were answering "I don't know" to the most questions, signaling their closeness to divinity implicitly, while the "simpletons" at the bottom of the ladder had an opinion on everything, and thus would never "not know".

Comment author: CAE_Jones 14 November 2012 01:27:56AM 1 point

I've had people tell me to taboo "I don't know" because I use it so much. These being fairly average or slightly above average people who are annoyed that I don't have a strong opinion about things like "what do you want to eat tonight?" Some have made jokes about putting "I don't know" on my tombstone. Assuming that I die and am later resurrected and discover this was actually done, I will be most displeased.

Comment author: handoflixue 14 November 2012 09:30:45PM 1 point [-]

I usually interpret that context as "I don't have a preference", which I would readily agree is useful to taboo. If you genuinely don't know what you want (despite having an apparent hidden but strong preference) then ... that's a new one on me ^^;

Comment author: TheAncientGeek 19 March 2014 08:34:27PM 0 points [-]

Toss a mental coin and pretend to enthuse about the result?

Comment author: btoblake 19 March 2014 06:46:47PM 0 points [-]

Before declining to offer an opinion, it's worth considering whether you'd benefit from the decision being made. (For instance, you could get a prompt dinner.) If so, why not offer a little help? Decision making can be tiring work, and any input can make it easier.

You could:
- mention any limiting factors (e.g. "I have $20" or "1 hour")
- mention options that are convenient
- offer support to the person who makes the decision (particularly if you can avoid critiquing their choice).

Comment author: lucidian 07 November 2012 05:21:44PM 29 points [-]

This may be the single most useful thing I've ever read on LessWrong. Thank you very, very much for posting it.

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

Often, when I am procrastinating, I find that the source of my procrastination is a feeling of being overwhelmed. In particular, I don't know where to begin on a task, or I do but the task feels like a huge obstacle towering over me. So when I think about the task, I feel a crushing sense of being overwhelmed; the way I escape this feeling is by procrastination (i.e. avoiding the source of the feeling altogether).

When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)

I picked this strategy up after realizing that the way I approached large programming projects (write the main function, then write each of the subroutines that it calls, etc.) could be applied to life in general. Now I'm about to apply it to the task of writing an NSF fellowship application. =)
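The top-down approach described above can be sketched in a few lines (a toy illustration; the task tree and its entries are hypothetical):

```python
def breakdown(task, subtasks):
    """Flatten a task into concrete next actions by expanding
    any subtask that itself has an entry in the tree."""
    steps = []
    for sub in subtasks.get(task, []):
        if sub in subtasks:          # still intimidating? recurse
            steps.extend(breakdown(sub, subtasks))
        else:
            steps.append(sub)
    return steps

# Hypothetical task tree for an application
tree = {
    "application": ["research statement", "personal statement"],
    "research statement": ["list projects", "pick one", "outline it"],
}

print(breakdown("application", tree))
# ['list projects', 'pick one', 'outline it', 'personal statement']
```

Any entry that still feels overwhelming just gets its own entry in the tree, and the recursion flattens everything into a single to-do list.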

Comment author: gwern 07 November 2012 07:11:12PM 13 points [-]

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

It's a classic self-help technique (especially in 'Getting Things Done') for a reason: it works.

Comment author: amcknight 08 November 2012 10:43:19PM 4 points [-]

For the slightly more advanced procrastinator that also finds a large sequence of tasks daunting, it might help to instead search for the first few tasks and then ignore the rest for now. Of course, sometimes in order to find the first tasks you may need to break down the whole task, but other times you don't.

Comment author: jooyous 08 November 2012 03:37:23AM *  4 points [-]

Hello! I am procrastinating on writing the NSF fellowship! High five!

My current subproblem consists of filling in all the instances of "INSPIRATIONAL STUFF" with actual inspirational stuff, so this particular subproblem is looking pretty difficult. :(

Comment author: JulianMorrison 09 November 2012 02:54:16AM 4 points [-]

Well your task spec is broken, so no wonder your brain won't be whipped into doing it.

"inspirational stuff" is a trigger for thinking in terms of things like advertising or religious revivals that are emotional grabs which are intended to disengage (or even flimflam) the reasoning faculties. Any rationalist would flinch away.

Re-frame: visualize your audience. You are looking to simply and clearly convey whatever part of their far mode utility function is advanced by the thing you are pushing.

Comment author: Swimmer963 09 November 2012 03:27:02AM 1 point [-]

When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)

I think the most important aspect of this, for me anyway, is being able to dump most of what you're working on out of your working memory, trusting yourself that it's organized on paper, so that you can free up more brain space to do each of the sub-parts.

Comment author: lukeprog 19 November 2012 07:30:18AM 0 points [-]
Comment author: sketerpot 17 November 2012 12:43:27AM 0 points [-]

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

This article would probably benefit from being re-read in smaller chunks over the course of several days. There are a lot of things in it that need to be thought about seriously in order to be effective, and I agree with you about its usefulness.

Comment author: roland 07 November 2012 04:30:25PM 7 points [-]

Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had

The idea that willpower or thinking depletes brain glucose has been debunked:

http://www.psychologytoday.com/blog/ulterior-motives/201211/is-willpower-energy-or-motivation http://lesswrong.com/r/discussion/lw/ej7/link_motivational_versus_metabolic_effects_of/

Comment author: gwern 07 November 2012 05:22:23PM 21 points [-]

But nevertheless, the suggestion of sweets will still work per your own links. A nice example of how revised theories remain consistent with old observations...

Comment author: John_Maxwell_IV 13 November 2012 06:22:53AM *  0 points [-]

Supposedly gargling sugary lemonade works: http://www.forbes.com/sites/daviddisalvo/2012/11/08/need-a-self-control-boost-gargle-with-sugar-water/

Edit: sorry, this is redundant w/ roland's links.

Comment author: aelephant 09 November 2012 11:27:46PM 0 points [-]

I missed this somehow. Thanks for posting the links.

Comment author: SPLH 08 November 2012 09:02:40AM *  11 points [-]

The example about stacks in 1.2 has a certain irony in context. This requires a small mathematical parenthesis:

A stack is a certain sophisticated type of geometric structure which is increasingly used in algebraic geometry and algebraic topology (and is spreading to some corners of differential geometry) to make sense of geometric intuitions and notions about "spaces" which occur "naturally" but fall squarely outside the traditional geometric categories (like manifolds, schemes, etc.).

See www.ams.org/notices/200304/what-is.pdf for a very short introduction focusing on the basic example of the moduli of elliptic curves.

The upshot of this vague outlook is that in the relevant fields, everything of interest is a stack (or a more exotic beast like a derived stack), precisely because the notion has been designed to be as general and flexible as possible! So asking someone working on stacks for a good example of something which is not a stack is bound to create a short moment of confusion.

Even if you do not care for stacks (and I wouldn't hold it against you), if you are interested in open source/Internet-based scientific projects, it is worth having a look at the web page of the Stacks project (http://stacks.math.columbia.edu/), a collaborative fully hyperlinked textbook on the topic, which is steadily growing towards the 3500-page mark.

Comment author: Qiaochu_Yuan 04 February 2013 07:42:25AM 4 points [-]

I put the checklist into an Anki deck a week or two ago that I've been reviewing (as cloze deletions). Subjectively it seems to have helped the relevant concepts come more readily to mind, although that could just be the CFAR workshop (though we didn't talk about the checklist then and some of the ideas in the checklist, like social commitment mechanisms, weren't otherwise explicitly mentioned).

Comment author: Pablo_Stafforini 26 April 2013 03:02:16PM 1 point [-]

Would you mind sharing this deck? It would be a nice addition to Anki decks by LW users.

Comment author: Qiaochu_Yuan 27 April 2013 08:00:40PM 0 points [-]

I admit I'm not entirely sure how to share a deck.

Comment author: Pablo_Stafforini 28 April 2013 01:36:09AM 2 points [-]

Ah, you are not the first! This comment by tgb taught me how to do it. (I'm assuming you are using Anki 2.)

Comment author: Qiaochu_Yuan 28 April 2013 05:06:49AM 2 points [-]
Comment author: Pablo_Stafforini 28 April 2013 02:11:13PM 0 points [-]

Thanks. The deck is now listed.

Comment author: JoshuaFox 07 November 2012 08:15:25AM *  10 points [-]

he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))

[Edit] But his utility function would predictably change under those circumstances.

I know that I have a status quo bias, hedonic treadmill, and strongly decreasing marginal utility of money (particularly when progressive taxation is factored in).

If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now (roughly the factor described in the OP), I'd also be pretty much as happy as I am now, and want more money.

The logical conclusion is that we should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.

Comment author: JoshuaFox 08 November 2012 08:26:20AM *  5 points [-]

I realized that what bothers me is the neglect of utility-function differences in the counterfactual world.

Should you start using heroin? Let's try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing your decision. If you were a heroin addict, and had lost everything, and heroin were your only friend and consolation, would you want to stop? Maybe not. So go ahead, shoot up.

If, despite your deep desire to go into classical music as a career (which in real life you did, to your great satisfaction), you had followed the money into the financial sector, and after years of 80-hour weeks, had sunk into cynicism and no longer cared for anything but making more money to support your extravagant spending habits, would you then want to leave the financial industry for a life of music and a modest income? Probably not, so go ahead, follow the money, burn out your soul, and buy yourself a Porsche.

Comment author: handoflixue 14 November 2012 01:51:09AM 1 point [-]

I have trouble believing that in those situations, I'd actually prefer to be that sort of rock-bottom, burnt-out person rather than thinking "I wish I'd made different choices when I was 20, oh foolish foolish me."

Having been in some rather bad situations, I've never once thought "Gosh, this is so much better than if I'd had a successful, high-paying, yet enjoyable career!"

Comment author: Omegaile 08 November 2012 10:40:02AM 0 points [-]

This method of reducing bias only works for rational decisions using your current utility. Otherwise you will be prone to circular decisions like those you describe (decisions that feed themselves).

Comment author: gwern 07 November 2012 07:08:35PM *  7 points [-]

If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now, I'd also be pretty much as happy as I am now, and want more money.

You're burying your argument in the constants 'pretty much' there. You can repeat your argument sorites-style after you have taken the 2/3 salary cut: "Well, if I made 2/3 what I do now, I'd still be 'pretty much as happy' as I am now" and so on and so forth until you have hit sub-poverty wages.

To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7; do you really think if someone handed you a billion dollars and you filled your world-famous days competing with Musk to reach Mars or something insanely awesome like that, you would only be twice as happy as when you were a low-status scrub-monkey making 50k?

(particularly when progressive taxation is factored in).

Here again more work is necessary. One of the chief suggestions of positive psychology is donating more and buying more fuzzies... and guess what is favored by progressive taxation? Donating.

The logical conclusion is that I should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.

Of course there are people who are surely making the mistake of over-valuing salaries; but you're going to need to do more work to show you're one of them.

Comment author: Kawoomba 07 November 2012 09:56:59PM 6 points [-]

[D]o you really think if someone handed you a billion dollars and you filled your world-famous days competing with Musk to reach Mars or something insanely awesome like that, you would only be twice as happy as when you were a low-status scrub-monkey making 50k?

Only twice as?

Adaptation level theory suggests that both contrast and habituation will operate to prevent the winning of a fortune from elevating happiness as much as might be expected. ... As predicted, lottery winners were not happier than controls

It's a well replicated phenomenon.

Comment author: gwern 20 November 2012 10:26:23PM 1 point [-]

Lottery-winners are self-selected for a number of things including innumeracy or foolishness and not having grand projects materially advanced by winnings, and the famous lottery winner examples are for relatively small sums as far as I know - most of the winners in that paper were $400k or less at a time of higher tax rates, with a serious selection issue there as well (less than half of the winners interviewed).

Comment author: Kindly 08 November 2012 04:34:05AM *  15 points [-]

To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7

Comparing these numbers tells you pretty much nothing. First of all, taking log($50k) is not a valid operation; you should only ever take logs of a dimensionless quantity. The standard solution is to pick an arbitrary dollar value $X, and compare log($50k/$X), log($120k/$X), and log($10^9/$X). This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.

This shouldn't be a surprise, because under the standard definition, utility functions are translation-invariant. They are only compared in cases such as "is U1 better than U2?" or "is U1 better than a 50/50 chance of U2 and U3?" The answer to this question doesn't change if we add a constant to U1, U2, and U3.

In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?

It would make sense to say, if your utility for money is logarithmic and you currently have $50k, that you're indifferent between a 100% chance of an extra $70k and a 8.8% chance of an extra $10^9 -- that being the probability for which the expected utilities are the same. If you think logarithmic utilities are bad, this is the claim you should be refuting.
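The 8.8% figure checks out numerically (a quick sketch, taking log to be the natural log; the arbitrary additive constant cancels out of the indifference equation):

```python
import math

wealth = 50_000
sure_gain = 70_000
big_gain = 10**9

# Indifference under expected log utility:
#   p * ln(wealth + big_gain) + (1 - p) * ln(wealth) = ln(wealth + sure_gain)
# Solve for p; every term shifts by the same constant under rescaling,
# so the answer doesn't depend on the choice of units.
p = (math.log(wealth + sure_gain) - math.log(wealth)) / \
    (math.log(wealth + big_gain) - math.log(wealth))
print(round(p, 3))  # 0.088, i.e. the 8.8% in the comment
```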

Comment author: jmmcd 08 November 2012 11:24:02PM 6 points [-]

you should only ever take logs of a dimensionless quantity

Goddammit I have a degree in mathematics and no-one ever told me that and I never figured it out for myself.

I see the beginnings of an explanation here [http://physics.stackexchange.com/questions/7668/fundamental-question-about-dimensional-analysis]. Any pointer to a better explanation?

Comment author: aronwall 17 December 2012 02:19:58AM *  9 points [-]

Taking logs of a dimensionful quantity is possible, if you know what you're doing. (In math, we make up our own rules: no one is allowed to tell us what we can and cannot do. Whether or not it's useful is another question.) Here's the real scoop:

In physics, we only really and truly care about dimensionless quantities. These are the quantities which do not change when we change the system of units, i.e. they are "invariant". Anything which is not invariant is a purely arbitrary human convention, which doesn't really tell me anything about the world. For example, if I want to know if I fit through a door, I'm only interested in the ratio between my height and the height of the door. I don't really care about how the door compares to some standard meter somewhere, except as an intermediate step in some calculation.

Nevertheless, for practical purposes it is convenient to also consider quantities which transform in a particularly simple way under a change of units systems. Borrowing some terminology from general relativity, we can say that a quantity X is "covariant" if it transforms like X --> (unit1 / unit2 )^p X when we change from unit1 to unit2. Here p is a real number which indicates the dimension of the unit. These things aren't invariant under a change of units, so we don't care about them in a fundamental way. But they're extremely useful nevertheless, because you can construct invariant quantities out of covariant ones by multiplying or dividing them in such a way that the units cancel out. (In the concrete example above, this allows us to measure the door and me separately, and wait until later to combine the results.)

Once you're willing to accept numbers which depend on arbitrary human convention, nothing prevents you from taking logs or sines or whatever of these quantities (in the naive way, by just punching the number sans units into your calculator). What you end up with is a number which depends in a particularly complicated way on your system of units. Conceptually, that's not really any worse. But remember, we only care if we can find a way to construct invariant quantities out of them. Practically speaking, our experience as physicists is that quantities like this are rarely useful.

But there may be exceptions. And logs aren't really that bad, since as Kindly points out, you can still extract invariant quantities by adding them together. As a working physicist I've done calculations where it was useful to think about logs of dimensionful quantities (keywords: "entanglement entropy", "conformal field theory"). Sines are a lot worse since they aren't even monotonic functions: I can't imagine any application where taking the sine of a dimensionful quantity would be useful.

Comment author: Eliezer_Yudkowsky 17 December 2012 06:00:53AM 1 point [-]

I think it'd be obvious how to take the log of a dimensional quantity.

e^(log apple) = apple

Comment author: Qiaochu_Yuan 17 December 2012 06:46:56AM *  7 points [-]

Right, but then log (2 apple) = log 2 + log apple and so forth. This is a perfectly sensible way to think about things as long as you (not you specifically, but the general you) remember that "log apple" transforms additively instead of multiplicatively under a change of coordinates.
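The additive transformation law is easy to see numerically (a small sketch; metres versus centimetres stand in for any change of units):

```python
import math

# The same physical length measured in two unit systems:
x_m = 2.0            # metres
x_cm = 100.0 * x_m   # centimetres

# The logs differ by a constant, log(100), independent of x --
# the "transforms additively" behaviour described in the comment:
shift = math.log(x_cm) - math.log(x_m)
print(math.isclose(shift, math.log(100)))  # True

# Which is why *differences* of logs are unit-invariant:
y_m, y_cm = 5.0, 500.0
print(math.isclose(math.log(y_m) - math.log(x_m),
                   math.log(y_cm) - math.log(x_cm)))  # True
```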

Comment author: MagnetoHydroDynamics 17 December 2012 01:00:31PM 0 points [-]

Isn't the argument to a sine by default a quantity of angle, that is Radians in SI? (I know radians are epiphenomenal/w/e, but still)

Comment author: RichardKennaway 17 December 2012 12:01:51PM 0 points [-]

I can't imagine any application where taking the sine of a dimensionful quantity would be useful.

Machine learning methods will go right ahead and apply whatever collection of functions they're given in whatever way works to get empirically accurate predictions from the data. E.g. add the patient's temperature to their pulse rate and divide by the cotangent of their age in decades, or whatever.

So it can certainly be useful. Whether it is meaningful is another matter, and touches on this conundrum again. What and whence is "understanding" in an AGI?

Eliezer wrote somewhere about hypothetically being able to deduce special relativity from seeing an apple fall. What sort of mechanism could do that? Where might it get the idea that adding temperature to pulse may be useful for making empirical predictions, but useless for "understanding what is happening", and what does that quoted phrase mean, in terms that one could program into an AGI?

Comment author: shminux 17 December 2012 08:15:07AM 0 points [-]

"units are a useful error-checking homomorphism"

Comment author: Qiaochu_Yuan 17 December 2012 09:42:51AM *  2 points [-]

I don't think "homomorphism" is quite the right word here. Keeping track of units means keeping track of various scaling actions on the things you're interested in; in other words, it means keeping track of certain symmetries. The reason you can use this for error-checking is that if two things are equal, then any relevant symmetries have to act on them in the same way. But the units themselves aren't a homomorphism, they're just a shorthand to indicate that you're working with things that transform in some nontrivial way under some symmetry.

Comment author: shminux 17 December 2012 05:39:15PM *  0 points [-]

I don't think "homomorphism" is quite the right word here.

The map from dimensional quantities to units is structure-preserving, so yes, it is a homomorphism between something like rings. For example, all distances in SI are mapped into the element "meter", and all time intervals into the element "second". Addition and subtraction are trivial under the map (e.g. m+m=m), and so is multiplication by a dimensionless quantity, while multiplication and division by a dimensional quantity generates new elements (e.g. meter per second).

Converting between different measurement systems (e.g. SI and CGS) adds various scale factors, thus enlarging the codomain of the map.
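That map can be sketched as a tiny dimension-tracking class (illustrative only; real dimensional-analysis libraries are far more complete):

```python
class Dim:
    """Map a quantity to its units, forgetting magnitude -- the
    structure-preserving picture above. Multiplying quantities adds
    unit exponents; adding quantities requires identical units,
    which is where the error-checking comes from."""
    def __init__(self, **exps):
        self.exps = {u: e for u, e in exps.items() if e}  # drop zero exponents
    def __mul__(self, other):
        merged = dict(self.exps)
        for u, e in other.exps.items():
            merged[u] = merged.get(u, 0) + e
        return Dim(**merged)
    def __add__(self, other):  # m + m = m; m + s is an error
        if self.exps != other.exps:
            raise TypeError("cannot add quantities with different units")
        return self
    def __eq__(self, other):
        return self.exps == other.exps

metre, second = Dim(m=1), Dim(s=1)
speed = metre * Dim(s=-1)        # metre per second
print(metre + metre == metre)    # True: m + m = m, as in the comment
print(speed * second == metre)   # True: the exponents cancel
```

Trying `metre + second` raises a TypeError, which is exactly the error-checking the homomorphism provides.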

Comment author: Kindly 09 November 2012 12:07:02AM 3 points [-]

I don't know of any good explanations; this seems relevant but requires a subscription to access. Unfortunately, no-one's ever explained this to me either, so I've had to figure it out by myself.

What I'd add to the discussion you linked to is that in actual practice, logarithms appear in equations with units in them when you solve differential equations, and ultimately when you take integrals. In the simplest case, when we're integrating 1/x, x can have any units whatsoever. However, if you have bounds A and B, you'll get log(B) - log(A), which can be rewritten as log(B/A). There's no way A and B can have different units, so B/A will be dimensionless.

Of course, often people are sloppy and will just keep doing things with log(B) and log(A), even though these don't make sense by themselves. This is perfectly all right because the logs will have to cancel eventually. In fact, at this point, it's even okay to drop the units on A and B, because log(10 ft) - log(5 ft) and log(10 m) - log(5 m) represent the same quantity.

Comment author: satt 10 November 2012 02:41:05PM *  0 points [-]

I don't know of any good explanations; this seems relevant but requires a subscription to access.

Most of that paper is the authors rebutting what other people have said about the issue, but there are two bits that try to explain why one can't take logs of dimensional things.

Page 68 notes that y = log_b(x) means b^y = x, which "precludes the association of any physical dimension to any of the three variables b, x, and y".

And on pages 69-70:

The reason for the necessity of including only dimensionless real numbers in the arguments of transcendental function is not due to the [alleged] dimensional nonhomogeneity of the Taylor expansion, but rather to the lack of physical meaning of including dimensions and units in the arguments of these function. This distinction must be clearly made to students of physical sciences early in their undergraduate education.

That second snippet is too vague for me. But I'm still thinking about the first one.

[Edited to fix the LaTeX.]

Comment author: KnaveOfAllTrades 13 November 2012 03:02:34AM *  0 points [-]

The (say) real sine function is defined such that its domain and codomain are (subsets of) the reals. The reals are usually characterized as the complete ordered field. I have never come across units that--taken alone--satisfy the axioms of a complete ordered field, and having several units introduces problems such as how we would impose a meaningful order. So a sine function over unit-ed quantities is sufficiently non-obvious as to require a clarification of what would be meant by sin($1).

For example--switching over now to logarithms--if we treat $1 as the real multiplicative identity (i.e. the real number, unity) unit-multiplied by the unit $, and extrapolate one of the fundamental properties of logarithms--that log(ab)=loga+logb, we find that log($1)=log($)+log(1)=log($) (assuming we keep that log(1)=0). How are we to interpret log($)? Moreover, log($^2)=2log($). So if I log the square of a dollar, I obtain twice the log of a dollar. How are we to interpret this in the above context of utility?

Or an example from trigonometric functions: One characterization of the cosine and sine stipulates that cos^2+sin^2=1, so we would have that cos^2($1)+sin^2($1)=1. If this is the real unity, does this mean that the cosine function on dollars outputs a real number? Or if the RHS is $1, does this mean that the cosine function on dollars outputs a dollar^(1/2) value? Then consider that double, triple, etc. angles in the standard cosine function can be written as polynomials in the single-angle cosine. How would this translate?

So this is a case where the 'burden of meaningfulness' lies with proposing a meaningful interpretation (which now seems rather difficult), even though at first it seems obvious that there is a single reasonable way forward. The context of the functions needs to be considered; the sine function originated with plane geometry and was extended to the reals and then the complex numbers. Each of these was motivated by an (analytic) continuation into a bigger 'domain' that fit perfectly with existing understanding of that bigger domain; this doesn't seem to be the case here.

Comment author: army1987 13 November 2012 08:40:24AM *  1 point [-]

How are we to interpret [the logarithm of one dollar] in the above context of utility? 

You pick an arbitrary constant A of dimension "amount of money", and use log(x/A) as a utility function. Changing A amounts to adding a constant to the utility (and changing the base of the logarithms amounts to multiplying it by a constant), which doesn't affect expected utility maximization. EDIT: And once it's clear that the choice of A is immaterial, you can abuse notation and just write “log(x)”, as Kindly says.
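That the choice of A is immaterial for decisions can be checked directly (a sketch; the lottery amounts are arbitrary illustrations):

```python
import math

def expected_log_utility(lottery, A):
    """Expected utility with u(x) = log(x/A), where the lottery is a
    list of (probability, wealth) pairs and A is the arbitrary
    reference amount described above."""
    return sum(p * math.log(x / A) for p, x in lottery)

sure_thing = [(1.0, 120_000)]
gamble = [(0.5, 50_000), (0.5, 1_000_050_000)]

# The *ranking* of the two options is the same for any choice of A,
# since changing A just shifts both expected utilities by -log(A):
for A in (1, 1000, 123_456):
    print(expected_log_utility(gamble, A) > expected_log_utility(sure_thing, A))
```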

Comment author: shminux 08 November 2012 11:57:11PM *  2 points [-]

You can only add, subtract and compare like quantities, but log(50000 * 1 dollar) = log(50000) + log(1 dollar), which is a meaningless expression. What's the logarithm of a dollar?

Comment author: Thomas 10 November 2012 03:53:08PM -1 points [-]

What's the logarithm of a dollar?

What do you need to "exponate" to get a dollar?

That, whatever that might be, is the logarithm of a dollar.

Comment author: army1987 10 November 2012 03:42:37PM 0 points [-]

What's the logarithm of a dollar?

An arbitrary additive constant. See the last paragraph of Kindly's comment.

Comment author: jmmcd 09 November 2012 08:25:48AM -2 points [-]

Well, we could choose to factorise it as log(50000 dollars) = log(50000 dollar^0.5 * 1 dollar^0.5) = log(50000 dollar^0.5) + log(1 dollar^0.5). That does keep the units of the addition operands the same. Now we only have to figure out what the log of a root-dollar is...

the logarithm of a dollar

It's really just the same question again -- why can't I write log(1 dollar) = 0 (or maybe 0 dollar^0.5), the same as I would write log(1) = 0.

Comment author: satt 10 November 2012 01:17:06PM 0 points [-]

It's really just the same question again -- why can't I write log(1 dollar) = 0 (or maybe 0 dollar^0.5), the same as I would write log(1) = 0.

$1 = 100¢. Now try logging both sides by stripping off the currency units first!
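In code (a sketch): naively stripping the currency unit before taking the log gives a number that depends on which unit you stripped, while ratios of like quantities do not.

```python
import math

# $1 and 100 cents are the same quantity, but "log of the bare number"
# disagrees depending on the unit chosen:
log_in_dollars = math.log(1)    # 0.0
log_in_cents = math.log(100)    # about 4.605

print(log_in_dollars == log_in_cents)  # False: "log($1)" is unit-dependent

# Ratios of like quantities are safe, because the unit cancels first:
# $2 / $1 and 200 cents / 100 cents are the same dimensionless number.
print(math.isclose(math.log(200 / 100), math.log(2 / 1)))  # True
```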

Comment author: gwern 20 November 2012 09:41:48PM 1 point [-]

This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.

This is what I did, without the pedantry of the C.

In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?

I don't follow at all. How can utilities not be comparable in terms of multiplication? This falls out pretty much exactly from your classic cardinal utility function! You seem to be assuming ordinal utilities but I don't see why you would talk about something I did not draw on nor would accept.

Comment author: Kindly 20 November 2012 10:34:29PM 1 point [-]

This is what I did, without the pedantry of the C.

The point is that because the constant is there, saying that utility grows logarithmically in money underspecifies the actual function. By ignoring C, you are implicitly using $1 as a point of comparison.

A generous interpretation of your claim would be to say that to someone who currently only has $1, having a billion dollars is twice as good as having $50000 -- in the sense, for example, that a 50% chance of the former is just as good as a 100% chance of the latter. This doesn't seem outright implausible (having $50000 means you jump from "starving in the street" to "being more financially secure than I currently am", which solves a lot of the problems that the $1 person has). However, it's also irrelevant to someone who is guaranteed $50000 in all outcomes under consideration.

Comment author: gwern 20 November 2012 11:00:14PM 1 point [-]

However, it's also irrelevant to someone who is guaranteed $50000 in all outcomes under consideration.

Then how do you suggest the person under discussion evaluate their working patterns if log utilities are only useful for expected values?

Comment author: Kindly 20 November 2012 11:14:11PM 3 points [-]

By comparing changes in utility as opposed to absolute values.

To the person with $50000, a change to $70000 would have a log utility of 0.336, and a change to $1 billion would have a log utility of 9.903. A change to $1 would have a log utility of -10.819.
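A quick check of these figures (a sketch, with log taken to be the natural log):

```python
import math

wealth = 50_000

# Utility *changes* under u = ln(money); differences are well-defined
# because the arbitrary additive constant cancels.
to_70k = math.log(70_000) - math.log(wealth)
to_1bn = math.log(10**9) - math.log(wealth)
to_1 = math.log(1) - math.log(wealth)

print(round(to_70k, 3))  # 0.336
print(round(to_1bn, 3))  # 9.903
print(round(to_1, 2))    # -10.82
```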

Comment author: gwern 20 November 2012 11:33:43PM 1 point [-]

I see, thanks.

Comment author: The_Duck 20 November 2012 10:19:36PM *  1 point [-]

How can utilities not be comparable in terms of multiplication?

"The utility of A is twice the utility of B" is not a statement that remains true if we add the same constant to both utilities, so it's not an obviously meaningful statement. We can make the ratio come out however we want by performing an overall shift of the utility function. The fact that we think of utilities as cardinal numbers doesn't mean we assign any meaning to ratios of utilities. But it seemed that you were trying to say that a person with a logarithmic utility function assesses $10^9 as having twice the utility of $50k.

Comment author: gwern 20 November 2012 11:05:53PM 0 points [-]

The fact that we think of utilities as cardinal numbers doesn't mean we assign any meaning to ratios of utilities.

Kindly says the ratios do have relevance to considering bets or risks.

But it seemed that you were trying to say that a person with a logarithmic utility function assesses $10^9 as having twice the utility of $50k.

Yes, I think I see my error now, but I think the force of the numbers is clear: log utility in money may be more extreme than most people would intuitively expect.

Comment author: army1987 08 November 2012 05:12:50PM 1 point [-]

In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?

This is what I immediately thought when I first read about the Repugnant Conclusion on Wikipedia, years ago before having ever heard of the VNM axioms or anything like that.

Comment author: army1987 08 November 2012 08:59:56AM 3 points [-]

 One of the chief suggestions of positive psychology is donating more and buying more fuzzies... and guess what is favored by progressive taxation? Donating.

You don't get to decide where most of your tax money goes, which I guess means that for a large fraction of people taxes don't count as fuzzy-buying donations.

Comment author: scav 08 November 2012 06:24:45PM 0 points [-]

Which is a failure mode of most people's thinking about taxes. Most of your tax money goes to boring things you don't want to concern yourself with and which you don't have any expertise in, such that you deciding exactly where the money went would be disastrous. Someone with the required expertise is doing their best to make sure the limited available money is spent carefully on those things, in most cases.

I like to think that in general, taxes are my subscription fee for living in a civilisation rather than a feudal plutocracy.

There are some specific things my taxes are spent on that I actively resent, but the response to that is to oppose those specific things, and I accept democracy and debate as the means to (slowly and unreliably) improve the situation.

Comment author: army1987 09 November 2012 01:17:01PM *  4 points [-]

I think of taxes as a “subscription fee for living in a civilisation”, too, but I think you're overestimating how useful what most of the tax money is spent on is to most of the population and underestimating the extent to which present-day First World countries are plutocracies.

Comment author: scav 09 November 2012 10:14:49PM 0 points [-]

Well, neither of us have quantified our estimates for the usefulness of government spending, or broken it down by sector or demographics. So, how much am I overestimating it, and in what specific ways? :)

I live in Scotland. I consider it to be a civilised country mostly. It has good free education and health care, and businesses are regulated as to employment law, health and safety, and environmental impact. I don't claim more expertise in how all that gets arranged than the people who arrange it, and I would be sceptical if you did, without seeing evidence.

The civilisation of the USA has some existential risk for feudal plutocracy, but I think it narrowly avoided one of the risk factors this week and I hold out some hope for steady improvement if it can stop shitting its pants over imaginary terrorist threats and start taking human rights seriously again. But even if I'm wrong about that, I never said that taxes were sufficient to prevent social breakdown. Just necessary.

Comment author: army1987 10 November 2012 11:25:34AM *  -1 points [-]

I don't claim more expertise in how all that gets arranged than the people who arrange it, and I would be sceptical if you did, without seeing evidence.

I'm not questioning their expertise, I'm questioning their goals. I usually try to apply Hanlon's razor to single individuals, but I'm reluctant to apply it to entire governments. I'm pretty sure that spending an amount on defence comparable to (or, in certain countries, even greater than) what is spent on research has a point, I just don't think that point is to benefit most of the population.

The civilisation of the USA has some existential risk for feudal plutocracy, but I think it narrowly avoided one of the risk factors this week

In terms of what he's actually done, as opposed to what he says, Obama's economic policy isn't that different to Republicans'. Or do “issues like peace, immigration, gay and women's rights, prayers in school”¹ (to quote the article linked) suffice to make a government not count as a plutocracy?

Anyway, how much have you heard about lobbying, associations such as the Bilderberg Group or the Trilateral Commission, etc.? (Unfortunately, the people who talk about those things also tend to spew out lots of nonsense about Reptilians and whatnot, but I have my own hypothesis about why they do that.)


  1. When I posted that article on Facebook, the only comment was from a gay friend of mine pointing out that with one president gay rights would go back to the 1800s and with the other they might be allowed to marry.

Comment author: scav 12 November 2012 11:12:09AM 1 point [-]

This is wandering away from the topic a bit. I doubt anyone could make a good case for any of:

  • taxes are inherently harmful and always misspent
  • taxes are always spent wisely
  • there exists any political system under which immensely rich people couldn't wield a lot of political power to try to further enrich themselves.
  • the immensely rich bother to conspire for any other purpose or actually care about politics much beyond what it can get them personally
  • there is literally nothing a democratically elected government can or will do to limit the political power of the immensely rich in any way.

Comment author: MugaSofer 12 November 2012 05:00:26PM 3 points [-]

there exists any political system under which immensely rich people couldn't wield a lot of political power to try to further enrich themselves.

Sure there does. A military dictatorship, for one.

Comment author: scav 12 November 2012 07:10:31PM 1 point [-]

Name one where the dictator and his cronies were not also embezzling the wealth of the country and living it up with their rich buddies. That's what they grab power for.

Even if the guy at the top has ideological principles that forbid such behaviour (rare) and isn't a hypocrite about them (super rare), there is always someone high up in the hierarchy who is in the market for favours, and due to the nature of a dictatorial hierarchy, essentially untouchable.

Comment author: FAWS 12 November 2012 05:47:35PM 1 point [-]

Do you have an example of a military dictatorship where the immensely rich were allowed to keep their wealth, but couldn't use it to exert political influence?

Comment author: army1987 12 November 2012 11:16:55AM *  0 points [-]

I ADBOC with the negation of those statements (provided “there exists” in the third one means “there has existed so far” rather than “there could ever exist in principle”).

Comment author: gwern 08 November 2012 02:06:31PM 0 points [-]

That wasn't what I meant to imply.

Comment author: CarlShulman 08 November 2012 01:29:03AM 0 points [-]

to keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7

Ln $100 is 4.6, at which point it's doubtful that you can survive.
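Those absolute values check out under the natural log; a quick sketch (the dict is just for illustration):

```python
import math

values = {dollars: math.log(dollars) for dollars in (100, 50_000, 120_000, 10**9)}
for dollars, utility in values.items():
    print(f"ln(${dollars:,}) = {utility:.2f}")
# ln($100) = 4.61
# ln($50,000) = 10.82
# ln($120,000) = 11.70
# ln($1,000,000,000) = 20.72
```

The whole range from $100 to a billion dollars spans only about 16 log-units, which is the "limit of the log argument" being pointed at.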

Comment author: gwern 08 November 2012 03:17:40AM 0 points [-]

Ah, but suppose subsistence wages plummeted as in Hanson's em hell scenario? Ln $100 merely shows that 'the poor also smile' and the utility-maximizing thing is quadrillions of impoverished minds!

Comment author: CarlShulman 08 November 2012 05:11:29AM *  2 points [-]

If we continue to use Utility=ln($) then utilities go infinitely negative as you approach zero :).

Comment author: johnlawrenceaspden 09 November 2012 03:06:20PM 2 points [-]

Allowing us to refute the repugnant conclusion. Quadrillions of minds with $(1+e). We should start a campaign to use very large currency units in preparation for the Singularity.

Comment author: Vaniver 07 November 2012 09:45:22PM 0 points [-]

guess what is favored by progressive taxation? Donating.

Sort of? I mean, the primary work here is being done by the deductibility of charitable donations from taxable income. Progressive taxation helps in that charitable donations are cheaper the richer you are (each dollar given away only costs 70 cents, instead of 100 if there were no deduction / you were paying no income taxes), but that's shaping the incentive, not creating it.
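A minimal sketch of the incentive Vaniver describes, assuming a hypothetical 30% marginal rate and a fully deductible donation (`net_cost_of_donation` is an illustrative name):

```python
def net_cost_of_donation(amount, marginal_rate):
    """Out-of-pocket cost of a tax-deductible donation of `amount`."""
    return amount * (1 - marginal_rate)

print(f"{net_cost_of_donation(1.00, 0.30):.2f}")  # 0.70
print(f"{net_cost_of_donation(1.00, 0.00):.2f}")  # 1.00 -- no deduction benefit
```

The higher the marginal rate, the cheaper each donated dollar, which is the sense in which progressive taxation shapes (but does not create) the incentive.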

Comment author: JoshuaFox 07 November 2012 09:16:41PM *  0 points [-]

... 50k ... a billion dollars...

Sure, that's why I said 2/3 and 3/2 rather than more significant multipliers.

Also: Sometimes you settle yourself into a local maximum, and even if it is not a global maximum, not switching may be OK if the local is not too much lower than the global maximum.

favored by progressive taxation? Donating

Yes, I agree that using your tax deduction gives an extra boost to donating.

Comment author: NancyLebovitz 08 November 2012 03:28:26AM 2 points [-]

Shouldn't we include the costs of moving? Even if the social costs are held as negligible (they probably shouldn't be), there's the time spent and the monetary costs of moving.

Comment author: katydee 07 November 2012 09:11:08AM 0 points [-]

Comment author: JoshuaFox 07 November 2012 10:06:50AM *  3 points [-]

Sure, one of the things I most like about having more money is being able to donate more. However, the main consideration of her brother and others in these circumstances is, I strongly suspect, not maximizing their donation capacity, but rather a more generic personal utility calculation.

Comment author: therufs 28 November 2012 07:15:25PM 2 points [-]

It's much less pretty than the PDF, but if anyone else wants a spreadsheet with write-in-able blanks, I have made a Google doc.

Comment author: MaoShan 14 November 2012 03:17:34AM 2 points [-]

There are some good ideas here that I can pick up on. Among the things that I already successfully implement, it may sound stupid, but I think of my different brain modules as different people, and have different names for them. That way I can compliment or admonish them without thinking, "Oh..kay, I'm talking to myself?" That makes it easier to remember that I'm not the only one reacting and making the sole decisions, but avoids turning everything into similar-sounding entities (me, myself, I, my brain, my mind, etc.) Example: This morning, I kept getting the feeling that something was not quite right, I felt lighter for some reason. I recognized that feeling as Jeffery trying to tell me something, so I had to stop and evaluate what I had done that morning so far. I realized that I was still wearing my slippers, and probably would not have realized it until I retracted my kickstand to leave for work. I gave credit where credit is due, and thought (without speaking) "Good catch, Jeffery!" (Jeffery [spelled that way because I "mistyped" it both times just now, before deciding that that's how he wants to spell it] is the one who handles the autopilot functions of my daily life, and while he does his best in unfamiliar situations, usually does not consult and does foolish things unless I have programmed him with routines. He is named after the anthropomorphic half chicken/half goat/half man protector of the "Deadly Maze" in Chowder. I interpreted the Deadly Maze as an allegory for the subconscious mind.)

Comment author: aleksiL 18 November 2012 12:04:21PM 1 point [-]

Interesting, I've occasionally experimented with something similar but never thought of contacting Autopilot this way. Yeah, that's what I'll call him.

I get the feeling that this might be useful in breaking out of some of my procrastination patterns: just call Autopilot and tell him which routine to start. Not tested yet, as then I'd forget about writing this reply.

Comment author: MaoShan 19 November 2012 03:41:42AM 0 points [-]

It's as if your own body is a guy that does his job if you train him right, but makes stupid decisions when something unexpected happens. I just take a more literal approach with the interaction. I also refer to him as "my answering machine" when I am woken up in the middle of the night. It took my wife a while to realize that the person she was talking to was "not me". My answering machine can make perfectly normal-sounding replies to normal questions, but is unable to come up with creative answers to unusual questions, and I have no memory of the events. Another unnamed, possibly separate module runs when my body is alarmed, but I am not yet conscious. It constantly asks for data, verbally questioning other humans nearby, "What is happening? What is going on? What time is it?" Unlike situations with the answering machine, I retain conscious memory of the occurrence, but not from a first-person perspective, more like I remember somebody telling me about what happened, but in this case that person was (allegedly) me.

Comment author: Michelle_Z 24 December 2012 12:51:17AM 0 points [-]

Funny. I do something similar- Except I call mine "Planner," "Want," "Bum," and "Cynic." I never really considered my autopilot mode anything particular. Usually I just do this when I am struggling with motivation, and usually those four concepts are the main issue- Planning to do something, then wanting to do something else, feeling like not doing anything, and realizing I'm not going to do it so why bother anyway... and reminding myself that they're learned habits and I can get rid of it if I bring in new habits.

Comment author: Jakeness 08 November 2012 05:04:10AM 2 points [-]

Thanks for posting this. I always enjoy these "in-practice" oriented posts, as I feel they help me check if I truly understand the concepts I learn here, in a similar way that example problems in textbooks check if I know how to correctly apply the material I just read.

Comment author: aceofspades 09 November 2012 02:00:33AM 3 points [-]

I have read this post and have not been persuaded that people who follow these steps will lead longer or happier lives (or will cause others to live longer or happier lives). I therefore will make no conscious effort to pay much of any regard to this post, though it is plausible it will have at least a small unconscious effect. I am posting this to fight groupthink and sampling biases, though this post actually does very little against them.

Comment author: Swimmer963 09 November 2012 02:09:15AM *  1 point [-]

Longer? Probably not. Happier? Possible, depending on that person's baseline, since we don't know our own desires and acquiring these skills might help, but given the hedonic treadmill effect, unlikely. Achieving more of their interim goals? Possible if not probable. There are a lot of possible goals aside from living longer and being happier.

Comment author: aceofspades 09 November 2012 02:48:55AM *  2 points [-]

I have decided that maximizing the integral of happiness with respect to time is my selfish supergoal and that maximizing the double integral of happiness with respect to time and with respect to number of people is my altruistic supergoal. All other goals are only relevant insofar as they affect the supergoals. I have yet to be convinced this is a bad system, though previous experience suggests I probably will make modifications at some point. I also need to decide what weight to place on the selfish/altruistic components.

But despite my finding such an abstract way of characterizing my actions interesting, the actual determining of the weights and the actual function I'm maximizing are just determined by what I actually end up doing. In fact constructing this abstract system does not seem to convincingly help me further its purported goal, and I therefore cease all serious conversation about it.

Comment author: Swimmer963 09 November 2012 03:24:35AM 3 points [-]

In fact constructing this abstract system does not seem to convincingly help me further its purported goal

I think this is a common problem. That doesn't mean you have to give up on having your second-order desires agree with your first-order desires. It is possible to use your abstract models to change your day-to-day behaviour, and it's definitely possible to build a more accurate model of yourself and then use that model to make yourself do the things you endorse yourself doing (i.e. avoiding having to use willpower by making what you want to want to do the "default.")

As for me, I've decided that happiness is too elusive of a goal–I'm bad at predicting what will make me happier-than-baseline, the process of explicitly pursuing happiness seems to make it harder to achieve, and the hedonic treadmill effect means that even if I did, I would have to keep working at it constantly to stay in the same place. Instead, I default to a number of proxy measures: I want to be physically fit, so I endorse myself exercising and preferably enjoying exercise; I want to have enough money to satisfy my needs; I want to finish school with good grades; I want to read interesting books; I want to have a social life; I want to be a good friend. Taken all together, these are at least the building blocks of happiness, which happens by itself unless my brain chemistry gets too wacked out.

Comment author: aceofspades 11 November 2012 12:35:26AM 0 points [-]

So the normal chain of events here would just be that I argue those are still all subgoals of increasing happiness and we would go back and forth about that. But this is just arguing by definition, so I won't continue along that line.

To the extent I understand the first paragraph in terms of what it actually says at the level of real-world experience, I have never seen evidence supporting its truth. The second paragraph seems to say what I intended the second paragraph of my previous comment to mean. So really it doesn't seem that we disagree about anything important.

Comment author: Swimmer963 11 November 2012 01:40:37AM 0 points [-]

But this is just arguing by definition, so I won't continue along that line.

Agreed. I find it practical to define my goals as all of those subgoals and not make happiness an explicit node, because it's easy to evaluate my subgoals and measure how well I'm achieving them. But maybe you find it simpler to have only one mental construct, "happiness", instead of lots.

The second paragraph seems to say what I intended the second paragraph of my previous comment to mean.

I guess I explicitly don't allow myself to have abstract systems with no measurable components and/or clear practical implications–my concrete goals take up enough mental space. So my automatic reaction was "you're doing it wrong," but it's possible that having an unconnected mental system doesn't sabotage your motivation the same way it does mine. Also, "what I actually end up doing" doesn't, to me, have the connotation of "choosing and achieving subgoals"; it has the connotation of not having goals. But it sounds like that's not what it means to you.

Comment author: chaosmosis 10 November 2012 01:54:56AM 1 point [-]

I would argue that the altruism should be part of the selfish utility function. The reason that you care about other people is because you value other people. If you did not value other people there is no reason they should be in your utility function.

Comment author: wedrifid 10 November 2012 02:10:34AM 1 point [-]

I would argue that the altruism should be part of the selfish utility function.

Excellent! This nuance of what "selfish" means is something I find myself reiterating all too frequently. (Where the latter means I've done it at least three times that I can recall.)

Comment author: aceofspades 11 November 2012 12:28:10AM -3 points [-]

This is reaching the point of just arguing about definitions, so I reject this line of discussion as well.

Comment author: chaosmosis 11 November 2012 12:36:44AM *  -1 points [-]

It's not an argument about definitions, it's an argument about logical priority. Altruistic impulses are logically a subset of selfish ones because all impulses are selfish because they're only experienced internally. (I'm using impulse as roughly synonymous with an action taken because of values). Altruism is only relevant to your morality insofar as you value altruistic actions. Altruism can only be justified on somewhat selfish grounds. (To clarify, it can be justified on other grounds but I don't think those grounds make sense.)

Comment author: Swimmer963 25 November 2012 02:40:56AM 0 points [-]

all impulses are selfish because they're only experienced internally.

I think defining "selfish" as "anything experienced internally" is a very limiting definition that makes "selfish" a pretty useless word. The concept of 'selfishness' can only be applied to human behaviour/motivations–physical-world phenomena like storms can't be selfish or unselfish; it's a mind-level concept. Thus, if you pre-define all human behaviour/motivations as selfish, you rule out the opposite of selfishness existing at all. Which means you might as well not bother with the word "selfish" at all, since there's nothing that isn't selfish.

There's also the argument of common usage–it doesn't matter how you define a word in your head, communication is with other people, who have their own definition of that word in their heads, and most people's definitions are likely to be the common usage of the word, since how else would they learn what the word means? Most people define "selfishness" such that some impulses are selfish (i.e. Sally taking the last piece of cake because she likes cake) and some are not selfish (Sally giving Jack the last piece of cake, even though she wants it, because Jack hasn't had any cake yet and she already had a piece.) Obviously both of those reactions are the result of impulses bouncing around between neurons, but since we don't have introspective access to our neurons firing, it's meaningful for most people to use selfishness or unselfishness as labels.

Comment author: chaosmosis 25 November 2012 06:34:05AM *  0 points [-]

Sally doesn't give Jack the cake because Jack hasn't had any, rather, Sally gives Jack the cake because she wants to. That's why explicitly calling the motivation selfish is useful, because it clarifies that obligations are still subjective and rooted in individual values (it also clarifies that obligations don't mandate sacrifice or asceticism or any other similar nonsense). You say that it's obvious that all actions occur from internally motivated states as a result of neurons firing, but it's not obvious to most people, which is why pointing out that the action stems from the internal desires of Sally is still useful.

Comment author: Swimmer963 25 November 2012 07:29:06AM 0 points [-]

Why not just specify to people that motivations or obligations are "subjective and rooted in individual values"? Then you don't have to bring in the word "selfish", with all its common-usage connotations.

Comment author: chaosmosis 26 November 2012 07:59:40PM *  0 points [-]

I want those common-usage connotations brought in because I want to eradicate the taboo around those common-usage connotations, I guess. I think that people are vilified for being selfish in lots of situations where being selfish is a good thing, at least from that person's perspective. I don't think that people should ever get mad at defectors in Prisoner's Dilemmas, for example, and I think that saying that all of morality is selfish is a good way to fix this kind of problem.

Comment author: TorqueDrifter 26 November 2012 08:25:23PM -1 points [-]

To comment on the linguistic issue, yes this particular argument is silly, but I do think it is legitimate to define a word and then later discover it points out something trivial or nonexistent. Like if we discovered that everyone would wirehead rather than actually help other people in every case, then we might say "welp, guess all drives are selfish" or something.

Comment author: aceofspades 14 November 2012 01:13:36AM -1 points [-]

This line of discussion says nothing on the object level. The words "altruistic" and "selfish" in this conversation have ceased to mean anything that anyone could use to meaningfully alter his or her real world behavior.

Comment author: chaosmosis 14 November 2012 05:23:25AM *  -1 points [-]

Altruistic behavior is usually thought of as motivated by compassion or caring for others, so I think you are wrong. You are the one arguing about definitions in order to trivialize my point, if anything.

Comment author: aceofspades 19 November 2012 08:08:36PM 0 points [-]

The reason I rejected the utility function and why I rejected this argument is that I judged them useless.

What would you recommend people do, in general? I think this is a question that is actually valuable. At the least I would benefit from considering other people's answers to this question.

Comment author: chaosmosis 20 November 2012 10:10:18PM *  0 points [-]

I don't understand how your reply is responsive.

I recommend that people act in accordance with their (selfish) values because no other values are situated so as to be motivational. Motivation and values are brute facts, chemical processes that happen in individual brains, but that actually gives them an influence beyond that of mere reason, which could never produce obligations. My system also offers a solution to the paralysis brought on by infinitarian ethics - it's not the aggregate amount of well being that matters, it's only mine.

Because I believe this, recognizing that altruism is a subset of egoism is important for my system of ethics. I still believe in altruistic behavior, but only that which is motivated by empathy as opposed to some abstract sense of duty or fear of God's wrath or something.

Does my position make more sense now?

Comment author: army1987 07 November 2012 04:16:57PM 2 points [-]

This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep. OTOH I have a few quibbles with some examples:

Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.

For some reason my brain is more comfortable working with numbers than with visualizations. That can be bad for signalling: a few years ago there was a terrorist attack in London which affected IIRC about 300 people; my mother told me “you should call [your friend who's there] and ask him if he's all right”, and I answered “there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in...”; my mother called me heartless before I even finished the sentence.

Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.)

There's a huge difference: someone living in Silicon Valley on $70K + x and considering whether to stay there or move to Santa Barbara and earn x would be used to living on $70K + x; whereas someone living in Santa Barbara on x and considering whether to move to Silicon Valley and earn x + $70K or stay there would be used to living on x. This would affect how much each of them would enjoy a given amount of money. Also, the former would already have a social circle in Silicon Valley, and the latter wouldn't.

Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.

Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren't likely to respond badly, I want to believe they aren't likely to respond badly. What is true is already so; owning up to it doesn't make it worse. The solution to that problem is to think twice, re-read the email, and think about ways to make it less likely to be interpreted in an unintended way before hitting Send.

Comment author: Vaniver 08 November 2012 04:08:44AM 16 points [-]

my mother told me “you should call [your friend who's there] and ask him if he's all right”, and I answered “there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in...”; my mother called me heartless before I even finished the sentence.

Your math is right but your mother has the right interpretation of the situation. If your friend is dead, calling him does neither of you any good! This is a 29,999 out of 30,000 chance to earn brownie points.

Comment author: DaFranker 08 November 2012 03:49:30PM 3 points [-]

A different approach might be to do the math on how likely it is that someone the friend knows was involved in the incident. Or maybe just call to discuss the possible repercussions and the probable overreactions that the local government will have.

However, for most of my own friends, if I did call them in exactly such a situation, they'd tell me almost exactly what army1987 said to their mother. Unless they happened to be dead or lost a friend to the event or something.

Comment author: DaFranker 07 November 2012 04:26:26PM *  11 points [-]

Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren't likely to respond badly, I want to believe they aren't likely to respond badly. What is true is already so; owning up to it doesn't make it worse. The solution to that problem is to think twice, re-read the email, and think about ways to make it less likely to be interpreted in an unintended way before hitting Send.

The thing is, it seems quite clear that the problem wasn't how likely they are to respond badly, but that Anna (?) would visualize and anticipate the negative response beforehand, based on no evidence that they would respond poorly, simply as a programmed mental habit. This would create a vicious circle in which the accumulated negative feelings from past sends make it even more likely that sending feels bad this time, regardless of the actual reactions.

The tactic of smiling reinforces the action of sending emails instead of terrorizing yourself into never sending emails anymore (which I infer from context would be a bad thing), and once you're rid of the looming vicious circle you can then base your predictions of the reaction on the content of the email, rather than have it be predetermined by your own feelings.

(Obligatory nitpicker's note: I agree with pretty much everything you said, I just didn't think that the real event in that example had a bad decision as you seemed to imply.)

Comment author: apophenia 07 November 2012 10:05:51PM *  5 points [-]

This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep.

Interesting you should say that. About a week ago I simplified this into a more literal checklist designed to be used as part of a nightly wind-down, to see if it could maintain or instill habits. I designed the checklist based largely on empirical results from NASA's review of the factors behind the effectiveness of pre-flight safety checklists used by pilots, though I chased down a number of other checklist-related resources. I'm currently testing its effects on myself and others, both making sure it would actually be used and getting the time down to the minimum possible (it's hovering around two minutes).

P.S. I'm not associated with CFAR but the checklist is an experiment on their request.

If you were to test your suggestion for two weeks, I would be interested to hear the results. My prediction (with 80% certainty) is: Lbh jvyy trg cbfvgvir erfhygf sbe n avtug be gjb. Jvguva gra qnlf, lbh jvyy svaq gur yvfg nirefvir / gbb zhpu jbex naq fgbc ernqvat vg, ortva gb tynapr bire vg jvgubhg cebprffvat nalguvat, be npgviryl fgbc gb svk bar bs gur nobir ceboyrzf. (Gur nezl anzr znxrf zr yrff pregnva guna hfhny--zl fgrerbglcr fnlf lbh znl or oberq naq/be qvfpvcyvarq.)

Comment author: Metus 08 November 2012 12:17:11AM 3 points [-]

Can you point us to the more interesting checklist resources?

Comment author: apophenia 18 November 2012 12:45:23AM *  1 point [-]

Absolutely. I can give better resources if you can be more specific as to what you're looking for.

I recommend The Checklist Manifesto first as an overview, as well as a basic understanding of akrasia, and trying and failing to make and use some checklists yourself.

The resources I spent most of my time with were very specific to what I was working on, and so I wouldn't recommend them. However, just in case someone finds it useful, Human Factors of Flight-Deck Checklists: The Normal Checklist draws attention to some common failure modes of checklists outside the checklist itself.

Comment author: army1987 01 December 2012 01:56:30AM 0 points [-]

Lbh jvyy trg cbfvgvir erfhygf sbe n avtug be gjb. Jvguva gra qnlf, lbh jvyy svaq gur yvfg nirefvir / gbb zhpu jbex naq fgbc ernqvat vg, ortva gb tynapr bire vg jvgubhg cebprffvat nalguvat, be npgviryl fgbc gb svk bar bs gur nobir ceboyrzf.

That's indeed what happened.

(Gur nezl anzr znxrf zr yrff pregnva guna hfhny--zl fgrerbglcr fnlf lbh znl or oberq naq/be qvfpvcyvarq.)

That's just a hypocorism for my first name. I have never been in the armed forces. (I regret picking this nickname because it has generated confusion several times, but I've used it on the Internet ever since I was 12 and I'm kind of used to it.)

Comment author: army1987 08 November 2012 03:22:28PM 0 points [-]

This sounds interesting. I wasn't entirely serious, but I'm going to do this for real now. (I haven't decoded the rot13ed part, of course.)

Comment author: BrassLion 12 November 2012 08:48:37PM 1 point [-]

You have the right conclusion but the wrong reason. Most people would appreciate being thought of in a disaster, so calling him if he's alive would be good - except that the phone networks, particularly cell networks, tend to be crippled by overuse in sudden disasters. Staying off the phones if you don't need to make a call helps with this.

Comment author: ialdabaoth 11 November 2012 07:46:12AM *  1 point [-]

I'm currently trying to evaluate how to adjust some of these for problems related to mental illness. For example, 4.3:

If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".)

Whenever I taboo words, I start developing pressured speech, and begin mumbling the tabooed words subconsciously. If I continue to try to force the taboo, this eventually develops into self-harming behavior.

Another example is 5.2:

I quantify consequences—how often, how long, how intense.

Whenever I attempt to quantify consequences, I have to push through absurd imaginings - if I believe someone is angry at me, even if they're a good friend, my imagination tends to produce vivid imagery of them dismembering, raping and torturing me while simultaneously performing actions to keep me alive longer, even if I know they don't even possess the skills necessary to perform the acts I'm imagining. It takes an extraordinary amount of mental effort and energy to push through that to actually quantify consequences.

Another example is 6.2:

I talk to my friends or deliberately use other social commitment mechanisms on myself.

I tend to not have very many friends that I can commit to, and when I do, I tend to only use commitment to perform a self-shaming and self-punishment cycle, rather than to actually goad me to perform the desired behavior.

Comment author: aelephant 12 November 2012 11:30:31AM 1 point [-]

Is your mental illness being treated? Are you seeing someone trained & experienced in managing mental illness? I would put much, much more emphasis on getting to a place where you aren't self-harming than on trying to develop rationality habits, especially if the latter seems to be interfering with the former.

Comment author: ialdabaoth 13 November 2012 12:52:18AM *  2 points [-]

No, because I'm currently not good at keeping a job, and equally not good at navigating the bureaucracies necessary to suckle on the government's teat. "Getting to a place where I'm not self-harming" is a nice pipe dream, but as it is, we optimise for those goals which we can actually stand a reasonable chance of accomplishing.

Put another way: let P_t(n) be my probability of getting into therapy after expending n units of resource on getting into therapy, and U_t(n) the utility of getting therapy after spending those n units; likewise, let P_r(n) and U_r(n) be the probability and utility of becoming more rational after spending n units on that. If I only have n resource units available, and P_t(n) * U_t(n) < P_r(n) * U_r(n), then I know what to spend those n resource units on, no matter how much P_t(n+delta) * U_t(n+delta) > P_r(n+delta) * U_r(n+delta), because I don't have that extra delta worth of resource units.
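The comparison above can be sketched numerically. Every number here is made up purely for illustration; only the structure of the argument matters:

```python
def expected_value(p, u):
    """Expected utility of an option: probability of success times payoff."""
    return p * u

n_available = 1  # total resource units on hand (illustrative)

# Therapy: low odds of navigating the bureaucracy with only n units.
p_therapy, u_therapy = 0.05, 100
# Rationality practice: modest but more achievable gains with the same n units.
p_rational, u_rational = 0.60, 10

ev_therapy = expected_value(p_therapy, u_therapy)     # 0.05 * 100 = 5.0
ev_rational = expected_value(p_rational, u_rational)  # 0.60 * 10  = 6.0

# Spend the n units on whichever option has the higher expected value now,
# regardless of how the comparison might flip if n + delta units existed.
best = "therapy" if ev_therapy > ev_rational else "rationality practice"
```

With these made-up numbers the rationality option wins even though therapy would dominate at a larger budget, which is exactly the point: the decision is conditional on the resources actually available.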

Sometimes poor people make what looks like bad choices from the outside because it's the best choice they have.

Comment author: aelephant 13 November 2012 10:30:10AM 1 point [-]

I'm not much for suckling on the government's teat either. How much of a chance do you think you'd have of keeping a job if you put your mind to it?

There could be other options aside from therapy. A lot of people that I respect have recommend Nathaniel Branden's books. I have heard some about Internal Family Systems (IFS) as well, which as far as I know can be done by yourself. I'm by no means an expert, but maybe these can act as leads for you to get started on your own (presuming you haven't already looked into them).

Comment author: ialdabaoth 13 November 2012 07:50:36PM 2 points [-]

How much of a chance do you think you'd have of keeping a job if you put your mind to it?

Empirically, a very poor one. Or rather, more accurately: I either have a very poor chance of keeping a job if I put my mind to it, OR I have a very poor chance of putting my mind to it. I'm not sure how to tell which is actually the case, right now, but maybe I could tell if I actually put my mind to it (heh).

Unfortunately, since "putting my mind to things" is a big part of what's actually broken, I'm not sure where to proceed - or even whether I should proceed. Often times, my strongest impulse leans towards slapping a big "DEFECTIVE" label on my forehead and tossing myself in the recycle bin.

Comment author: TimS 13 November 2012 08:00:10PM 1 point [-]

I urge you to strongly consider the possibility that your mind is telling you that you don't like this kind of work. At best, defective is a circular label, not an analytical result of your personality.

That may not be the most useful information, economically speaking. But it may help you avoid generalizing your experiences at the current job on to future jobs. In short, you aren't lazy, you just haven't found situations that put you in a position to succeed (by ensuring sufficient appropriate motivation).

Comment author: ialdabaoth 13 November 2012 08:32:47PM *  3 points [-]

I used to think that way. The frustrating thing is, I used to LOVE work of all kind. What I hated was people with arbitrary power over me deliberately sabotaging my work, mostly (it seemed) because they were angry that I enjoyed it so much. One of the most powerful lessons I ever learned was that people at my socioeconomic level don't GET to "enjoy" their work. Even by accident.

I never really learned diplomacy and power politics, primarily due to being taught a form of "learned helplessness" about it when I was very young (I was not in a socioeconomic class where it was appropriate to display the amount of enthusiasm, talent and intelligence that I had, and I didn't know how to hide it).

Unfortunately, this led to making a lot of really, really bad political mistakes, each of which slowly eroded my enthusiasm at doing... well, at this point, at doing anything.

After a few years of being out of practice, I now find that I can't even bring myself to get out of bed in the morning and work on something interesting, because "what's the point?"

To me, there is NO difference between "lazy" and "haven't found situations that put you in a position to succeed". They are IDENTICAL. If society doesn't put you in positions to succeed, it has decided that you are lazy, and that means you ARE lazy. Agency has nothing to do with culpability, only blame.

Comment author: TimS 13 November 2012 09:11:22PM 2 points [-]

Your rules seemed designed to sabotage you by making you feel miserable. The impulse to create scripts of how interactions are supposed to go is a good one, but the point of these scripts is to prepare you to succeed.

You need a new social environment. If none of the people you hang out with is really your friend, stop spending time with them. Particularly if they aren't emotionally safe.

We talked about boardgaming as one possible new environment. What about charitable volunteering? If you find the right charity, the organizations are desperate for your help.

Regardless of what specific thing you do, find something to succeed at. Don't set the bar ridiculously high - if what you can do is show up, then find something where showing up is success. You are absolutely worth it. Your negative feelings are a habit that you can break.

Where do you live? Maybe I can help? (Private message if you prefer).

Comment author: ialdabaoth 13 November 2012 09:19:58PM 2 points [-]

This post is being made while repressing a massive array of scripted responses, so if it bounces around or seems incoherent, it's because only a VERY small portion of my brainpower is currently available for rational analysis.

  1. I tend to sabotage friendships, due to being inherently distrustful / untrustworthy (my cynical disposition has led me to believe that these are ultimately the same thing). Thus, your offer to help personally is admirable, but I have a very high threshold to pass before I can trust it as actually helpful. Does this make sense?

  2. I've performed actions of charitable volunteering, but over the past few years I've had very little energy for anything. I tend to have less than half an hour's worth of useful energy per day for anything that involves leaving my little hovel, and by the end of that half an hour I tend to start socially self-destructing.

  3. It's not as much a problem that friends aren't emotionally safe for me, as that I am not emotionally safe for me. Actual friends tend to actually empathize, which means that they quickly become freaked out and leave when they realize how helpless they are to do anything but watch me self-harm. This provides a filter that ensures that when I DO absolutely need emotional interaction with other human beings, the only ones who are left are the ones who don't care as much about the waves of misery I'm exuding.

Comment author: TimS 13 November 2012 09:27:00PM *  1 point [-]

Thus, your offer to help personally is admirable, but I have a very high threshold to pass before I can trust it as actually helpful. Does this make sense?

Makes sense. Whether you believe it or not, I'm not doing this for my benefit. I care about you, and so does everyone else who is offering you advice.

This post is being made while repressing a massive array of scripted responses.

Do you think these scripts make you happier? Are there changes to the scripts that you can imagine that would cause them to make you happier?

More generally, is there any change you could make in your life, that you think you would really make, that would lead to any increased happiness? If there are reasons not to make that change, do you think those reasons are realistic in likelihood and in magnitude?

My experience with anxiety is that the feelings never went away, I just got better at doing what I thought needed doing, even with the anxious feelings.

Comment author: TimS 13 November 2012 09:18:01PM 0 points [-]

Also, this (warning, quite emotionally raw).

Comment author: ialdabaoth 13 November 2012 09:23:38PM 3 points [-]

Heh. Believe it or not, that's not as much of a problem. I've lived with constant suicidal ideation for almost 27 years now, since I was 12. I've become almost completely inured to it, and I've performed enough unsuccessful attempts that my mid-brain has learned very well not to bother. It's amusing to think that learned helplessness can be turned into a tool to combat suicidal ideation, but there it is. (I imagine this is why so many anti-depressants increase the risk of suicide - the learned helplessness is a tighter cycle, so it gets lifted faster, at which point the ideation hasn't faded yet and suddenly you imagine the possibility of something actually working, and it all finally being over for real.)

Comment author: incariol 11 November 2012 11:42:09PM 1 point [-]

What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention for a few minutes to the issue at hand"?

It sounds so very simple, yet I routinely fail to do it: when I try to solve some Project Euler problem or another and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down, and solve the bloody thing.

Comment author: Steven_Bukal 10 November 2012 07:34:30PM 1 point [-]

Looks like a very useful list. One comment: I found the example in 2(a) a bit complicated and very difficult to parse.

Comment author: MarkL 08 November 2012 06:34:17PM *  1 point [-]

Something to add: allocating attention in the correct order: 1. emotions 2. felt meaning 3. verbal thoughts

Otherwise you have the failure mode of avoiding painful emotions (even if they're being triggered erroneously) and then all sorts of bad things happen. So check in with (1) before (2) and (3). And check in with (2) before applying (3), because otherwise you're using cached thoughts.

Comment author: jooyous 08 November 2012 03:33:50AM 1 point [-]

At some point I started feeling like my bf is more interested in telling me things than in having a conversation with me. So I started trying to flag the instances where he did it and the instances where he didn't, and it kinda felt like it matched my feeling, since I had several more examples of one than the other. But I didn't document them carefully or anything, so how do I know I'm not falling into the confirmation bias trap? Or is this just the wrong way to handle something that started out as a ... feeling?

Comment author: TheOtherDave 08 November 2012 02:55:35PM 2 points [-]

In your position, I would do a few different things.

One is what you describe: actually count instances and see if the pattern conforms to my expectations.

But also, I would try to articulate more clearly what the choices are. That is, what do I look for when I want to see if he is interested in having a conversation? Am I looking for him to listen to what I have to say? To ask questions about it? To not challenge it when he disagrees? To look directly at me and not do other things while I'm talking? To allow me to pause in the middle of what I'm saying without treating that as an opportunity to change the subject? Something else? All of the above?

Also, I would ask myself what would follow if it turned out that I was overcounting confirmations. That is, let's say I conclude that one thing that makes me feel like my boyfriend isn't interested in having a conversation with me is when he interrupts me. Suppose I start actually counting instances and conclude that he only interrupts me one conversation out of ten, when I had estimated it was nine conversations out of ten. It is likely, then, that I'd succumbed to confirmation bias.

But... what follows from that?

One possibility is "Oh... well, 10% interruptions isn't that big a deal. I should get over it."
Another possibility is "Clearly, 10% interruptions is enough to upset me. We should try for a lower rate."

Knowing how I would go about making that choice for a measured probability once I have it is, IME, an important part of actually improving the system. Otherwise I'm just making measurements.

Comment author: jooyous 08 November 2012 06:27:44PM 0 points [-]

Yeah, I think this is the hardest part because in some cases, examining the actual facts does make me feel better. But in this case, if it does turn out to be 10% but the bad feeling doesn't go away, I'm going to feel like a jerk. Also, it's impossible to compare to the past at this point, which is when it felt like we had more real conversations, but I have no data from it because back then I didn't have any reason to track it.

Comment author: TheOtherDave 08 November 2012 06:45:52PM 0 points [-]

if it does turn out to be 10% but the bad feeling doesn't go away, I'm going to feel like a jerk

Why?

Comment author: Decius 08 November 2012 07:00:53AM 2 points [-]

To break confirmation bias, you need an objective log. Write down every time you recognize a confirming event, as well as every time you recognize an event which is nonconfirming. Then, estimate the likelihood that you would recognize and write down a confirming event, and the likelihood that you would recognize and write down a nonconfirming event. Use your surprise that a nonconfirming event just occurred, as well as your surprise that you noticed it and made a note of it, to form that estimate.

If you find yourself more surprised that you made a note of a nonconfirming event than that it happened, it probably happens much more often than you note it.
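One way to read this advice: divide each logged count by your estimated probability of noticing (and logging) that kind of event. A minimal sketch, with all counts and probabilities invented purely for illustration:

```python
# Raw tallies from a hypothetical objective log.
logged_confirming = 18
logged_nonconfirming = 2

# Estimated chance of noticing and logging each kind of event:
# you almost always notice what fits your theory, while counterexamples
# tend to slip by. These are the surprise-based estimates described above.
p_notice_confirming = 0.9
p_notice_nonconfirming = 0.2

# Correct each count for selective noticing.
est_confirming = logged_confirming / p_notice_confirming        # ~20
est_nonconfirming = logged_nonconfirming / p_notice_nonconfirming  # ~10

# The raw log suggests a 9:1 ratio of confirming to nonconfirming events;
# after correcting for noticing bias, the ratio is closer to 2:1.
corrected_ratio = est_confirming / est_nonconfirming
```

The point of the sketch is that the same log can support a much weaker conclusion once you account for how unevenly you notice things.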

Comment author: Manfred 08 November 2012 09:59:42AM 1 point [-]

This seems tricky. What is (I would guess) important about your situation is that you want to have more conversations with him. So hey, if you want to have more conversations, do things that will result in that happening.

If your number of conversations changes noticeably and that feeling doesn't go away, or you get the same feeling about something else instead, then yeah, maybe the root cause is something else. (It's like when I'm procrastinating and I feel like I really want to visit website X, and then I feel I really want to read book Y, but the feeling is really just "procrastination-feeling" from not wanting to start chore Z.)

Comment author: jtmedley21 19 November 2012 10:36:12PM *  0 points [-]

Great list. My guide post for rationality and related issues has been the works of Carl Sagan, as he had many books and good advice for thinking critically. His works are an absolute must read (or watch) for anybody wanting to wade through the mass of misdirection that exists in the world.

Comment author: PaulingL 17 November 2012 01:50:31AM 0 points [-]

This all sounds quite groovy, but are there any suggestions on how I could go about incorporating them into my daily pattern of thought? I wonder if perhaps an Anki deck would have any merit whatsoever in accomplishing this...

Comment author: Iabalka 14 November 2012 02:56:19PM *  0 points [-]

Why are these rationality habits? Based on what? All the examples are personal. Isn't it possible to also give scientific examples for each habit: study ..... shows that ....., hence 1) the habit is useful for dealing with this bias and 2) it doesn't create or reinforce other biases?

Comment author: John_Maxwell_IV 13 November 2012 06:27:49AM *  0 points [-]

Another one: you see a way to do things that in theory might work better than what everyone else is doing, but that in practice no one seems to use. Do you investigate it and consider exploiting it?

Example: You're trying to get karma on reddit. You notice that http://www.reddit.com/r/randomization/ has almost a million subscribers but no new submissions in the past two months. Do you think "hm, that's weird" and keep looking for a subreddit to submit your link to, or do you think "oh wow, karma feast!"?

Comment author: Transfuturist 21 September 2013 07:39:36PM 1 point [-]

Third option: turn the subreddit's style off (if you have RES), or subscribe yourself and see what happens to the number to discover what they've been doing.

Comment author: shminux 07 November 2012 08:00:22PM 0 points [-]

For each item, you might ask yourself: did you last use this habit...

Maybe it's worth a poll, if someone feels like creating one. I'm not sure how to make a multi-level poll and it probably would be too presumptuous of me to create 24 replies with one poll in each.

Comment author: AnnaSalamon 08 November 2012 12:23:42AM 1 point [-]

It's easy to make a checklist by going to Google docs / Google drive, clicking "create", and choosing "form".

Comment author: NancyLebovitz 08 November 2012 03:47:56AM 7 points [-]

The Checklist Manifesto is very interesting about what goes into an excellent checklist rather than a casually constructed checklist. It's about institutional checklists rather than personal checklists, though.

Comment author: Hawisher 07 November 2012 08:03:59PM 0 points [-]

You can't do multi-response polls? As in, check all that apply?

Comment author: shminux 07 November 2012 08:15:28PM 0 points [-]

There are 24 separate subquestions with 6 answer options each.

Comment author: FiftyTwo 08 November 2012 12:19:09AM 0 points [-]

The PDF version is very nice looking and very readable, thanks for making it. I think people on here often underestimate the benefits of low hanging aesthetic fruit.