As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical, and more into fine-grained habits.

Below is the checklist of rationality habits we have been using in the minicamps' opening session.  It was co-written by Eliezer, myself, and a number of others at CFAR.  As mentioned below, the goal is not to assess how "rational" you are, but, rather, to develop a personal shopping list of habits to consider developing.  We generated it by asking ourselves, not what rationality content it's useful to understand, but what rationality-related actions (or thinking habits) it's useful to actually do.

I hope you find it useful; I certainly have.  Comments and suggestions are most welcome; it remains a work in progress. (It's also available as a pdf.) 

---

This checklist is meant for your personal use so you can have a wish-list of rationality habits, and so that you can see if you're acquiring good habits over the next year—it's not meant to be a way to get a 'how rational are you?' score, but, rather, a way to notice specific habits you might want to develop.  For each item, you might ask yourself: did you last use this habit...
  • Never
  • Today/yesterday
  • Last week
  • Last month
  • Last year
  • Before the last year

  1. Reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination.
    1. When I see something odd - something that doesn't fit with what I'd ordinarily expect, given my other beliefs - I successfully notice it, promote it to conscious attention, and think "I notice that I am confused" or some equivalent thereof. (Example: You think that your flight is scheduled to depart on Thursday. On Tuesday, you get an email from Travelocity advising you to prepare for your flight “tomorrow”, which seems wrong. Do you successfully raise this anomaly to the level of conscious attention? Based on the experience of an actual LWer who failed to notice confusion at this point and missed their flight.)

    2. When somebody says something that isn't quite clear enough for me to visualize, I notice this and ask for examples. (Recent example from Eliezer: A mathematics student said they were studying "stacks". I asked for an example of a stack. They said that the integers could form a stack. I asked for an example of something that was not a stack.) (Recent example from Anna: Cat said that her boyfriend was very competitive. I asked her for an example of "very competitive." She said that when he’s driving and the person next to him revs their engine, he must be the one to leave the intersection first—and when he’s the passenger he gets mad at the driver when they don’t react similarly.)


    3. I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode. (Recent example from Anna: Noticed myself explaining to myself why outsourcing my clothes shopping does make sense, rather than evaluating whether to do it.)


    4. I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration. (Recent example from Anna: I have a failure mode where, when I feel socially uncomfortable, I try to make others feel mistaken so that I will feel less vulnerable. Pulling this thought into words required repeated conscious effort, as my mind kept wanting to just drop the subject.)


    5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")


  2. Questioning and analyzing beliefs (after they come to your attention).
    1. I notice when I'm not being curious. (Recent example from Anna: Whenever someone criticizes me, I usually find myself thinking defensively at first, and have to visualize the world in which the criticism is true, and the world in which it's false, to convince myself that I actually want to know. For example, someone criticized us for providing inadequate prior info on what statistics we'd gather for the Rationality Minicamp; and I had to visualize the consequences of [explaining to myself, internally, why I couldn’t have done any better given everything else I had to do], vs. the possible consequences of [visualizing how it might've been done better, so as to update my action-patterns for next time], to snap my brain out of defensive-mode and into should-we-do-that-differently mode.)


    2. I look for the actual, historical causes of my beliefs, emotions, and habits; and when doing so, I can suppress my mind's search for justifications, or set aside justifications that weren't the actual, historical causes of my thoughts. (Recent example from Anna: When it turned out that we couldn't rent the Minicamp location I thought I was going to get, I found lots and lots of reasons to blame the person who was supposed to get it; but realized that most of my emotion came from the fear of being blamed myself for a cost overrun.)


    3. I try to think of a concrete example that I can use to follow abstract arguments or proof steps. (Classic example: Richard Feynman being disturbed that Brazilian physics students didn't know that a "material with an index" meant a material such as water. If someone talks about a proof over all integers, do you try it with the number 17? If your thoughts are circling around your roommate being messy, do you try checking your reasoning against the specifics of a particular occasion when they were messy?)


    4. When I'm trying to distinguish between two (or more) hypotheses using a piece of evidence, I visualize the world where hypothesis #1 holds, and try to consider the prior probability I'd have assigned to the evidence in that world; then I do the same for the world where hypothesis #2 holds, and see whether the evidence seems more likely, or more specifically predicted, in one world than the other. (Historical example: During the Amanda Knox murder case, after many hours of police interrogation, Amanda Knox turned some cartwheels in her cell. The prosecutor argued that she was celebrating the murder. Would you, confronted with this argument, try to come up with a way to make the same evidence fit her innocence? Or would you first try visualizing an innocent detainee, then a guilty detainee, to ask with what frequency you think such people turn cartwheels during detention, to see if the likelihoods were skewed in one direction or the other?)
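The visualization in the item above is an informal likelihood-ratio computation. A minimal sketch in Python, with made-up probabilities for the cartwheel example (none of these numbers come from the post):

```python
# Sketch of a likelihood-ratio check: is evidence E more probable under H1 or H2?
# All probabilities below are invented for illustration.

def likelihood_ratio(p_e_given_h1: float, p_e_given_h2: float) -> float:
    """How many times more likely the evidence is under H1 than under H2."""
    return p_e_given_h1 / p_e_given_h2

# E = "detainee turns cartwheels"; H1 = innocent, H2 = guilty.
# If you'd guess detainees rarely cartwheel either way, at similar rates:
lr = likelihood_ratio(0.01, 0.01)
print(lr)  # 1.0 -> the cartwheels don't favor guilt over innocence at all
```

A ratio near 1 means the evidence barely distinguishes the hypotheses, which is exactly the point of the cartwheel example.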


    5. I try to consciously assess prior probabilities and compare them to the apparent strength of evidence. (Recent example from Eliezer: Used it in a conversation about apparent evidence for parapsychology, saying that for this I wanted p < 0.0001, like they use in physics, rather than p < 0.05, before I started paying attention at all.)
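Eliezer's demand for a stricter p-value can be read in Bayesian terms: with a very low prior, modest evidence leaves the posterior tiny. A sketch with invented numbers (the prior and the Bayes factors are illustrative guesses, not figures from the post):

```python
# Posterior probability from a prior and a Bayes factor (odds form of Bayes' rule).
# The specific numbers below are invented for illustration.

def posterior_prob(prior: float, bayes_factor: float) -> float:
    """Multiply prior odds by the Bayes factor, convert back to a probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# A 1-in-a-million prior for a parapsychology effect, and (generously)
# treating "p < 0.05" evidence as a Bayes factor of about 20 in its favor:
print(posterior_prob(1e-6, 20))    # ~2e-5: still almost certainly false
# Much stronger evidence is needed before the hypothesis deserves attention:
print(posterior_prob(1e-6, 1e4))   # ~0.01
```

This is why a low-prior claim needs far stronger evidence than p < 0.05 before it is worth paying attention to at all.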


    6. When I encounter evidence that's insufficient to make me "change my mind" (substantially change beliefs/policies), but is still more likely to occur in world X than world Y, I try to update my probabilities at least a little. (Recent example from Anna: Realized I should somewhat update my beliefs about being a good driver after someone else knocked off my side mirror, even though it was legally and probably actually their fault—even so, the accident is still more likely to occur in worlds where my bad-driver parameter is higher.)
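Updating "at least a little" is just Bayes' rule with a likelihood ratio close to 1. A sketch of the mirror example with invented numbers (the 0.2 prior and the 1.5x factor are hypothetical):

```python
# Small Bayesian update in odds form; the numbers are hypothetical.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after multiplying prior odds by a likelihood ratio."""
    posterior_odds = prior / (1.0 - prior) * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Suppose P(I'm a below-average driver) = 0.2 beforehand, and such accidents
# are 1.5x as likely to happen to below-average drivers:
posterior = update(0.2, 1.5)
print(round(posterior, 3))  # 0.273 -> a small but real shift, not a reversal
```

The posterior moves from 0.20 to about 0.27: not enough to "change your mind", but enough that ignoring it would be an error.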


  3. Handling inner conflicts: when different parts of you are pulling in different directions and you want different things that seem incompatible; responses to stress.
    1. I notice when I and my brain seem to believe different things (a belief-vs-anticipation divergence), and when this happens I pause and ask which of us is right. (Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.)


    2. When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))


    3. When facing a difficult decision, I check which considerations are consequentialist - which considerations are actually about future consequences. (Recent example from Eliezer: I bought a $1400 mattress in my quest for sleep, over the Internet, hence much cheaper than the mattress I tried in the store, but non-returnable. When the new mattress didn't seem to work too well once I actually tried sleeping nights on it, this was making me reluctant to spend even more money trying another mattress. I reminded myself that the $1400 was a sunk cost rather than a future consequence, and didn't change the importance and scope of future better sleep at stake (occurring once per day and a large effect size each day).)


  4. What you do when you find your thoughts, or an argument, going in circles or not getting anywhere.
    1. I try to find a concrete prediction that the different beliefs, or different people, definitely disagree about, just to make sure the disagreement is real/empirical. (Recent example from Michael Smith: Someone was worried that rationality training might be "fake", and I asked if they could think of a particular prediction they'd make about the results of running the rationality units, that was different from mine, given that it was "fake".)


    2. I try to come up with an experimental test, whose possible results would either satisfy me (if it's an internal argument) or that my friends can agree on (if it's a group discussion). (This is how we settled the running argument over what to call the Center for Applied Rationality—Julia went out and tested alternate names on around 120 people.)


    3. If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".) (Recent example from Anna: Advised someone to stop spending so much time wondering if they or other people were justified; was told that they were trying to do the right thing; and asked them to taboo the word 'trying' and talk about how their thought-patterns were actually behaving.)


  5. Noticing and flagging behaviors (habits, strategies) for review and revision.
    1. I consciously think about information-value when deciding whether to try something new, or investigate something that I'm doubtful about. (Recent example from Eliezer: Ordering a $20 exercise ball to see if sitting on it would improve my alertness and/or back muscle strain.) (Non-recent example from Eliezer: After several months of procrastination, and due to Anna nagging me about the value of information, finally trying out what happens when I write with a paired partner; and finding that my writing productivity went up by a factor of four, literally, measured in words per day.)


    2. I quantify consequences—how often, how long, how intense. (Recent example from Anna: When we had Julia take on the task of figuring out the Center's name, I worried that a certain person would be offended by not being in control of the loop, and had to consciously evaluate how improbable this was, how little he'd probably be offended, and how short the offense would probably last, to get my brain to stop worrying.) (Plus 3 real cases we've observed in the last year: Someone switching careers is afraid of what a parent will think, and has to consciously evaluate how much emotional pain the parent will experience, for how long before they acclimate, to realize that this shouldn't be a dominant consideration.)


  6. Revising strategies, forming new habits, implementing new behavior patterns.
    1. I notice when something is negatively reinforcing a behavior I want to repeat. (Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that and (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.)


    2. I talk to my friends or deliberately use other social commitment mechanisms on myself. (Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had some juice left over when work was done. I looked at Michael Smith and jokingly said, "But if I don't drink this now, it will have been wasted!" to prevent the sunk cost fallacy.) (Example from Eliezer: When I was having trouble getting to sleep, I (a) talked to Anna about the dumb reasoning my brain was using for staying up later, and (b) set up a system with Luke where I put a + in my daily work log every night I showered by my target time for getting to sleep on schedule, and a — every time I didn't.)


    3. To establish a new habit, I reward my inner pigeon for executing the habit. (Example from Eliezer: Multiple observers reported a long-term increase in my warmth / niceness several months after... 3 repeats of 4-hour writing sessions during which, in passing, I was rewarded with an M&M (and smiles) each time I complimented someone, i.e., remembered to say out loud a nice thing I thought.) (Recent example from Anna: Yesterday I rewarded myself with a smile and a happy gesture for noticing that I was doing a string of low-priority tasks without doing the metacognition for putting the top priorities on top. Noticing a mistake is a good habit, which I’ve been training myself to reward, instead of just feeling bad.)


    4. I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)


    5. I use the outside view on myself. (Recent example from Anna: I like to call my parents once per week, but hadn't done it in a couple of weeks. My brain said, "I shouldn't call now because I'm busy today." My other brain replied, "Outside view, is this really an unusually busy day and will we actually be less busy tomorrow?")

---

189 comments

This may be the single most useful thing I've ever read on LessWrong. Thank you very, very much for posting it.

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

Often, when I am procrastinating, I find that the source of my procrastination is a feeling of being overwhelmed. In particular, I don't know where to begin on a task, or I do but the task feels like a huge obstacle towering over me. So when I think about the task, I feel a crushing sense of being overwhelmed; the way I escape this feeling is by procrastination (i.e. avoiding the source of the feeling altogether).

When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)

I picked this strategy up after realizing that the way I approached large programming projects (write the main function, then write each of the subroutines that it calls, etc.) could be applied to life in general. Now I'm about to apply it to the task of writing an NSF fellowship application. =)
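The programming analogy above can be made literal. A toy sketch of recursive task breakdown (the task names and the `flatten` helper are made up for illustration):

```python
# Toy recursive decomposition: walk a task tree depth-first and collect the
# leaf steps into a flat to-do list. All task names are hypothetical.

def flatten(task, subtasks, todo=None):
    """Expand `task` using the `subtasks` mapping; leaves become to-do items."""
    if todo is None:
        todo = []
    if task in subtasks:            # still too big: break it down further
        for sub in subtasks[task]:
            flatten(sub, subtasks, todo)
    else:                           # manageable step: put it on the list
        todo.append(task)
    return todo

subtasks = {
    "write NSF application": ["draft personal statement", "draft research plan"],
    "draft research plan": ["outline aims", "write methods section"],
}
print(flatten("write NSF application", subtasks))
# ['draft personal statement', 'outline aims', 'write methods section']
```

Each recursive call is the "make a list of subsubtasks" step: a node that still has children gets expanded, and only the manageable leaves end up on the list.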

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

It's a classic self-help technique (especially in 'Getting Things Done') for a reason: it works.

jooyous · 5 points · 11y
Hello! I am procrastinating on writing the NSF fellowship! High five! My current subproblem consists of filling in all the instances of "INSPIRATIONAL STUFF" with actual inspirational stuff, so this particular subproblem is looking pretty difficult. :(
JulianMorrison · 9 points · 11y
Well, your task spec is broken, so no wonder your brain won't be whipped into doing it. "Inspirational stuff" is a trigger for thinking in terms of things like advertising or religious revivals - emotional grabs which are intended to disengage (or even flimflam) the reasoning faculties. Any rationalist would flinch away. Re-frame: visualize your audience. You are looking to simply and clearly convey whatever part of their far-mode utility function is advanced by the thing you are pushing.
amcknight · 4 points · 11y
For the slightly more advanced procrastinator that also finds a large sequence of tasks daunting, it might help to instead search for the first few tasks and then ignore the rest for now. Of course, sometimes in order to find the first tasks you may need to break down the whole task, but other times you don't.
sketerpot · 1 point · 11y
This article would probably benefit from being re-read in smaller chunks over the course of several days. There are a lot of things in it that need to be thought about seriously in order to be effective, and I agree with you about its usefulness.
Swimmer963 (Miranda Dixon-Luinenburg) · 1 point · 11y
I think the most important aspect of this, for me anyway, is being able to dump most of what you're working on out of your working memory, trusting yourself that it's organized on paper, so that you can free up more brain space to do each of the sub-parts.
lukeprog · 0 points · 11y
See: How and Why to Granularize.

Very nice list! I feel like this one in particular is one of the most important ones:

I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)

To give my own example: I try to be vegetarian, but occasionally the temptation of meat gets the better of me. At some point I realized that whenever I walked past a certain hamburger place - which was something that I typically did on each working day - there was a high risk of me succumbing. Obvious solution: modify my daily routine to take a slightly longer route which avoided any hamburger places. Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

My personal example: arranging to go exercise on the way to or from somewhere else will drastically increase the probability that I'll actually go. There's a pool a 5 minute bike ride from my house, which is also on the way home from most of the places I would be biking from. Even though the extra 10 minutes round trip is pretty negligible (and counts as exercise itself), I'm probably 2x as likely to go if I have my swim stuff with me and stop off on the way home. The effect is even more drastic for my taekwondo class: it's a 45 minute bike ride from home and about a 15 minute bike ride from the campus where I have most of my classes. Even if I finish class at 3:30 pm and taekwondo is at 7 pm, it still makes more sense for me to stay on campus for the interim–if I do, there's nearly 100% likelihood that I'll make it to taekwondo, but if I go home and get comfy, that drops to less than 50%.

For me this was the biggest insight that dramatically improved my ability to form habits. I don't actually decide things most of the time. Agency is something that only occurs intermittently. Therefore I use my agency on changing what sorts of things I am surrounded by rather than on the tasks themselves. This works because the default state is to simply be the average of what I am surrounded by.

Cliche example: not having junk food in the house improves my diet by making it take additional work to go out and get it.

incariol · 5 points · 11y
Another example: as I don't feel like getting in a relationship for the foreseeable future, I try to avoid circumstances with lots of pretty girls around, e.g. not going to certain parties, taking walks in those parts of the forest where I don't expect to meet any, and in general, trying to convince other parts of my brain that the only girl I could possibly be with exists somewhere in the distant future or not at all (if she can't do a spell or two and talk to dragons, she won't do ;-)). It also helps being focused on math, programming and abstract philosophy - and spending time on LW, it seems. :)
A1987dM · 9 points · 11y
I don't think you'd be likely to find yourself in a relationship despite not wanting to by going to parties with lots of pretty girls around, let alone by walking on a street where girls also walk rather than through a forest. And not developing social skills may make things much harder should you ever decide to try and get into a relationship later in your life.
DaFranker · 1 point · 11y
Aha, but the clever arguer could respond that you could be likely to find yourself wanting to despite not wanting to want to be in a relationship, and thus that avoidance is a twice-effective method of willpower conservation! Of course, it's unlikely that the above is true and applicable to this case. If you're to end up wanting it, and wanting it enough to compensate for the opportunity costs (regarding other things you might want) incurred by eventual willpower expenses or time spent "succumbing" and attempting to get into a relationship, then I think it trivially follows that you should already have updated towards the more reflectively coherent behavior that seems to give higher expected utility. After all, we want to win.
apotheon · 3 points · 11y
It's the "Lead me not into temptation, but deliver me from weevils!" tactic. Well . . . maybe not weevils, but not evil either, in this case. Your objection to the ultimate utility of avoidance doesn't seem to take the desire to avoid distraction and wasted time even when successfully resisting the biological urges toward relationship-establishing behavior into account. Even if you (for some nonspecific definition of "you") simply find yourself waylaid for a few minutes by a pretty girl, but ultimately ready to move on, the time spent not only in those few moments but also in thinking about it later on may prove a distraction from other things, regardless of whether you allow yourself to get caught up enough to actively pursue a relationship with her.
DaFranker · 2 points · 11y
Well, yeah, my objection does take it into account, but I was being unfair in my implicit assumptions because I didn't think it likely that anyone here would object. Basically, this is where I lumped an implicit: "For most humans, the desire and expected benefits of successfully entering a relationship are much greater in terms of evolved values than the opportunity costs incurred, and it is reasonable to expect that the gains obtained from this would free up enough mental resources to actually make faster, rather than slower, progress on other goals of interest in the case of well-motivated individuals with above-average instrumental rationality." However, estimating the costs you mentioned for humans-on-average is difficult for me, due to lack of data. Picture me as wearing a "typical mind fallacy warning!" badge on this particular issue.
incariol · −1 points · 11y
Well, it has happened to me before - girls really can be pretty insistent. :) But this is not actually what concerns me - it's the distraction/wasted time induced by a pretty-girl-contact event, like apotheon explained below.
inblankets · 6 points · 11y
I disagree with the commenters below-- I think you're fairly likely to find yourself wanting to be in a relationship if you're not careful. I'm a female, and I don't want to get married or have kids. Unfortunately, I'm 24, and some part of me/the body is really trying to marry me off and give me baybehs. So I try not to take in too much media that normalizes this vs. normalizing my goals, I don't babysit, and I am open about my intent so as not to attract invitations.
aelephant · 1 point · 11y
Set Future You up for success, rather than failure. Edit: Thought of a personal example. I know that if I scratch my head, my head will become more itchy. It is a vicious cycle. If I cut my nails short, it seems to help. In the moment, I might not want to cut my nails because there is no immediate value. But it is, in a sense, "modifying my environment" so that in the future I'll be less likely to fall into the itchy-head trap.

Awesome list. I'm interested in the way there are 24 questions that are grouped into 6 overarching categories. Do they empirically cluster like this in actual humans? It would be fascinating to get a few hundred responses to each question and do dimensional analysis to see if there is a small number of common core issues that can be communicated and/or adjusted more efficiently :-)
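The dimensional analysis proposed above could be sketched as a principal-component decomposition of the response matrix. Everything below runs on simulated data, since no real responses exist in the post:

```python
import numpy as np

# Simulated responses: 300 people x 24 checklist items, scored 0-5.
# (Synthetic placeholder -- real survey data would go here.)
rng = np.random.default_rng(0)
responses = rng.integers(0, 6, size=(300, 24)).astype(float)

# PCA via SVD of the mean-centered response matrix.
centered = responses - responses.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# If the first few components carried most of the variance, the 24 items
# would empirically cluster into that many underlying habits. With uniform
# random answers, as here, no such structure should appear.
print(explained[:6].round(3))
```

With real data, a sharp drop-off after (say) six components would be evidence that the six hand-made categories really do capture the empirical clustering.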

I'd like to add "noticing when you don't know something." When someone asks you a question, it's surprisingly tempting to try to be helpful and offer them an answer even when you don't have the necessary knowledge to provide an accurate one. It can be easy to infer what the truth might be and offer that as an answer, without explaining that you're just guessing and don't actually know. (Example: I recently purchased a new television and my co-worker asked me what sort of Parental Controls it offered. I immediately started providing him an answer I had inferred from limited knowledge, and it took me a moment to realize I didn't actually know what I was talking about and instead tell him, "I don't know.")

This is essentially the problem of confabulation mentioned here; in this case it's a confabulation of knowledge about the world, as opposed to confabulating knowledge about the self. In terms of the map/territory analogy, this would be a situation where someone asks you a question about a specific area of your map, and you choose to answer as if that section of your map is perfectly clear to you, even when you know that it's blurry. Don't treat a blurry map as if it were clear!

John_Maxwell · 3 points · 11y
I like your comment, but one problem is that telling people you don't know stuff projects low status. I think most people, including me, really know very little, but if you're honest about this all the time then this can contribute to persistent low status. (I tried the "don't care about status" thing for a while, but being near the bottom of the social totem pole just doesn't seem to work for me psychologically. So lately I've decided to optimize for status everywhere at least somewhat.)
A1987dM · 4 points · 11y
That only happens if it's credible, otherwise it's taken as counter-signalling. When I say I don't know much about something, people generally realize I'm just holding myself to a high standard and don't genuinely believe I know less than the typical person; the problem is that they also think that when I actually don't know shit about something (in the sense the typical person would use that phrase). Conversely, showing off knowledge can come across as arrogant in certain situations. Even if you don't care about status, I'd say that what X (e.g. “I don't know”) actually means in English is what English speakers actually mean when they say X, regardless of etymology (huh, it sounds tautological when put this way, doesn't it?), and if you're aware of this and use X to mean something else you're lying (unless your interlocutor knows you mean something else).
handoflixue · 0 points · 11y
"telling people you don't know stuff projects low status" If it's a random stranger, I don't care about status. If it's a friend or a fellow "geek", it's probably a high status signal to send. That pretty much leaves work as the only area I'd potentially run in to this, and I've found "I don't know; but I can find out!" works wonders (part of this is that at work, I'm presumably expected to actually know these things) I've found "I don't know, but isn't it fun to find out!" is a fairly successful tactic, but I'm also deliberately aiming to attract geeks and people who like that answer in my life :)
A1987dM · 6 points · 11y
“A physicist is someone who answers all questions with ‘I don't know, but I can find out.’” -- Someone (possibly Nicola Cabibbo, quoting from my memory)
wedrifid · 4 points · 11y
Rarely. It is often a useful signal to send but seldom high status.
handoflixue · 3 points · 11y
I don't really understand the reply. Are you saying it's rarely high status even within my social circles? Or are you saying that my social circles are unusual? To the former, all I can say is that we apparently have very different experiences. To the latter... well, duh, that's WHY I specified that it was specific to THOSE groups...
wedrifid · 3 points · 11y
I am saying that it is more likely that you are inflating the phrase "high status" to include things that are somewhat low status but overall socially rewarding, than that your subculture is stretched quite that far in that (unsustainable) direction.
handoflixue · −1 points · 11y
How would "I don't know" being high status be unsustainable? For that matter, what distinction are you drawing between high status and socially rewarding?
wedrifid · 7 points · 11y
Yes, "high status" being the inflated does seem to be the crux of the matter. Socially rewarding behaviors that, ceritus paribus are low status. * Saying "please" or "thankyou". * Listening to what someone is saying. Even more if you deign to comprehend and accept their point. * Saluting. * Doing what someone asks. * Using careful expression to ensure you don't offend people.
handoflixue · 0 points · 11y
My general experience has been that "I don't know, but I'll find out", said to someone currently equal or lower status than me, clearly but mildly correlates with most of the low status behavior you mentioned. I'm not as sure how it affects people higher status than me, since I don't have as many of those relationships / data points. So I continue my assertion that, yes, it's high status, not merely socially rewarding. I still suspect this is a weird and unusual set of experiences, and probably has to do with how I position "I don't know" relative to others.
DaFranker · 0 points · 11y
In some circles, perceived signal usefulness is a causal factor towards the signal's status-level. To unbox the above: In some groups I've been with, sending compressed signals that everyone in the group understands is a high-status signal, regardless of whether it's a "low-status" or "high-status" signal in other environments. "Hey, I have an idea but I'm not quite sure how to go about putting it in practice" is a very low status signal in meatspace for all meatspaces I've been in except one, but a very high status signal in e.g. certain online hacking communities. Likewise for the case at hand, there are places where "I don't know" can even be the highest status signal. For the most memorable example, I've once visited a church where the people at the top were answering "I don't know" to the most questions, signaling their closeness to divinity implicitly, while the "simpletons" at the bottom of the ladder had an opinion on everything, and thus would never "not know".
1CAE_Jones11y
I've had people tell me to taboo "I don't know" because I use it so much. These being fairly average or slightly above average people who are annoyed that I don't have a strong opinion about things like "what do you want to eat tonight?" Some have made jokes about putting "I don't know" on my tombstone. Assuming that I die and am later resurrected and discover this was actually done, I will be most displeased.
2handoflixue11y
I usually interpret that context as "I don't have a preference", which I would readily agree is useful to taboo. If you genuinely don't know what you want (despite having an apparent hidden but strong preference) then ... that's a new one on me ^^;
0TheAncientGeek10y
Toss a mental coin and pretend to enthuse about the result?
0btoblake10y
Before declining to offer an opinion, it's worth considering whether you'd benefit from the decision being made. (For instance, you could get a prompt dinner.) If so, why not offer a little help? Decision-making can be tiring work, and any input can make it easier. You could: * Mention any limiting factors (e.g. "I have $20" or "I have 1 hour"). * Mention options that are convenient. * Offer support to the person who makes the decision (particularly if you can avoid critiquing their choice).
3aelephant11y
Good one. I try to be very conservative with my language & preface everything I say with something that implies an amount of uncertainty. There might be cultural differences. In China people will give you directions on the street even if they have no idea. I have yet to have someone reply to a request for help with "I don't know". It seems like an ego-protection thing to me & it isn't helpful.

The example about stacks in 1.2 has a certain irony in context. This requires a small mathematical parenthesis:

A stack is a certain sophisticated type of geometric structure which is increasingly used in algebraic geometry, algebraic topology (and spreading to some corners of differential geometry) to make sense of geometric intuitions and notions on "spaces" which occur "naturally" but are squarely out of the traditional geometric categories (like manifolds, schemes, etc.).

See www.ams.org/notices/200304/what-is.pdf for a very short introduction focusing on the basic example of the moduli of elliptic curves.

The upshot of this vague outlook is that in the relevant fields, everything of interest is a stack (or a more exotic beast like a derived stack), precisely because the notion has been designed to be as general and flexible as possible! So asking someone working on stacks for a good example of something which is not a stack is bound to create a short moment of confusion.

Even if you do not care for stacks (and I wouldn't hold it against you), if you are interested in open-source/Internet-based scientific projects, it is worth having a look at the web page of the Stacks project (http://stacks.math.columbia.edu/), a collaborative, fully hyperlinked textbook on the topic, which is steadily growing towards the 3,500-page mark.

he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))

[Edit] But his utility function would predictably change under those circumstances.

I know that I have a status quo bias, hedonic treadmill, and strongly decreasing marginal utility of money (particularly when progressive taxation is factored in).

If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now (roughly the factor described in the OP), I'd also be pretty much as happy as I am now, and want more money.

The logical conclusion is that we should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.

If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now, I'd also be pretty much as happy as I am now, and want more money.

You're burying your argument in the constants 'pretty much' there. You can repeat your argument sorites-style after you have taken the 2/3 salary cut: "Well, if I made 2/3 what I do now, I'd still be 'pretty much as happy' as I am now" and so on and so forth until you have hit sub-poverty wages.

To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7; do you really think if someone handed you a billion dollars and you filled your world-famous days competing with Musk to reach Mars or something insanely awesome like that, you would only be twice as happy as when you were a low-status scrub-monkey making 50k?

(particularly when progressive taxation is factored in).

Here again more work is necessary. One of the chief suggestions of positive psychology is donating more and buying more fuzzies... and guess what is favored by progressive taxation? Donating.

The logical conclusion is that I should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.

Of course there are people who are surely making the mistake of over-valuing salaries; but you're going to need to do more work to show you're one of them.

To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7

Comparing these numbers tells you pretty much nothing. First of all, taking log($50k) is not a valid operation; you should only ever take logs of a dimensionless quantity. The standard solution is to pick an arbitrary dollar value $X, and compare log($50k/$X), log($120k/$X), and log($10^9/$X). This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.

This shouldn't be a surprise, because under the standard definition, utility functions are translation-invariant. They are only compared in cases such as "is U1 better than U2?" or "is U1 better than a 50/50 chance of U2 and U3?" The answer to this question doesn't change if we add a constant to U1, U2, and U3.

In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?

It would make sense to say, if your utility for money is logarithmic and you currently have $50k, that you're indifferent between a 100% chance of an extra $70k and an 8.8% chance of an extra $10^9 -- that being the probability for which the expected utilities are the same. If you think logarithmic utilities are bad, this is the claim you should be refuting.
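For concreteness, the arithmetic behind that 8.8% figure can be sketched in a few lines of Python (assuming, as in the comment, natural-log utility of total wealth with $50k on hand; the log base and any additive constant cancel out of the ratio):

```python
import math

# Checking the indifference claim: current wealth $50k, U($) = ln($).
wealth = 50_000
u = math.log

# Solve u(wealth + 70k) = p * u(wealth + 1e9) + (1 - p) * u(wealth) for p:
p = (u(wealth + 70_000) - u(wealth)) / (u(wealth + 10**9) - u(wealth))
print(round(p, 3))  # 0.088 -- the 8.8% chance quoted above
```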

9jmmcd11y
Goddammit, I have a degree in mathematics and no-one ever told me that, and I never figured it out for myself. I see the beginnings of an explanation here [http://physics.stackexchange.com/questions/7668/fundamental-question-about-dimensional-analysis]. Any pointer to a better explanation?

Taking logs of a dimensionful quantity is possible, if you know what you're doing. (In math, we make up our own rules: no one is allowed to tell us what we can and cannot do. Whether or not it's useful is another question.) Here's the real scoop:

In physics, we only really and truly care about dimensionless quantities. These are the quantities which do not change when we change the system of units, i.e. they are "invariant". Anything which is not invariant is a purely arbitrary human convention, which doesn't really tell me anything about the world. For example, if I want to know if I fit through a door, I'm only interested in the ratio between my height and the height of the door. I don't really care about how the door compares to some standard meter somewhere, except as an intermediate step in some calculation.

Nevertheless, for practical purposes it is convenient to also consider quantities which transform in a particularly simple way under a change of units systems. Borrowing some terminology from general relativity, we can say that a quantity X is "covariant" if it transforms like X --> (unit1 / unit2 )^p X when we change from unit1 to unit2. Here... (read more)

2Eliezer Yudkowsky11y
I think it'd be obvious how to take the log of a dimensional quantity. e^(log apple) = apple

Right, but then log (2 apple) = log 2 + log apple and so forth. This is a perfectly sensible way to think about things as long as you (not you specifically, but the general you) remember that "log apple" transforms additively instead of multiplicatively under a change of coordinates.

0[anonymous]11y
Isn't the argument to a sine by default a quantity of angle, that is Radians in SI? (I know radians are epiphenomenal/w/e, but still)
0Richard_Kennaway11y
Machine learning methods will go right ahead and apply whatever collection of functions they're given in whatever way works to get empirically accurate predictions from the data. E.g. add the patient's temperature to their pulse rate and divide by the cotangent of their age in decades, or whatever. So it can certainly be useful. Whether it is meaningful is another matter, and touches on this conundrum again. What and whence is "understanding" in an AGI? Eliezer wrote somewhere about hypothetically being able to deduce special relativity from seeing an apple fall. What sort of mechanism could do that? Where might it get the idea that adding temperature to pulse may be useful for making empirical predictions, but useless for "understanding what is happening", and what does that quoted phrase mean, in terms that one could program into an AGI?
0shminux11y
"units are a useful error-checking homomorphism"
3Qiaochu_Yuan11y
I don't think "homomorphism" is quite the right word here. Keeping track of units means keeping track of various scaling actions on the things you're interested in; in other words, it means keeping track of certain symmetries. The reason you can use this for error-checking is that if two things are equal, then any relevant symmetries have to act on them in the same way. But the units themselves aren't a homomorphism, they're just a shorthand to indicate that you're working with things that transform in some nontrivial way under some symmetry.
0shminux11y
The map from dimensional quantities to units is structure-preserving, so yes, it is a homomorphism between something like rings. For example, all distances in SI are mapped into the element "meter", and all time intervals into the element "second". Addition and subtraction is trivial under the map (e.g. m+m=m), and so is multiplication by a dimensionless quantity, while multiplication and division by a dimensional quantity generates new elements (e.g. meter per second). Converting between different measurement systems (e.g. SI and CGS) adds various scale factors, thus enlarging the codomain of the map.
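That error-checking map can be made concrete with a toy sketch (this `Q` class is purely illustrative, not a real units library; dimensions are tracked as (length, time) exponent tuples):

```python
# Toy sketch only: addition demands matching dimensions ("m + m = m"), while
# multiplication adds exponents, so multiplying by an inverse time generates
# a new element ("meter per second").
class Q:
    def __init__(self, value, dim):
        self.value, self.dim = value, dim

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError("cannot add quantities with different dimensions")
        return Q(self.value + other.value, self.dim)

    def __mul__(self, other):
        return Q(self.value * other.value,
                 tuple(a + b for a, b in zip(self.dim, other.dim)))

distance = Q(3.0, (1, 0)) + Q(4.0, (1, 0))    # fine: m + m = m
velocity = Q(10.0, (1, 0)) * Q(0.5, (0, -1))  # m * s^-1: a new element
print(velocity.value, velocity.dim)           # 5.0 (1, -1)
```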
4Kindly11y
I don't know of any good explanations; this seems relevant but requires a subscription to access. Unfortunately, no-one's ever explained this to me either, so I've had to figure it out by myself. What I'd add to the discussion you linked to is that in actual practice, logarithms appear in equations with units in them when you solve differential equations, and ultimately when you take integrals. In the simplest case, when we're integrating 1/x, x can have any units whatsoever. However, if you have bounds A and B, you'll get log(B) - log(A), which can be rewritten as log(B/A). There's no way A and B can have different units, so B/A will be dimensionless. Of course, often people are sloppy and will just keep doing things with log(B) and log(A), even though these don't make sense by themselves. This is perfectly all right because the logs will have to cancel eventually. In fact, at this point, it's even okay to drop the units on A and B, because log(10 ft) - log(5 ft) and log(10 m) - log(5 m) represent the same quantity.
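The integration point above can be checked numerically: log(B) - log(A) equals log(B/A), so the difference doesn't care what units A and B carry. A small sketch (the feet-to-meters factor is an assumption of the sketch, not part of the comment):

```python
import math

# log(B) - log(A) = log(B/A): the units of A and B cancel, so the difference
# is the same whether the lengths are expressed in feet or in meters.
FT = 0.3048  # one foot, in meters
diff_in_meters = math.log(10 * FT) - math.log(5 * FT)
diff_in_feet = math.log(10) - math.log(5)
print(math.isclose(diff_in_meters, diff_in_feet))  # True: both are log(2)
```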
0satt11y
Most of that paper is the authors rebutting what other people have said about the issue, but there are two bits that try to explain why one can't take logs of dimensional things. Page 68 notes that b^y = x, which "precludes the association of any physical dimension to any of the three variables b, x, and y". And on pages 69-70: That second snippet is too vague for me. But I'm still thinking about the first one. [Edited to fix the LaTeX.]
0KnaveOfAllTrades11y
The (say) real sine function is defined such that its domain and codomain are (subsets of) the reals. The reals are usually characterized as the complete ordered field. I have never come across units that--taken alone--satisfy the axioms of a complete ordered field, and having several units introduces problems such as how we would impose a meaningful order. So a sine function over unit-ed quantities is sufficiently non-obvious as to require a clarification of what would be meant by sin($1). For example--switching over now to logarithms--if we treat $1 as the real multiplicative identity (i.e. the real number, unity) unit-multiplied by the unit $, and extrapolate one of the fundamental properties of logarithms--that log(ab)=loga+logb, we find that log($1)=log($)+log(1)=log($) (assuming we keep that log(1)=0). How are we to interpret log($)? Moreover, log($^2)=2log($). So if I log the square of a dollar, I obtain twice the log of a dollar. How are we to interpret this in the above context of utility? Or an example from trigonometric functions: One characterization of the cosine and sine stipulates that cos^2+sin^2=1, so we would have that cos^2($1)+sin^2($1)=1. If this is the real unity, does this mean that the cosine function on dollars outputs a real number? Or if the RHS is $1, does this mean that the cosine function on dollars outputs a dollar^(1/2) value? Then consider that double, triple, etc. angles in the standard cosine function can be written as polynomials in the single-angle cosine. How would this translate? So this is a case where the 'burden of meaningfulness' lies with proposing a meaningful interpretation (which now seems rather difficult), even though at first it seems obvious that there is a single reasonable way forward. The context of the functions needs to be considered; the sine function originated with plane geometry and was extended to the reals and then the complex numbers. Each of these was motivated by an (analytic) continuation into a bigg
1A1987dM11y
You pick an arbitrary constant A of dimension "amount of money", and use log(x/A) as an utility function. Changing A amounts to adding a constant to the utility (and changing the base of the logarithms amounts to multiplying it by a constant), which doesn't affect expected utility maximization. EDIT: And once it's clear that the choice of A is immaterial, you can abuse notation and just write “log(x)”, as Kindly says.
3shminux11y
You can only add, subtract and compare like quantities, but log(50000*1dollar)=log(50000)+log(1 dollar), which is a meaningless expression. What's the logarithm of a dollar?
0A1987dM11y
An arbitrary additive constant. See the last paragraph of Kindly's comment.
-2Thomas11y
What do you need to "exponate" to get a dollar? That, whatever that might be, is the logarithm of a dollar.
-3jmmcd11y
Well, we could choose to factorise it as log(50000 dollars) = log(50000 dollar^0.5 * 1 dollar^0.5) = log(50000 dollar^0.5) + log(1 dollar^0.5). That does keep the units of the addition operands the same. Now we only have to figure out what the log of a root-dollar is... It's really just the same question again -- why can't I write log(1 dollar) = 0 (or maybe 0 dollar^0.5), the same as I would write log(1) = 0.
0satt11y
$1 = 100¢. Now try logging both sides by stripping off the currency units first!
1gwern11y
This is what I did, without the pedantry of the C. I don't follow at all. How can utilities not be comparable in terms of multiplication? This falls out pretty much exactly from your classic cardinal utility function! You seem to be assuming ordinal utilities, but I don't see why you would attribute to me something I did not draw on and would not accept.
3Kindly11y
The point is that because the constant is there, saying that utility grows logarithmically in money underspecifies the actual function. By ignoring C, you are implicitly using $1 as a point of comparison. A generous interpretation of your claim would be to say that to someone who currently only has $1, having a billion dollars is twice as good as having $50000 -- in the sense, for example, that a 50% chance of the former is just as good as a 100% chance of the latter. This doesn't seem outright implausible (having $50000 means you jump from "starving in the street" to "being more financially secure than I currently am", which solves a lot of the problems that the $1 person has). However, it's also irrelevant to someone who is guaranteed $50000 in all outcomes under consideration.
1gwern11y
Then how do you suggest the person under discussion evaluate their working patterns if log utilities are only useful for expected values?
6Kindly11y
By comparing changes in utility as opposed to absolute values. To the person with $50000, a change to $70000 would have a log utility of 0.336, and a change to $1 billion would have a log utility of 9.903. A change to $1 would have a log utility of -10.819.
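Those figures can be reproduced directly (natural log, each change measured relative to the current $50,000):

```python
import math

# With U = ln($), only *changes* relative to current wealth are meaningful,
# not the absolute utilities.
wealth = 50_000
print(round(math.log(70_000 / wealth), 3))         # 0.336
print(round(math.log(1_000_000_000 / wealth), 3))  # 9.903
print(round(math.log(1 / wealth), 3))              # -10.82
```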
1gwern11y
I see, thanks.
1The_Duck11y
"The utility of A is twice the utility of B" is not a statement that remains true if we add the same constant to both utilities, so it's not an obviously meaningful statement. We can make the ratio come out however we want by performing an overall shift of the utility function. The fact that we think of utilities as cardinal numbers doesn't mean we assign any meaning to ratios of utilities. But it seemed that you were trying to say that a person with a logarithmic utility function assesses $10^9 as having twice the utility of $50k.
0gwern11y
Kindly says the ratios do have relevance to considering bets or risks. Yes, I think I see my error now, but I think the force of the numbers is clear: log utility in money may be more extreme than most people would intuitively expect.
1A1987dM11y
This is what I immediately thought when I first read about the Repugnant Conclusion on Wikipedia, years ago before having ever heard of the VNM axioms or anything like that.
9Kawoomba11y
Only twice as? Adaptation level theory suggests that both contrast and habituation will operate to prevent the winning of a fortune from elevating happiness as much as might be expected. ... As predicted, lottery winners were not happier than controls It's a well replicated phenomenon.
3gwern11y
Lottery-winners are self-selected for a number of things including innumeracy or foolishness and not having grand projects materially advanced by winnings, and the famous lottery winner examples are for relatively small sums as far as I know - most of the winners in that paper were $400k or less at a time of higher tax rates, with a serious selection issue there as well (less than half of the winners interviewed).
2A1987dM11y
You don't get to decide where most of your tax money goes, which I guess means that for a large fraction of people taxes don't count as fuzzy-buying donations.
4scav11y
Which is a failure mode of most people's thinking about taxes. Most of your tax money goes to boring things you don't want to concern yourself with and which you don't have any expertise in, such that you deciding exactly where the money went would be disastrous. Someone with the required expertise is doing their best to make sure the limited available money is spent carefully on those things, in most cases. I like to think that in general, taxes are my subscription fee for living in a civilisation rather than a feudal plutocracy. There are some specific things my taxes are spent on that I actively resent, but the response to that is to oppose those specific things, and I accept democracy and debate as the means to (slowly and unreliably) improve the situation.
7A1987dM11y
I think of taxes as a “subscription fee for living in a civilisation”, too, but I think you're overestimating how useful what most of the tax money is spent on is to most of the population and underestimating the extent to which present-day First World countries are plutocracies.
0scav11y
Well, neither of us has quantified our estimates for the usefulness of government spending, or broken it down by sector or demographics. So, how much am I overestimating it, and in what specific ways? :) I live in Scotland. I consider it to be a civilised country mostly. It has good free education and health care, and businesses are regulated as to employment law, health and safety, and environmental impact. I don't claim more expertise in how all that gets arranged than the people who arrange it, and I would be sceptical if you did, without seeing evidence. The civilisation of the USA has some existential risk for feudal plutocracy, but I think it narrowly avoided one of the risk factors this week and I hold out some hope for steady improvement if it can stop shitting its pants over imaginary terrorist threats and start taking human rights seriously again. But even if I'm wrong about that, I never said that taxes were sufficient to prevent social breakdown. Just necessary.
0A1987dM11y
I'm not questioning their expertise, I'm questioning their goals. I usually try to apply Hanlon's razor to single individuals, but I'm reluctant to apply it to entire governments. I'm pretty sure that spending on defence an amount comparable to (or, in certain countries, even greater than) that spent on research has a point, I just don't think it's to benefit most of the population. In terms of what he's actually done, as opposed to what he says, Obama's economic policy isn't that different to Republicans'. Or do “issues like peace, immigration, gay and women's rights, prayers in school”¹ (to quote the article linked) suffice to make a government not count as a plutocracy? Anyway, how much have you heard about lobbying, associations such as the Bilderberg Group or the Trilateral Commission, etc.? (Unfortunately, the people who talk about those things also tend to spew out lots of nonsense about Reptilians and whatnot, but I have my own hypothesis about why they do that.) ---------------------------------------- 1. When I posted that article on Facebook, the only comment was from a gay friend of mine pointing out that with one president gay rights would go back to the 1800s and with the other they might be allowed to marry.
4scav11y
This is wandering away from the topic a bit. I doubt anyone could make a good case for any of: * taxes are inherently harmful and always misspent * taxes are always spent wisely * there exists any political system under which immensely rich people couldn't wield a lot of political power to try to further enrich themselves. * the immensely rich bother to conspire for any other purpose or actually care about politics much beyond what it can get them personally * there is literally nothing a democratically elected government can or will do to limit the political power of the immensely rich in any way.
6MugaSofer11y
Sure there does. A military dictatorship, for one.
4scav11y
Name one where the dictator and his cronies were not also embezzling the wealth of the country and living it up with their rich buddies. That's what they grab power for. Even if the guy at the top has ideological principles that forbid such behaviour (rare) and isn't a hypocrite about them (super rare), there is always someone high up in the hierarchy who is in the market for favours, and due to the nature of a dictatorial hierarchy, essentially untouchable.

You're describing a situation in which politically powerful people become rich, not one in which rich people become politically powerful.

-7scav11y
2FAWS11y
Do you have an example of a military dictatorship where the immensely rich were allowed to keep their wealth, but couldn't use it to exert political influence?
2MugaSofer11y
Well, no. Not offhand, anyway. But people can become rich after the revolution, and I can't think of any examples of people gaining "a lot of political power to try to further enrich themselves" this way. Of course, those who already have such power (due to corruption or whatever) do tend to use it to acquire wealth... EDIT: Put much better here.
0A1987dM11y
I ADBOC with the negation of those statements (provided “there exists” in the third one means “there has existed so far” rather than “there could ever exist in principle”).
0gwern11y
That wasn't what I meant to imply.
0CarlShulman11y
Ln $100 is 4.6, at which point it's doubtful that you can survive.
0gwern11y
Ah, but suppose subsistence wages plummeted as in Hanson's em hell scenario? Ln $100 merely shows that 'the poor also smile' and the utility-maximizing thing is quadrillions of impoverished minds!
3CarlShulman11y
If we continue to use Utility=ln($) then utilities go infinitely negative as you approach zero :).
2johnlawrenceaspden11y
Allowing us to refute the repugnant conclusion. Quadrillions of minds with $(1+ε) each. We should start a campaign to use very large currency units in preparation for the Singularity.
0Vaniver11y
Sort of? I mean, the primary work here is being done by the deduction of charity donations by income. Progressive taxation helps in that charitable donations are cheaper the richer you are (each dollar given away only costs 70 cents, instead of 100 if there were no deduction / you were paying no income taxes), but that's shaping the incentive, not making it.
0JoshuaFox11y
Sure, that's why I said 2/3 and 3/2 rather than more significant multipliers. Also: Sometimes you settle yourself into a local maximum, and even if it is not a global maximum, not switching may be OK if the local is not too much lower than the global maximum. Yes, I agree that using your tax deduction gives an extra boost to donating.
8JoshuaFox11y
I realized that what bothers me is the neglect of utility-function differences in the counterfactual world. Should you start using heroin? Let's try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing your decision. If you were a heroin addict, and had lost everything, and heroin were your only friend and consolation, would you want to stop? Maybe not. So go ahead, shoot up. If, despite your deep desire to go into classical music as a career (which in real life you did, to your great satisfaction), you had followed the money into the financial sector, and after years of 80-hour weeks, had sunk into cynicism and no longer cared for anything but making more money to support your extravagant spending habits, would you then want to leave the financial industry for a life of music and a modest income? Probably not, so go ahead, follow the money, burn out your soul, and buy yourself a Porsche.
2handoflixue11y
I have trouble believing that in those situations, I'd actually prefer to be that sort of rock-bottom, burnt-out person rather than thinking "I wish I'd made different choices when I was 20, oh foolish foolish me." Having been in some rather bad situations, I've never once thought "Gosh, this is so much better than if I'd had a successful, high-paying, yet enjoyable career!"
0Omegaile11y
This method of reducing bias only works for rational decisions using your current utility. Otherwise you will be prone to circular decisions like those you describe (decisions that feed themselves).
4NancyLebovitz11y
Shouldn't we include the costs of moving? Even if the social costs are held as negligible (they probably shouldn't be), there's the time spent and the monetary costs of moving.
0katydee11y
Yes, but money isn't just about being happy.
5JoshuaFox11y
Sure, one of the things I most like about having more money is being able to donate more. However, the main consideration of her brother and others in these circumstances is, I strongly suspect, not maximizing their donation capacity, but rather a more generic personal utility calculation.

Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had

The idea that willpower or thinking depletes brain glucose has been debunked:

http://www.psychologytoday.com/blog/ulterior-motives/201211/is-willpower-energy-or-motivation http://lesswrong.com/r/discussion/lw/ej7/link_motivational_versus_metabolic_effects_of/

Nevertheless, the suggestion of sweets will still work, per your own links. A nice example of how revised theories remain consistent with old observations...

0John_Maxwell11y
Supposedly gargling sugary lemonade works: http://www.forbes.com/sites/daviddisalvo/2012/11/08/need-a-self-control-boost-gargle-with-sugar-water/ Edit: sorry, this is redundant w/ roland's links.
0aelephant11y
I missed this somehow. Thanks for posting the links.

I put the checklist into an Anki deck a week or two ago that I've been reviewing (as cloze deletions). Subjectively it seems to have helped the relevant concepts come more readily to mind, although that could just be the CFAR workshop (though we didn't talk about the checklist then and some of the ideas in the checklist, like social commitment mechanisms, weren't otherwise explicitly mentioned).

2Pablo11y
Would you mind sharing this deck? It would be a nice addition to the Anki decks by LW users.
0Qiaochu_Yuan11y
I admit I'm not entirely sure how to share a deck.
4Pablo11y
Ah, you are not the first! This comment by tgb taught me how to do it. (I'm assuming you are using Anki 2.)
2Qiaochu_Yuan11y
Cool. Here it is!
0Pablo11y
Thanks. The deck is now listed.

This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep. OTOH I have a few quibbles with some examples:

Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, esp

... (read more)

my mother told me “you should call [your friend who's there] and ask him if he's all right”, and I answered “there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in...”; my mother called me heartless before I even finished the sentence.

Your math is right but your mother has the right interpretation of the situation. If your friend is dead, calling him does neither of you any good! This is a 29,999 out of 30,000 chance to earn brownie points.

5DaFranker11y
A different approach might be to do the math on how likely it is that someone the friend knows was involved in the incident. Or maybe just call to discuss the possible repercussions and the probable overreactions that the local government will have. However, for most of my own friends, if I did call them in exactly such a situation, they'd tell me almost exactly what army1987 said to their mother. Unless they happened to be dead or lost a friend to the event or something.

Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren't likely to respond badly, I want to believe they aren't likely to respond badly. What is true is already so; owning up to it doesn't make it worse. The solution to that problem is to think twice, re-read the email, and think about ways to make it less likely to be interpreted in an unintended way before hitting Send.

The thing is, it seems quite clear that the problem wasn't how likely they were to respond badly, but that Anna (?) would visualize and anticipate the negative response beforehand, based on no evidence that they would respond poorly, simply as a programmed mental habit. This would create a vicious circle where the negatives from past times make it even more likely that it feels bad this time, regardless of the actual reactions.

The tactic of smiling reinforces the action of sending emails instead of terrorizing yourself into never sending emails anymore (which I infer from context would be a bad thing), and once you're rid of the looming vicious circle you can then base your predictions of the reaction on the content of the email, rather than have it be predetermined by your own feelings.

(Obligatory nitpicker's note: I agree with pretty much everything you said, I just didn't think that the real event in that example had a bad decision as you seemed to imply.)

9apophenia11y
Interesting you should say that. About a week ago I simplified this into a more literal checklist designed to be used as part of a nightly wind-down, to see if it could maintain or instill habits. I designed the checklist based largely on empirical results from NASA's review of the factors for effectiveness of pre-flight safety checklists used by pilots, although I chased down a number of other checklist-related resources. I'm currently actively testing effects on myself and others, both trying to test to make sure it would actually be used, and getting the time down to the minimum possible (it's hovering around two minutes). P.S. I'm not associated with CFAR but the checklist is an experiment on their request. If you were to test your suggestion for two weeks, I would be interested to hear the results. My prediction (with 80% certainty) is: Lbh jvyy trg cbfvgvir erfhygf sbe n avtug be gjb. Jvguva gra qnlf, lbh jvyy svaq gur yvfg nirefvir / gbb zhpu jbex naq fgbc ernqvat vg, ortva gb tynapr bire vg jvgubhg cebprffvat nalguvat, be npgviryl fgbc gb svk bar bs gur nobir ceboyrzf. (Gur nezl anzr znxrf zr yrff pregnva guna hfhny--zl fgrerbglcr fnlf lbh znl or oberq naq/be qvfpvcyvarq.)
4Metus11y
Can you point us to the more interesting checklist resources?
1apophenia11y
Absolutely. I can give better resources if you can be more specific as to what you're looking for. I recommend The Checklist Manifesto first as an overview, as well as a basic understanding of akrasia, and trying and failing to make and use some checklists yourself. The resources I spent most of my time with were very specific to what I was working on, and so I wouldn't recommend them. However, just in case someone finds it useful, Human Factors of Flight-Deck Checklists: The Normal Checklist draws attention to some common failure modes of checklists outside the checklist itself.
0A1987dM11y
That's indeed what happened. That's just a hypocorism for my first name. I have never been in the armed forces. (I regret picking this nickname because it has generated confusion several times, but I've used it on the Internet ever since I was 12 and I'm kind of used to it.)
0A1987dM11y
This sounds interesting. I wasn't entirely serious, but I'm going to do this for real now. (I haven't decoded the rot13ed part, of course.)
2BrassLion11y
You have the right conclusion but the wrong reason. Most people would appreciate being thought of in a disaster, so calling him if he's alive would be good - except that the phone networks, particularly cell networks, tend to be crippled by overuse in sudden disasters. Staying off the phones if you don't need to make a call helps with this.

It's much less pretty than the PDF, but if anyone else wants a spreadsheet with write-in-able blanks, I have made a Google doc.

I have read this post and have not been persuaded that people who follow these steps will lead longer or happier lives (or will cause others to live longer or happier lives). I therefore will make no conscious effort to pay much of any regard to this post, though it is plausible it will have at least a small unconscious effect. I am posting this to fight groupthink and sampling biases, though this post actually does very little against them.

2Swimmer963 (Miranda Dixon-Luinenburg) 11y
Longer? Probably not. Happier? Possible, depending on that person's baseline, since we don't know our own desires and acquiring these skills might help, but given the hedonic treadmill effect, unlikely. Achieving more of their interim goals? Possible if not probable. There are a lot of possible goals aside from living longer and being happier.
3aceofspades11y
I have decided that maximizing the integral of happiness with respect to time is my selfish supergoal and that maximizing the double integral of happiness with respect to time and with respect to number of people is my altruistic supergoal. All other goals are only relevant insofar as they affect the supergoals. I have yet to be convinced this is a bad system, though previous experience suggests I probably will make modifications at some point. I also need to decide what weight to place on the selfish/altruistic components. But despite my finding such an abstract way of characterizing my actions interesting, the actual determining of the weights and the actual function I'm maximizing are just determined by what I actually end up doing. In fact constructing this abstract system does not seem to convincingly help me further its purported goal, and I therefore cease all serious conversation about it.
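As a rough formalization of the two supergoals described above (with h for momentary happiness; the notation is supplied here, not the commenter's):

```latex
% Selfish supergoal: integrate happiness over one's own lifetime
U_{\text{self}} = \int_{t_0}^{t_1} h(t)\,dt
% Altruistic supergoal: integrate over time and over the set of people P
U_{\text{alt}} = \int_{t_0}^{t_1} \int_{P} h(p,t)\,dp\,dt
```

The comment leaves the relative weighting of the two terms, and the function h itself, unspecified.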
5Swimmer963 (Miranda Dixon-Luinenburg) 11y
I think this is a common problem. That doesn't mean you have to give up on having your second-order desires agree with your first-order desires. It is possible to use your abstract models to change your day-to-day behaviour, and it's definitely possible to build a more accurate model of yourself and then use that model to make yourself do the things you endorse yourself doing (i.e. avoiding having to use willpower by making what you want to want to do the "default").

As for me, I've decided that happiness is too elusive a goal–I'm bad at predicting what will make me happier-than-baseline, the process of explicitly pursuing happiness seems to make it harder to achieve, and the hedonic treadmill effect means that even if I did, I would have to keep working at it constantly to stay in the same place. Instead, I default to a number of proxy measures: I want to be physically fit, so I endorse myself exercising and preferably enjoying exercise; I want to have enough money to satisfy my needs; I want to finish school with good grades; I want to read interesting books; I want to have a social life; I want to be a good friend. Taken all together, these are at least the building blocks of happiness, which happens by itself unless my brain chemistry gets too whacked out.
0aceofspades11y
So the normal chain of events here would just be that I argue those are still all subgoals of increasing happiness and we would go back and forth about that. But this is just arguing by definition, so I won't continue along that line. To the extent I understand the first paragraph in terms of what it actually says at the level of real-world experience, I have never seen evidence supporting its truth. The second paragraph seems to say what I intended the second paragraph of my previous comment to mean. So really it doesn't seem that we disagree about anything important.
0Swimmer963 (Miranda Dixon-Luinenburg) 11y
Agreed. I find it practical to define my goals as all of those subgoals and not make happiness an explicit node, because it's easy to evaluate my subgoals and measure how well I'm achieving them. But maybe you find it simpler to have only one mental construct, "happiness", instead of lots. I guess I explicitly don't allow myself to have abstract systems with no measurable components and/or clear practical implications–my concrete goals take up enough mental space. So my automatic reaction was "you're doing it wrong," but it's possible that having an unconnected mental system doesn't sabotage your motivation the same way it does mine. Also, "what I actually end up doing" doesn't, to me, have the connotation of "choosing and achieving subgoals"; it has the connotation of not having goals. But it sounds like that's not what it means to you.
2chaosmosis11y
I would argue that the altruism should be part of the selfish utility function. The reason that you care about other people is because you value other people. If you did not value other people there is no reason they should be in your utility function.
1wedrifid11y
Excellent! This nuance of what "selfish" means is something I find myself reiterating all too frequently. (Where the latter means I've done it at least three times that I can recall.)
-5aceofspades11y

Thanks for posting this. I always enjoy these "in-practice" oriented posts, as I feel they help me check if I truly understand the concepts I learn here, in a similar way that example problems in textbooks check if I know how to correctly apply the material I just read.

I would be interested in an updated checklist. This seems potentially quite useful for a single post.

4Raemon6y
I'm not 100% sure how different it is, but CFAR's website has what is presumably the most up to date version.

There are some good ideas here that I can pick up on. Among the things that I already successfully implement, it may sound stupid, but I think of my different brain modules as different people, and have different names for them. That way I can compliment or admonish them without thinking, "Oh..kay, I'm talking to myself?" That makes it easier to remember that I'm not the only one reacting and making the sole decisions, but avoids turning everything into similar-sounding entities (me, myself, I, my brain, my mind, etc.) Example: This morning, I ke...

2aleksiL11y
Interesting, I've occasionally experimented with something similar but never thought of contacting Autopilot this way. Yeah, that's what I'll call him. I get the feeling that this might be useful in breaking out of some of my procrastination patterns: just call Autopilot and tell him which routine to start. Not tested yet, as then I'd forget about writing this reply.
1MaoShan11y
It's as if your own body is a guy that does his job if you train him right, but makes stupid decisions when something unexpected happens. I just take a more literal approach with the interaction. I also refer to him as "my answering machine" when I am woken up in the middle of the night. It took my wife a while to realize that the person she was talking to was "not me". My answering machine can make perfectly normal-sounding replies to normal questions, but is unable to come up with creative answers to unusual questions, and I have no memory of the events. Another unnamed, possibly separate module runs when my body is alarmed, but I am not yet conscious. It constantly asks for data, verbally questioning other humans nearby, "What is happening? What is going on? What time is it?" Unlike situations with the answering machine, I retain conscious memory of the occurrence, but not from a first-person perspective, more like I remember somebody telling me about what happened, but in this case that person was (allegedly) me.
1Michelle_Z11y
Funny. I do something similar, except I call mine "Planner," "Want," "Bum," and "Cynic." I never really considered my autopilot mode anything in particular. Usually I just do this when I am struggling with motivation, and usually those four concepts are the main issue: planning to do something, then wanting to do something else, feeling like not doing anything, and realizing I'm not going to do it so why bother anyway... and reminding myself that they're learned habits and I can get rid of them if I bring in new habits.
0Kenny9y
This is basically the Internal Family Systems Model, though its focus is therapy, i.e. improving dysfunctional behavior. But your point about regularly communicating with your various 'parts' seems like a really good idea. How well have you maintained this as a habit since your comment?

I'm currently trying to evaluate how to adjust some of these for problems related to mental illness. For example, 4.3:

If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".)

Whenever I taboo words, I start developing pressured speech, and begin mumbling the tabooed words subconsciously...

2aelephant11y
Is your mental illness being treated? Are you seeing someone trained & experienced in managing mental illness? I would put much, much more emphasis on getting to a place where you aren't self-harming than on trying to develop rationality habits, especially if the latter seems to be interfering with the former.
5ialdabaoth11y
No, because I'm currently not good at keeping a job, and equally not good at navigating the bureaucracies necessary to suckle on the government's teat. "Getting to a place where I'm not self-harming" is a nice pipe dream, but as it is, we optimise for those goals which we can actually stand a reasonable chance of accomplishing. Put another way: let P_t(n) be my probability of getting into therapy after expending n units of resource on getting into therapy, and U_t(n) the utility of therapy after spending those n units; likewise let P_r(n) and U_r(n) be the probability and utility of becoming more rational after spending n units on that. If I only have n resource units available, and P_t(n)·U_t(n) < P_r(n)·U_r(n), then I know what to spend those n units on, no matter how much P_t(n+δ)·U_t(n+δ) > P_r(n+δ)·U_r(n+δ), because I don't have that extra δ worth of resource units. Sometimes poor people make what look like bad choices from the outside because it's the best choice they have.
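A minimal sketch of the expected-utility comparison in this comment, with made-up probabilities and payoffs purely for illustration (none appear in the original):

```python
# Compare two uses of the same n resource units by expected utility.
# All probabilities and payoffs here are hypothetical.
def expected_utility(p: float, u: float) -> float:
    """Success probability times payoff if successful."""
    return p * u

# At the n units actually available, therapy is a long shot with a
# big payoff; rationality practice is likelier but pays off less.
eu_therapy = expected_utility(p=0.1, u=100)      # 10.0
eu_rationality = expected_utility(p=0.5, u=30)   # 15.0

# Only the comparison at n matters, even if therapy would dominate
# given n + delta units the person doesn't have.
best = max([("therapy", eu_therapy), ("rationality", eu_rationality)],
           key=lambda pair: pair[1])[0]
print(best)  # → rationality
```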
2aelephant11y
I'm not much for suckling on the government's teat either. How much of a chance do you think you'd have of keeping a job if you put your mind to it? There could be other options aside from therapy. A lot of people that I respect have recommended Nathaniel Branden's books. I have heard some about Internal Family Systems (IFS) as well, which as far as I know can be done by yourself. I'm by no means an expert, but maybe these can act as leads for you to get started on your own (presuming you haven't already looked into them).
5ialdabaoth11y
Empirically, a very poor one. Or rather, more accurately: I either have a very poor chance of keeping a job if I put my mind to it, OR I have a very poor chance of putting my mind to it. I'm not sure how to tell which is actually the case, right now, but maybe I could tell if I actually put my mind to it (heh). Unfortunately, since "putting my mind to things" is a big part of what's actually broken, I'm not sure where to proceed - or even whether I should proceed. Often times, my strongest impulse leans towards slapping a big "DEFECTIVE" label on my forehead and tossing myself in the recycle bin.
2TimS11y
I urge you to strongly consider the possibility that your mind is telling you that you don't like this kind of work. At best, "defective" is a circular label, not an analytical result of your personality. That may not be the most useful information, economically speaking. But it may help you avoid generalizing your experiences at the current job on to future jobs. In short, you aren't lazy, you just haven't found situations that put you in a position to succeed (by ensuring sufficient appropriate motivation).
7ialdabaoth11y
I used to think that way. The frustrating thing is, I used to LOVE work of all kind. What I hated was people with arbitrary power over me deliberately sabotaging my work, mostly (it seemed) because they were angry that I enjoyed it so much. One of the most powerful lessons I ever learned was that people at my socioeconomic level don't GET to "enjoy" their work. Even by accident. I never really learned diplomacy and power politics, primarily due to being taught a form of "learned helplessness" about it when I was very young (I was not in a socioeconomic class where it was appropriate to display the amount of enthusiasm, talent and intelligence that I had, and I didn't know how to hide it). Unfortunately, this led to making a lot of really, really bad political mistakes, each of which slowly eroded my enthusiasm at doing... well, at this point, at doing anything. After a few years of being out of practice, I now find that I can't even bring myself to get out of bed in the morning and work on something interesting, because "what's the point?" To me, there is NO difference between "lazy" and "haven't found situations that put you in a position to succeed". They are IDENTICAL. If society doesn't put you in positions to succeed, it has decided that you are lazy, and that means you ARE lazy. Agency has nothing to do with culpability, only blame.
4TimS11y
Your rules seemed designed to sabotage you by making you feel miserable. The impulse to create scripts of how interactions are supposed to go is a good one, but the point of these scripts is to prepare you to succeed. You need a new social environment. If none of the people you hang out with is really your friend, stop spending time with them. Particularly if they aren't emotionally safe. We talked about boardgaming as one possible new environment. What about charitable volunteering? If you find the right charity, the organizations are desperate for your help. Regardless of what specific thing you do, find something to succeed at. Don't set the bar ridiculously high - if what you can do is show up, then find something where showing up is success. You are absolutely worth it. Your negative feelings are a habit that you can break. Where do you live? Maybe I can help? (Private message if you prefer.)
5ialdabaoth11y
This post is being made while repressing a massive array of scripted responses, so if it bounces around or seems incoherent, it's because only a VERY small portion of my brainpower is currently available for rational analysis.
  1. I tend to sabotage friendships, due to being inherently distrustful / untrustworthy (my cynical disposition has led me to believe that these are ultimately the same thing). Thus, your offer to help personally is admirable, but I have a very high threshold to pass before I can trust it as actually helpful. Does this make sense?
  2. I've performed actions of charitable volunteering, but over the past few years I've had very little energy for anything. I tend to have less than half an hour's worth of useful energy per day for anything that involves leaving my little hovel, and by the end of that half an hour I tend to start socially self-destructing.
  3. It's not as much a problem that friends aren't emotionally safe for me, as that I am not emotionally safe for me. Actual friends tend to actually empathize, which means that they quickly become freaked out and leave when they realize how helpless they are to do anything but watch me self-harm. This provides a filter that ensures that when I DO absolutely need emotional interaction with other human beings, the only ones who are left are the ones who don't care as much about the waves of misery I'm exuding.
2TimS11y
Makes sense. Whether you believe it or not, I'm not doing this for my benefit. I care about you, and so does everyone else who is offering you advice. Do you think these scripts make you happier? Are there changes to the scripts that you can imagine that would cause them to make you happier? More generally, is there any change you could make in your life that you think you would really make that would lead to any increased happiness? If there are reasons to not make that change, do you think the reasons are realistic in likelihood and in magnitude? My experience with anxiety is that the feelings never went away, I just got better at doing what I thought needed doing, even with the anxious feelings.
5ialdabaoth11y
No, but I have spent almost 30 years doing script-modification, and I be sore tired. Possibly, but the effort involved in doing more script-modification is no longer something I have the energy for.

Absolutely. That's how I describe most of what people call my "super-powers". I tend to be amazingly competent in crisis situations, simply because I don't panic, I immediately assess the best plan of action, I identify everyone who is panicking, and I immediately give them short commands that are clearly identifiable as helping the situation, so they feel like they can actually do something about whatever's terrifying them. People have asked me how I manage to be completely unafraid of life-or-death situations, and I've simply explained "of course I'm completely terrified. I just do it anyways." (and then I usually go throw up, because if the situation has calmed enough that people can ask me how I pulled it off, then the situation has calmed enough that I can go throw up).

The problem is, I've already tried to solve this problem by editing out "personal happiness" as a goal to seek. I spent about 5 years on this, and in the process have managed to edit out a good amount of personal identity, self-preservation, and so on. It turns out there are biological safeguards in place that keep me from going all the way with it, so what I've got is a collection of extraordinarily buggy and non-adaptive scripts, usually running in direct competition with each other and tying up all my system resources without actually accomplishing anything whatsoever. Of course, since they're using up all my system resources, I no longer have enough free processor or swap space to further modify my scripts. I'm kinda stuck without outside resources, and I'm no longer capable of generating those. But ultimately, neurological and biological systems are incredibly complex, and they all (so far as we know) break down eventually. I don't think this breakdown process is particularly extraordinary
0TimS11y
Do you think that removing personal happiness as one of your goals has helped you be more productive? What steps could you take to add some amount of personal happiness as one of your goals? Would that be worthwhile? Do you think it is likely that you would take those steps? If there are reasons to not make that change, do you think the reasons are realistic in likelihood and in magnitude? (I'm asking questions because I hope this will help more than other types of interactions. There's no reason that you should feel obligated to be emotionally vulnerable towards me. Without emotional vulnerability - from taking apart your personality - specific suggestions / instructions about what to change can easily be taken the wrong way. But if questions like this are coming off as passive-aggressive, I want to stop.)
0Strange711y
Have you tried being a volunteer firefighter?
5ialdabaoth11y
Actually, yes! Two years ago. I spent about 2 years beforehand getting into the best shape I had ever been in in my life - took Capoeira, spent an hour a day in the gym, ran 3 miles every morning - I set a goal that as soon as I broke 150 lbs (starting from 110), I'd go in and apply. Still didn't pass the physical.
0TimS11y
Also, this (warning, quite emotionally raw).
7ialdabaoth11y
Heh. Believe it or not, that's not as much of a problem. I've lived with constant suicidal ideation for almost 27 years now, since I was 12. I've become almost completely inured to it, and I've performed enough unsuccessful attempts that my mid-brain has learned very well not to bother. It's amusing to think that learned helplessness can be turned into a tool to combat suicidal ideation, but there it is. (I imagine this is why so many anti-depressants increase the risk of suicide - the learned helplessness is a tighter cycle, so it gets lifted faster, at which point the ideation hasn't faded yet and suddenly you imagine the possibility of something actually working, and it all finally being over for real.)

What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention for a few minutes to the issue at hand"?

It sounds so very simple, yet I routinely fail to do it: when, e.g., I try to solve some Project Euler problem and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down, and solve the bloody thing.

At some point I started feeling like my bf is more interested in telling me things than having a conversation with me. So I started trying to flag the instances where he did it and the instances where he didn't, and it kinda felt like it matched my feeling, since I had several more examples of one than the other. But I didn't document them carefully or anything, so how do I know I'm not falling into the confirmation bias trap? Or is this just the wrong way to handle something that started out as a ... feeling?

3TheOtherDave11y
In your position, I would do a few different things. One is what you describe: actually count instances and see if the pattern conforms to my expectations.

But also, I would try to articulate more clearly what the choices are. That is, what do I look for when I want to see if he is interested in having a conversation? Am I looking for him to listen to what I have to say? To ask questions about it? To not challenge it when he disagrees? To look directly at me and not do other things while I'm talking? To allow me to pause in the middle of what I'm saying without treating that as an opportunity to change the subject? Something else? All of the above?

Also, I would ask myself what would follow if it turned out that I was overcounting confirmations. That is, let's say I conclude that one thing that makes me feel like my boyfriend isn't interested in having a conversation with me is when he interrupts me. I might ask myself: suppose I start actually counting instances and I conclude that he only interrupts me one conversation out of ten, when I had estimated it was nine conversations out of ten. It is likely, then, that I'd succumbed to confirmation bias. But... what follows from that? One possibility is "Oh... well, 10% interruptions isn't that big a deal. I should get over it." Another possibility is "Clearly, 10% interruptions is enough to upset me. We should try for a lower rate."

Knowing how I would go about making that choice for a measured probability once I have it is, IME, an important part of actually improving the system. Otherwise I'm just making measurements.
0[anonymous]11y
I'm confused why she should measure it at all. This line of reasoning seems to preclude the need for measurement.
0jooyous11y
Yeah, I think this is the hardest part because in some cases, examining the actual facts does make me feel better. But in this case, if it does turn out to be 10% but the bad feeling doesn't go away, I'm going to feel like a jerk. Also, it's impossible to compare to the past at this point, which is when it felt like we had more real conversations, but I have no data from it because back then I didn't have any reason to track it.
0TheOtherDave11y
Why?
3Decius11y
To break confirmation bias, you need an objective log. Write down every time you recognize a confirming event, as well as every time you recognize an event which is nonconfirming. Then, estimate the likelihood that you would recognize and write down a confirming event, and the likelihood that you would recognize and write down a nonconfirming event. Use your surprise that a nonconfirming event just occurred, as well as your surprise that you noticed it and made a note of it, to form that estimate. If you find yourself more surprised that you made a note of a nonconfirming event than that it happened, it probably happens much more often than you note it.
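The correction described here can be sketched numerically. The counts and noticing probabilities below are hypothetical, supplied only to show the mechanics:

```python
# Correct a tally for unequal probabilities of noticing each kind
# of event. All numbers are illustrative, not from the thread.
logged_confirming = 18      # confirming events actually written down
logged_nonconfirming = 2    # nonconfirming events actually written down

# Self-estimates of how often each kind gets noticed and logged:
p_log_confirming = 0.9      # primed to spot these
p_log_nonconfirming = 0.3   # these tend to slip by

# Dividing by the logging probability estimates the true counts:
true_confirming = logged_confirming / p_log_confirming           # 20.0
true_nonconfirming = logged_nonconfirming / p_log_nonconfirming  # ~6.7

print(f"raw ratio {logged_confirming / logged_nonconfirming:.1f}:1, "
      f"corrected ratio {true_confirming / true_nonconfirming:.1f}:1")
# → raw ratio 9.0:1, corrected ratio 3.0:1
```

The direction of the correction is the point: a log that under-records nonconfirming events makes the evidence look several times stronger than it is.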
2Manfred11y
This seems tricky. What is (I would guess) important about your situation is that you want to have more conversations with him. So hey, if you want to have more conversations, do things that will result in that happening. If your number of conversations changes noticeably and that feeling doesn't go away, or you get the same feeling about something else instead, then yeah, maybe the root cause is something else. (It's like when I'm procrastinating and I feel like I really want to visit website X, and then I feel I really want to read book Y, but the feeling is really just "procrastination-feeling" from not wanting to start chore Z.)

Has the checklist been revisited or optimized in any way since its original formulation? (By CFAR or otherwise?)

Why are these rationality habits? Based on what? All the examples are personal. Isn't it possible to also give a scientific example for each habit: study ..... shows that ....., hence (1) the habit is useful for dealing with this bias, and (2) it doesn't create or reinforce other biases?

Looks like a very useful list. One comment: I found the example in 2(a) a bit complicated and very difficult to parse.

Something to add: allocating attention in the correct order:

  1. emotions
  2. felt meaning
  3. verbal thoughts

Otherwise you have the failure mode of avoiding painful emotions (even if they're being triggered erroneously) and then all sorts of bad things happen. So check in with (1) before (2) and (3). And check in with (2) before applying (3), because otherwise you're using cached thoughts.

The PDF version is very nice looking and very readable, thanks for making it. I think people on here often underestimate the benefits of low hanging aesthetic fruit.

I just joined the community, how can I save or mark this article so it is available for me to read at anytime?

0root8y
Bookmarks in your browser. There's also the diskette icon between the two horizontal bars that separate the article and the comment section.
8gjm8y
I think the "liked" tab on your user page displays precisely those articles that you've upvoted. So upvoting an article will make it available there in the future.
0Good_Burning_Plastic8y
And downvoting an article will add it to the "disliked" tab. But please don't vote articles solely for this purpose.

I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.

Great list. My guide post for rationality and related issues has been the works of Carl Sagan, as he had many books and good advice for thinking critically. His works are an absolute must read (or watch) for anybody wanting to wade through the mass of misdirection that exists in the world.

This all sounds quite groovy, but are there any suggestions on how I could go about implementing them into my daily pattern of thought? I wonder if perhaps an Anki deck would have any merit whatsoever in accomplishing this...

Another one: You see a way to do things that in theory might work better than what everyone else is doing, but in practice no one seems to use. Do you investigate it and consider exploiting it?

Example: You're trying to get karma on reddit. You notice that http://www.reddit.com/r/randomization/ has almost a million subscribers but no new submissions in the past two months. Do you think "hm, that's weird" and keep looking for a subreddit to submit your link in, or do you think "oh wow, karma feast!"

2Transfuturist10y
Third option: turn the subreddit's style off (if you have RES), or subscribe yourself and see what happens to the number to discover what they've been doing.
0Larks9y
Apparently that subreddit just lies about how many subscribers it has.

For each item, you might ask yourself: did you last use this habit...

Maybe it's worth a poll, if someone feels like creating one. I'm not sure how to make a multi-level poll and it probably would be too presumptuous of me to create 24 replies with one poll in each.

1AnnaSalamon11y
It's easy to make a checklist by going to Google docs / Google drive, clicking "create", and choosing "form".

The Checklist Manifesto is very interesting about what goes into an excellent checklist rather than a casually constructed checklist. It's about institutional checklists rather than personal checklists, though.

0Hawisher11y
You can't do multi-response polls? As in, check all that apply?
0shminux11y
There are 24 separate subquestions with 6 answer options each.