Less Wrong is a community blog devoted to refining the art of human rationality.

Instrumental Rationality Sequence Finished! (w/ caveats)

5 lifelonglearner 09 September 2017 01:49AM

Hey everyone,

Back in April, I said I was going to start writing an instrumental rationality sequence.

It's...sort of done.

I ended up collecting the essays into a sort of e-book. It's mainly content that I've put here (Starting Advice, Planning 101, Habits 101, etc.), but there's also quite a bit of new content.

It clocks in at about 150 pages and 30,000 words, about 15,000 of which I wrote after the April announcement post. (Which beats my estimate of 10,000 words before burnout!!!)

However, the LW 1.0 editor isn't making it easy to port the content here from my Google Drive.

As LW 2.0 enters actual open beta, I'll repost / edit the essays and host them there. 

In the meantime, if you want to read the whole compiled book, the direct Google Doc link is here. That's where the real-time updates will happen, so it's what I'd recommend using to read it for now.

(There's also an online version on my blog if for some reason you want to read it there.)

It's my hope that this sequence becomes a useful reference for newcomers looking to learn more about instrumental rationality, which is more specialized than The Sequences (which really are more for epistemics).

Unfortunately, I didn't manage to write the book/sequence I set out to write. The actual book as it is now is about 10% as good as what I actually wanted. There's stuff I didn't get to write, more nuances I'd have liked to cover, more pictures I wanted to make, etc.

After putting in many hours of research and writing, I think I've learned more about the sort of effort that would need to go into making the actual project I'd outlined at the start.

There'll be a postmortem essay analyzing my expectations vs reality coming soon.

As a result of this project and a few other things, I'm feeling burned out. There probably won't be any major projects from me for a little bit, while I rest up.

[Link] Habits 101: Techniques and Research

5 lifelonglearner 22 August 2017 10:54AM

[Link] Bridging the Intention-Action Gap (aka Akrasia)

1 lifelonglearner 01 August 2017 10:31PM

Rationality as a Value Decider

1 DragonGod 05 June 2017 03:21AM

Rationality as a Value Decider

A Different Concept of Instrumental Rationality

Eliezer Yudkowsky defines instrumental rationality as “systematically achieving your values” and goes on to say: “Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this ‘winning.’” [1]
I agree with Yudkowsky’s concept of rationality as a method for systematised winning. It is why I decided to pursue rationality—that I may win. However, I personally disagree with the notion of “systematically achieving your values”, simply because it is too vague. What are my values? Happiness and personal satisfaction? You can maximise those by joining a religious organisation; in fact, I think I was happiest in a time before I discovered the Way. But that is beside the point: maximising your values isn’t specific enough for my taste.
“Likewise, decision theory defines what action I should take based on my beliefs. For any consistent set of beliefs and preferences I could have about Bob, there is a decision-theoretic answer to how I should then act in order to satisfy my preferences.” [2]
This implies that instrumental rationality is specific; from the above statement, I infer:
“For any decision problem to any rational agent with a specified psyche, there is only one correct choice to make.”
However, if we only seek to systematically achieve our values, I believe that instrumental rationality fails to be specific—there may be more than one solution to a problem in which we merely seek to maximise our values. I cherish the specificity of rationality; there is a certain comfort in knowing that there is a single correct solution to any problem, a right decision to make for any game—one merely need find it. As such, I sought a definition of rationality that I personally agree with; one that satisfies my criteria for specificity; one that satisfies my criteria for winning. The answer I arrived at was: “Rationality is systematically achieving your goals.”
I love the above definition; it is specific—gone is the vagueness and uncertainty of achieving values. It is simple—gone is the worry over whether value X should be an instrumental value or a terminal value. Above all, it is useful—I know whether or not I have achieved my goals, and I can motivate myself more to achieve them. Rather than thinking about vague values I think about my life in terms of goals:
“I have goal X how do I achieve it?”
If necessary, I can specify subgoals, and subgoals for those subgoals. I find thinking about your life in terms of goals to be achieved a more conducive model for problem solving—a more efficient model, a useful model. I am many things, and above them all I am a utilitist—the worth of any entity is determined by its utility to me. I find the model of rationality as a goal enabler the more useful model.
Goals and values are not always aligned. For example, consider the problem below:

Jane is the captain of a boat carrying 100 people. The ship is about to capsize and will, unless ten people are sacrificed. Jane’s goal is to save as many people as possible. Jane’s values hold human life sacred. Sacrificing ten people has a 100% chance of saving the remaining 90, while sacrificing no one and going with plan delta has a 10% chance of saving all 100, and a 90% chance that everyone dies.


The sanctity of human life is a terminal value for Jane. Jane, when seeking to actualise her values, may well choose to go with plan delta, which has a 90% chance of preventing her from achieving her goal.
Values may be misaligned with goals; values may even inhibit achieving them. Winning isn’t achieving your values; winning is achieving your goals.
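Jane’s dilemma reduces to an expected-value calculation over lives saved. A minimal sketch (the function name and the outcome encoding are my own illustration; the numbers come from the problem statement above):

```python
# Expected number of survivors for each option in Jane's dilemma.
# Outcomes are (probability, survivors) pairs.
def expected_survivors(outcomes):
    return sum(p * n for p, n in outcomes)

sacrifice = expected_survivors([(1.0, 90)])              # 90.0
plan_delta = expected_survivors([(0.1, 100), (0.9, 0)])  # about 10

# Goal-wise, sacrificing ten dominates; only Jane's values
# pull the other way.
assert sacrifice > plan_delta
```

On pure goal terms the choice is nine-to-one in favour of the sacrifice—which is exactly the gap that, on this account, Jane’s values can push her across.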


I feel it is apt to define goals at this juncture, lest the definition be perverted and only goals aligned with values be considered “true/good goals”.
Goals are any objectives a self-aware agent consciously assigns itself to accomplish.
There are no true goals, no false goals, no good goals, no bad goals, no worthy goals, no worthless goals; there are just goals.
I do not consider goals something that “exist to affirm/achieve values”—you may assign yourself goals that affirm your values, or goals that run contrary to them; the difference is irrelevant—we work to achieve the goals you have specified.

The Psyche

The psyche is an objective map that describes a self-aware agent functioning as a decision maker—rational or not. The sum total of an individual’s beliefs (all knowledge counts as belief), values, and goals forms their psyche. The psyche is unique to each individual. It is not a subjective evaluation of an individual by themselves, but an objective evaluation of the individual as they would appear to an omniscient observer. An individual’s psyche includes the totality of their map; the psyche is—among other things—a map that describes a map, so to speak.
When a decision problem is considered, the optimum solution cannot be determined without considering the psyche of the individual: the values they hold, the goals they seek to achieve, and their mental map of the world.
Eliezer Yudkowsky seems to believe that we have an extremely limited ability to alter our psyche. He posits that we can’t choose to believe the sky is green at will. I never really bought this, especially given personal anecdotal evidence; I’ll come back to altering beliefs later.
Yudkowsky describes the human psyche as “a lens that sees its own flaws”. [3] I personally would extend this: we are not merely “a lens that sees its own flaws”, we are also “a lens that corrects itself”—the self-aware AI that can alter its own code. The psyche can be altered at will—or so I argue.
I shall start with values. Values are neither permanent nor immutable. I’ve had a slew of values over the years; while Christian, I valued faith; now I adhere to Thomas Huxley’s maxim:

Scepticism is the highest of duties; blind faith the one unpardonable sin.


Another example: prior to my enlightenment I held emotional reasoning in high esteem and could be persuaded by emotional arguments; after my enlightenment I upheld rational reasoning. Okay, that isn’t entirely true—my answer to the boat problem had always been to sacrifice the ten people—but I was more emotional then, and could be swayed by emotional arguments. Before I discovered the Way earlier this year (when I was fumbling around in the dark searching for rationality), I viewed all emotion as irrational, and my values held logic and reason above all. Back then, I was a true apath, and completely unfeeling. I later read arguments for the utility of emotions, and readjusted my values accordingly. I have readjusted my values several times along the journey of life: just recently, I repressed my values relating to pleasure from feeding, to aid my current routine of intermittent fasting. I similarly repressed my values of sexual arousal/pleasure—I felt it would make me more competent. Values can be altered, and I suspect many of us have done it at least once in our lives—we are the lens that corrects itself.

Getting back to belief—whether you can choose to believe the sky is green at will—I argue that you can; it is just a little more complicated than altering your values. Changing your beliefs—changing your actual anticipation controllers, truly redrawing the map—would require certain alterations to your psyche in order for it to retain a semblance of consistency. In order to be able to believe the sky is green, you would have to:

  • Repress your values that make you desire true beliefs.
  • Repress your values that make you give priority to empirical evidence.
  • Repress your values that make you sceptical.
  • Create (or grow, if you already have one) a new value that supports blind faith.
  • Repress your values that support curiosity.
  • Create (or grow, if you already have one) a new value that supports ignorance.

By the time you’ve done the ‘edits’ listed above, you would be able to freely believe that the sky is green, that snow is black, that the earth rests on the back of a giant turtle, or that a teapot floats in the asteroid belt. I warn you, though: by the time you’ve successfully accomplished these edits, your psyche will be completely different from now, and you will be—I argue—a different person. If any of you were worried that the happiness of stupidity was forever closed to you, fear not; it is open to you again—if you truly desire it. Be forewarned: the “you” that would embrace it would be different from the “you” now, and not one I’m sure I’d want to associate with. The psyche is alterable; we are the masters of our own minds—the lens that corrects itself.
I do not posit that we can alter all of our psyche; I suspect there are aspects of our cognitive machinery that are unalterable—“hardcoded”, so to speak. However, my neuroscience is non-existent, so I shall leave this issue to those better equipped to comment on it.

Values as Tools

In my conception of instrumental rationality, values are no longer put on a pedestal; they are no longer sacred. There are no terminal values anymore—only instrumental ones. Values aren’t the masters anymore; they’re slaves—they’re tools.
The notion of values as tools may seem disturbing to some, but I find it quite a useful model, and as such I shall keep it.
Take the ship problem Jane was presented with above: had Jane deleted the value which held human life sacred, she would have been able to make the decision with the highest probability of achieving her goal. She could even add a value that suppressed empathy, to assist her in similar situations—though some might feel that is overkill. I once asked a question on a particular subreddit:
“Is altruism rational?”
My reply was a quick and dismissive:
“Rationality doesn’t tell you what values to have, it only tells you how to achieve them.”
The answer was the standard textbook reply that anyone who had read the sequences or RAZ (Rationality: From AI to Zombies) would produce; I had read neither at the time. Nonetheless, I was reading HPMOR (Harry Potter and the Methods of Rationality), and that did sound like something Harry would say. After downloading my own copy of RAZ, I found that the answer was indeed correct—as long as I accepted Yudkowsky’s conception of instrumental rationality. Now that I reject it, and consider rationality as a tool to enable goals, I have a more apt response:

What are your goals?


If your goals are to have a net positive effect on the world (do good so to speak) then altruism may be a rational value to have. If your goals are far more selfish, then altruism may only serve as a hindrance.

The utility of “Values as Tools” isn’t just that some values may harm your goals; nay, it does much more. The payoff of a decision is determined by two things:

  1. How much closer it brings you to the realisation of your goals.
  2. How much it aligns with your values.

Choosing values that are doubly correlated with your current goals (you actualise your values when you make goal-conducive decisions, and you run against your values when you make goal-deleterious decisions) exaggerates the positive payoff of goal-conducive decisions and the negative payoff of goal-deleterious ones. This aggrandising of payoffs serves as a strong motivator towards making goal-conducive decisions—large rewards, large punishments—a perfect propulsion system, so to speak.

The utility of the “Values as Tools” approach is that it serves as a strong motivator towards goal-conducive decision making.
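The doubly-correlated payoff structure can be sketched numerically. The additive form and the ±1 scores are assumptions made for illustration; the text doesn’t commit to any particular functional form:

```python
# Toy payoff model: the payoff of a decision is the sum of a goal
# term and a value term (the additive form is an assumption).
# Each term is positive for a goal-conducive / value-affirming
# decision and negative otherwise.
def payoff(goal_term, value_term):
    return goal_term + value_term

# Values chosen to correlate with goals: a goal-conducive decision
# also affirms the values; a goal-deleterious one also violates them.
aligned_good = payoff(+1, +1)   # large reward
aligned_bad = payoff(-1, -1)    # large punishment

# Unrelated values contribute roughly nothing either way.
neutral_good = payoff(+1, 0)
neutral_bad = payoff(-1, 0)

# The aligned setup exaggerates payoffs in both directions.
assert aligned_good > neutral_good and aligned_bad < neutral_bad
```

The widened spread between reward and punishment is the “propulsion system” described above.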


It has been brought to my attention that a life such as the one I describe may be “an unsatisfying life” and “a life not worth living”. I might reply that I do not seek to maximise happiness, but that would be dodging the issue; I first conceived of rationality as a value decider when thinking about how I would design an AI—and it goes without saying that humans are not computers.

I offer a suggestion: order your current values on a scale of preference. Note the value (or set thereof) highest on that scale—the value that, if you could achieve only one, you would choose. Pick a goal aligned with that value (or set thereof); that goal shall be called your “prime goal”. The moment you pick your prime goal, you fix it. From now on, you no longer change your goals to align with your values; you change your values to align with your goals. Your aim in life is to achieve your prime goal, and you pick values and subgoals that would help you achieve it.
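The prime-goal procedure can be written out as a small sketch. Everything concrete here (the value weights, goal names, and alignment scores) is hypothetical; only the selection logic—the top value picks the goal, which is then fixed—follows the text:

```python
# Sketch of the "prime goal" selection procedure.
def pick_prime_goal(values, candidate_goals, alignment):
    """values: {value_name: preference weight};
    alignment(goal, value) -> how well the goal serves the value."""
    # Step 1: find the value highest on the scale of preference.
    top_value = max(values, key=values.get)
    # Step 2: fix the goal best aligned with that value. From here
    # on, values are adjusted to serve this goal, not the reverse.
    return max(candidate_goals, key=lambda g: alignment(g, top_value))

# Hypothetical example data.
values = {"knowledge": 3, "comfort": 1}
goals = ["study daily", "lounge about"]
scores = {("study daily", "knowledge"): 0.9,
          ("study daily", "comfort"): 0.1,
          ("lounge about", "knowledge"): 0.1,
          ("lounge about", "comfort"): 0.9}

prime = pick_prime_goal(values, goals, lambda g, v: scores[(g, v)])
# prime == "study daily"
```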


[1] Eliezer Yudkowsky, “Rationality: From AI to Zombies”, pg 7, 2015, MIRI, California.
[2] Eliezer Yudkowsky, “Rationality: From AI to Zombies”, pg 203, 2015, MIRI, California.
[3] Eliezer Yudkowsky, “Rationality: From AI to Zombies”, pg 40, 2015, MIRI, California.

Instrumental Rationality Sequence Update (Drive Link to Drafts)

2 lifelonglearner 19 May 2017 04:01AM

Hey all,

Following my post on my planned Instrumental Rationality sequence, I thought it'd be good to give the LW community an update of where I am.

1) Currently collecting papers on habits. Planning to go through a massive sprint of the papers tomorrow. The papers I'm using are available in the Drive folder linked below.

2) I have a publicly viewable Drive folder here of all relevant articles and drafts and things related to this project, if you're curious to see what I've been writing. Feel free to peek around everywhere, but the most relevant docs are this one which is an outline of where I want to go for the sequence and this one which is the compilation of currently sorta-decent posts in a book-like format (although it's quite short right now at only 16 pages).

Anyway, yep, that's where things are at right now.


There is No Akrasia

17 lifelonglearner 30 April 2017 03:33PM

I don’t think akrasia exists.

This is a fairly strong claim. I’m also not going to try and argue it.


What I’m really here to argue are the two weaker claims that:

a) Akrasia is often treated as a “thing” by people in the rationality community, and this can lead to problems, even though akrasia is a sorta-coherent concept.

b) If we want to move forward and solve the problems that fall under the akrasia-umbrella, it’s better to taboo the term “akrasia” altogether and instead employ a more reductionist approach that favors specificity.

But that’s a lot less catchy, and I think we can 80/20 it with the statement that “akrasia doesn’t exist”, hence the title and the opening sentence.

First off, I do think that akrasia is a term that resonates with a lot of people. When I’ve described this concept to friends (n = 3), they’ve all had, to varying degrees, reactions along the lines of “Aha! This term perfectly encapsulates something I feel!” On LW, it seems to have garnered acceptance as a concept, evidenced by the posts / wiki on it.

It does seem, then, that this concept of “want-want vs want”, or “being unable to do what you ‘want’ to do”, points at a phenomenologically real group of things in the world.

However, I think that this is actually bad.

Once people learn the term akrasia and what it represents, they can now pattern-match it to their own associated experiences. I think that, once you’ve reified akrasia, i.e. turned it into a “thing” inside your ontology, problems occur:

First off, treating akrasia as a real thing gives it additional weight and power over you:

Once you start to notice the patterns, it’s harder to see things again as mere apparent chaos. In the case of akrasia, I think this means that people may try less hard because they suddenly realize they’re in the grip of this terrible monster called akrasia.

I think this sort of worldview ends up reinforcing some unhelpful attitudes towards solving the problems akrasia represents. As an example, here are two paraphrased things I’ve overheard about akrasia which I think illustrate this. (Happy to remove these if you would prefer not to be mentioned.)

“Akrasia has mutant healing powers…Thus you can’t fight it, you can only keep switching tactics for a time until they stop working…”

“I have massive akrasia…so if you could just give me some more high-powered tools to defeat it, that’d be great…”  


Both of these quotes seem to have taken the akrasia hypothesis a little too far. As I’ll argue later, “akrasia” seems to be dealt with better when you see the problem as a collection of disparate failures of different parts of your ability to get things done, rather than as one thing under an umbrella term.

I think that the current akrasia framing actually makes the problem more intractable.

I see potential failure modes where people come into the community, hear about akrasia (and all the related scary stories of how hard it is to defeat), and end up using it as an excuse (perhaps not an explicit belief, but as an alief) that impacts their ability to do work.

This was certainly the case for me: improved introspection and metacognition on certain patterns in my mental behaviors actually removed a lot of the willpower that had served me well in the past. I may be getting slightly tangential here, but my point is that giving people models, useful as they might be for things like classification, may not always be net-positive.

Having new things in your ontology can harm you.

So just giving people some of these patterns and saying, “Hey, all these pieces represent a Thing called akrasia that’s hard to defeat,” doesn’t seem like the best idea.

How can we make the akrasia problem more tractable, then?

I claimed earlier that akrasia does seem to be a real thing, as it seems relatable to many people. I think this may actually be because akrasia maps onto too many things: it’s an umbrella term for lots of problems in motivation and efficacy that could be quite disparate. The typical akrasia framing lumps together problems like temporal discounting with motivation problems like internal disagreements or ugh fields, and more.


Those are all very different problems with very different-looking solutions!

The above quotes about akrasia are examples of mixing up the class with its members. Instead of treating akrasia as an abstraction that unifies a class of self-imposed problems—ones that share the property of acting as obstacles to our goals—we treat it as a problem unto itself.

Saying you want to “solve akrasia” makes about as much sense as directly asking for ways to “solve cognitive bias”. Clearly, cognitive biases are merely a class covering a wide range of errors our brains make in our thinking. The exercises you’d go through to solve overconfidence look very different from the ones you might use to solve scope neglect, for example.

Under this framing, I think we can be less surprised when there is no direct solution to fighting akrasia—because there isn’t one.

I think the solution here is to be specific about the problem you are currently facing. It’s easy to just say you “have akrasia” and feel the smooth comfort of a catch-all term that doesn’t provide much in the way of insight. It’s another thing to go deep into your ugly problem and actually, honestly say what the problem is.

The important thing here is to identify which subset of the huge akrasia-umbrella your individual problem falls under and try to solve that specific thing instead of throwing generalized “anti-akrasia” weapons at it.

Is your problem one of remembering to do tasks? Then set up a Getting Things Done system.

Is your problem one of hyperbolic discounting, of favoring short-term gains? Then figure out a way to recalibrate the way you weigh outcomes. Maybe look into precommitting to certain courses of action.

Is your problem one of insufficient motivation to pursue things in the first place? Then look into why you care in the first place. If it turns out you really don’t care, then don’t worry about it. Else, find ways to source more motivation.

The basic (and obvious) technique I propose, then, looks like:

  1. Identify the akratic thing.

  2. Figure out what’s happening when this thing happens. Break it down into moving parts and how you’re reacting to the situation.

  3. Think of ways to solve those individual parts.

  4. Try solving them. See what happens.

  5. Iterate.

Potential questions to be asking yourself throughout this process:

  • What is causing your problem? (EX: Do you have the desire but just aren’t remembering? Are you lacking motivation?)

  • How does this akratic problem feel? (EX: What parts of yourself is your current approach doing a good job of satisfying? Which parts are not being satisfied?)

  • Is this really a problem? (EX: Do you actually want to do better? How realistic would it be to see the improvements you’re expecting? How much better do you think you could be doing?)

Here’s an example of a reductionist approach I did:

“I suffer from akrasia.

More specifically, though, I suffer from a problem where I end up not actually having planned things out in advance. This leads me to do things like browse the internet without having a concrete plan of what I’d like to do next. In some ways, this feels good because I actually like having the novelty of a little unpredictability in life.

However, at the end of the day when I’m looking back at what I’ve done, I have a lot of regret over having not taken key opportunities to actually act on my goals. So it looks like I do care (or meta-care) about the things I do everyday, but, in the moment, it can be hard to remember.”

Now that I’ve far more clearly laid out the problem above, it seems easier to see that the problem I need to deal with is a combination of:

  • Reminding myself of the stuff I would like to do (maybe via a schedule or to-do list).

  • Finding a way to shift my in-the-moment preferences a little more towards the things I’ve laid out (perhaps with a break that allows for some meditation).

I think that once you apply a reductionist viewpoint and specifically say exactly what it is that is causing your problems, the problem is already half-solved. (Having well-specified problems seems to be half the battle.)


Remember, there is no akrasia! There are only problems that have yet to be unpacked and solved!

Introducing the Instrumental Rationality Sequence

29 lifelonglearner 26 April 2017 09:53PM

What is this project?

I am going to be writing a new sequence of articles on instrumental rationality. The end goal is to have a compiled ebook of all the essays, so the articles themselves are intended to be chapters in the finalized book. There will also be pictures.

I intend for the majority of the articles to be backed by somewhat rigorous research, similar in quality to Planning 101 (with perhaps a few less citations). Broadly speaking, the plan is to introduce a topic, summarize the research on it, give some models and mechanisms, and finish off with some techniques to leverage the models.

The rest of the sequence will be interspersed with general essays on dealing with these concepts, similar to In Defense of the Obvious. Lastly, there will be a few experimental essays on my attempt to synthesize existing models into useful-but-likely-wrong models of my own, like Attractor Theory.

I will likely also recycle / cannibalize some of my older writings for this new project, but I obviously won’t post the repeated material here again as new stuff.



What topics will I cover?

Here is a broad overview of the three main topics I hope to go over:

(Ordering is not set.)

Overconfidence in Planning: I’ll be stealing stuff from Planning 101 and rewriting a bit for clarity, so not much will change. I’ll likely add more on the actual models of how overconfidence creeps into our plans.

Motivation: I’ll try to go over procrastination, akrasia, and behavioral economics (hyperbolic discounting, decision instability, precommitment, etc.)

Habituation: This will try to cover what habits are, conditioning, incentives, and ways to take the above areas and habituate them, i.e. actually putting instrumental rationality techniques into practice.

Other areas I may want to cover:

Assorted Object-Level Things: The Boring Advice Repository has a whole bunch of assorted ways to improve life that I think might be useful to reiterate in some fashion.

Aversions and Ugh Fields: I don’t know too much about these things from a domain knowledge perspective, but it’s my impression that being able to debug these sorts of internal sticky situations is a very powerful skill. If I were to write this section, I’d try to focus on Focusing and some assorted S1/S2 communication things. And maybe also epistemics.

Ultimately, the point here isn’t to offer polished rationality techniques people can immediately apply, but rather to give people an overview of the relevant fields with enough techniques that they get the hang of what it means to start making their own rationality.



Why am I doing this?

Niche Role: On LessWrong, there currently doesn’t appear to be a good in-depth series on instrumental rationality. Rationality: From AI to Zombies seems very strong for giving people a worldview that enables things like deeper analysis, but it leans very much into the epistemic side of things.

It’s my opinion that, aside from perhaps Nate Soares’s series on Replacing Guilt (which I would be somewhat hesitant to recommend to everyone), there is no in-depth repository/sequence that ties together these ideas of motivation, planning, procrastination, etc.

Granted, there have been many excellent posts here on several areas, but they've been fairly directed. Luke's stuff on beating procrastination, for example, is fantastic. I'm aiming for a broader overview that hits the current models and research on different things.

I think this means that creating this sequence could add a lot of value, especially to people trying to create their own techniques.

Open-Sourcing Rationality: It’s clear that work is being done on furthering rationality by groups like Leverage and CFAR. However, for various reasons, the work they do is not always available to the public. I’d like to give people who are interested but unable to directly work with these organizations something they can use to jump-start their own investigations.

I’d like this to become a similar Schelling Point that we could direct people to if they want to get started.

I don’t mean to imply that what I’ll produce is of the same caliber, but I do think it makes sense to have some sort of pipeline to get rationalists up to speed with the areas that (in my mind) tie into figuring out instrumental rationality. When I first began looking into this field, the information was scattered in many places.

I’d like to create something cohesive that people can point to when newcomers want to get started with instrumental rationality that similarly gives them a high level overview of the many tools at their disposal.

Revitalizing LessWrong: It’s my impression that independent essays on instrumental rationality have slowed over the years. (But also, as I mentioned above, this doesn’t mean stuff hasn’t happened. CFAR’s been hard at work iterating their own techniques, for example.) As LW 2.0 is being talked about, this seems like an opportune time to provide some new content and help with our reorientation towards LW becoming once again a discussion hub for rationality.



Where does LW fit in?

Crowd-sourcing Content: I fully expect that many other people will have fantastic ideas that they want to contribute. I think that’s a good idea. Given some basic things like formatting / roughly consistent writing style throughout, I think it’d be great if other potential writers see this post as an invitation to start thinking about things they’d like to write / research about instrumental rationality.

Feedback: I’ll be doing all this writing on a public Google Doc with posts that feature chapters once they’re done, so hopefully there’s ample room to improve and take in constructive criticism. Feedback on LW is often high-quality, and I expect that to definitely improve what I will be writing.

Other Help: I probably can’t comb through every single research paper out there, so if you see relevant information I didn’t, or want to help with the research process, let me know! Likewise, if you think there are other cool ways you can contribute, feel free to either send me a PM or leave a comment below.



Why am I the best person to do this?

I’m probably not the best person to be doing this project, obviously.

But, as a student, I have a lot of time on my hands, and time appears to be a major limiting reactant in this whole process.

Additionally, I’ve been somewhat involved with CFAR, so I have some mental models about their flavor of instrumental rationality; I hope this means I'll be writing about stuff that isn't just a direct rehash of their workshop content.

Lastly, I’m very excited about this project, so you can expect me to put in about 10,000 words (~40 pages) before I take some minor breaks to reset. My short-term goals (for the next month) will be on note-taking and finding research for habits, specifically, and outlining more of the sequence.


Act into Uncertainty

6 lifelonglearner 24 March 2017 09:28PM

It’s only been recently that I’ve been thinking about epistemics in the context of figuring out my behavior and debiasing. Aside from trying to figure out how I actually behave (as opposed to what I merely profess I believe), I’ve been thinking about how to confront uncertainty—and what it feels like.


For many areas of life, I think we shy away from confronting uncertainty and instead flee into the comforting non-falsifiability of vagueness.

Consider these examples:

1) You want to get things done today. You know that writing things down can help you finish more things. However, it feels aversive to write down what you specifically want to do. So instead of writing things down, you just keep a hazy notion of “I will do things today”.

2) You try to make a confidence interval for a prediction where money is on the line. You notice yourself feeling uncomfortable no matter what your bounds are; setting down any number at all feels bad and comes with a dreadful sense of finality.

3) You’re trying to find solutions to a complex, entangled problem. Coming up with specific solutions feels bad because none of them seem to completely solve the problem. So instead you decide to create a meta-framework that produces solutions, or argue in favor of some abstract process like a “democratized system that focuses on holistic workarounds”.

In each of the above examples, it feels like we move away from making specific claims because that opens us up to specific criticism. But instead of trying to improve the strengths of specific claims, we retreat to fuzzily-defined notions that allow us to incorporate any criticism without having to really update.

I think there’s a sense in which, in some areas of life, we embrace shoddy epistemology (e.g. not wanting to validate or falsify our beliefs) because we’re afraid of failing and of the effort it would take to update. I think this fear is what fuels the feeling of aversion.

It seems useful to face this feeling of badness or aversion with the understanding that this is what confronting uncertainty feels like. The best action doesn’t always feel comfortable and easy; it can just as easily feel aversive and final.

Look for situations where you might be flinching away from making specific claims and replacing them with vacuous claims that support all evidence you might see.

If you never put your beliefs to the test with specific claims, then you can never verify them in the real world. And if your beliefs don’t map well onto the real world, they don’t seem very useful to even have in the first place.

What are you surprised people pay for instead of doing themselves?

3 AspiringRationalist 13 February 2017 01:07AM

Two of the main resources people have are time and money.  The world offers many opportunities to trade one for the other, at widely varying rates.

Where do you see people trading money for time at unfavorable rates - spending too much money to save too little time?  What things should people just DIY?

See also the flip-side of this post, "what are you surprised people don't just buy?"

What are you surprised people don't just buy?

5 AspiringRationalist 13 February 2017 01:07AM

Two of the main resources people have are time and money.  The world offers many opportunities to trade one for the other, at widely varying rates.

I've often heard people recommend trading money for time in the abstract, but this advice is rarely accompanied by specific recommendations on how to do so.

How do you use money to buy time or otherwise make your life better/easier?

See also the flip-side of this post, "what are you surprised people pay for instead of doing themselves?"

Planning 101: Debiasing and Research

13 lifelonglearner 03 February 2017 03:01PM

Planning 101: Techniques and Research

<Cross-posed from my blog>

[Epistemic status: Relatively strong. There are numerous studies showing that predictions often become miscalibrated. Overconfidence in itself appears fairly robust, appearing in different situations. The actual mechanism behind the planning fallacy is less certain, though there is evidence for the inside/outside view model. The debiasing techniques are supported, but more data on their effectiveness could be good.]

Humans are often quite overconfident, and perhaps for good reason. Back on the savanna and even some places today, bluffing can be an effective strategy for winning at life. Overconfidence can scare down enemies and avoid direct conflict.

When it comes to making plans, however, overconfidence can really screw us over. You can convince everyone (including yourself) that you’ll finish that report in three days, but it might still really take you a week. Overconfidence can’t intimidate advancing deadlines.

I’m talking, of course, about the planning fallacy, our tendency to make unrealistic predictions and plans that just don’t work out.

Being a true pessimist ain’t easy.

Students are a prime example of victims to the planning fallacy:

First, students were asked to predict when they were 99% sure they’d finish a project. When the researchers followed up with them later, though, only about 45% of the students—less than half—had actually finished by their own predicted times [Buehler, Griffin, Ross, 1995].

Even more striking, students working on their psychology honors theses were asked to predict when they’d finish, “assuming everything went as poorly as it possibly could.” Yet only about 30% of students finished by their own worst-case estimate [Buehler, Griffin, Ross, 1995].

Similar overconfidence was also found in Japanese and Canadian cultures, giving evidence that this is a human (and not US-culture-based) phenomenon. Students continued to make optimistic predictions, even when they knew the task had taken them longer last time [Buehler and Griffin, 2003, Buehler et al., 2003].

As a student myself, though, I don’t mean to just pick on us.

The planning fallacy affects projects across all sectors.

An overview of public transportation projects found that most of them were, on average, 20–45% above the estimated cost. In fact, research has shown that these poor predictions haven’t improved at all in the past 30 years [Flyvbjerg 2006].

And there’s no shortage of anecdotes, from the Scottish Parliament Building, which cost 10 times more than expected, to the Denver International Airport, which took over a year longer and cost several billion more.

When it comes to planning, we suffer from a major disparity between our expectations and reality. This article outlines the research behind why we screw up our predictions and gives three suggested techniques to suck less at planning.


The Mechanism:

So what’s going on in our heads when we make these predictions for planning?

On one level, we just don’t expect things to go wrong. Studies have found that we’re biased towards not looking at pessimistic scenarios [Newby-Clark et al., 2000]. We often just assume the best-case scenario when making plans.

Part of the reason may also be due to a memory bias. It seems that we might underestimate how long things take us, even in our memory [Roy, Christenfeld, and McKenzie 2005].

But by far the dominant theory in the field is the idea of an inside view and an outside view [Kahneman and Lovallo 1993]. The inside view is the information you have about your specific project (inside your head). The outside view is what someone else looking at your project (outside of the situation) might say.

Obviously you want to take the Outside View.


We seem to use inside view thinking when we make plans, and this leads to our optimistic predictions. Instead of thinking about all the things that might go wrong, we’re focused on how we can help our project go right.

Still, it’s the outside view that can give us better predictions. And it turns out we don’t even need to do any heavy-lifting in statistics to get better predictions. Just asking other people (from the outside) to predict your own performance, or even just walking through your task from a third-person point of view can improve your predictions [Buehler et al., 2010].

Basically, the difference in our predictions seems to depend on whether we’re looking at the problem in our heads (a first-person view) or outside our heads (a third-person view). Whether we’re the “actor” or the “observer” in our minds seems to be a key factor in our planning [Pronin and Ross 2006].

Debiasing Techniques:

I’ll be covering three ways to improve predictions: Murphyjitsu, Reference Class Forecasting (RCF), and Back-planning. In actuality, they’re all pretty much the same thing: all three techniques focus, on some level, on trying to get more of an outside view. So feel free to choose the one you think works best for you (or do all three).

For each technique, I’ll give an overview and cover the steps first and then end with the research that supports it. They might seem deceptively obvious, but do try to keep in mind that obvious advice can still be helpful!

(Remembering to breathe, for example, is obvious, but you should still do it anyway. If you don't want to suffocate.)



Murphyjitsu:

“Avoid Obvious Failures”

Almost as good as giving procrastination an ass-kicking.

The name Murphyjitsu comes from the infamous Murphy’s Law: “Anything that can go wrong, will go wrong.” The technique itself is from the Center for Applied Rationality (CFAR), and is designed for “bulletproofing your strategies and plans”.

Here are the basic steps:

  1. Figure out your goal. This is the thing you want to make plans to do.
  2. Write down which specific things you need to get done to make the thing happen. (Make a list.)
  3. Now imagine it’s one week (or month) later, and yet you somehow didn’t manage to get started on your goal. (The visualization part here is important.) Are you surprised?
  4. Why? (What went wrong that got in your way?)
  5. Now imagine you take steps to remove the obstacle from Step 4.
  6. Return to Step 3. Are you still surprised that you’d fail? If so, your plan is probably good enough. (Don’t fool yourself!)
  7. If failure still seems likely, go through Steps 3–6 a few more times until you “problem proof” your plan.
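Since the loop in Steps 3–7 is really just "keep patching until failure would surprise you," it can be sketched as a toy driver function. This is my own framing, not CFAR's, and every name below is made up for illustration:

```python
def murphyjitsu(plan, imagine_failure, patch):
    """Run the Murphyjitsu loop until imagined failure is surprising.

    imagine_failure(plan) -> an obstacle you can vividly picture
    derailing the plan (Steps 3-4), or None once failure would
    genuinely surprise you (Step 6).
    patch(plan, obstacle) -> a stronger plan with a fail-safe
    added for that obstacle (Step 5).
    """
    while (obstacle := imagine_failure(plan)) is not None:
        plan = patch(plan, obstacle)
    return plan

# Toy run: two obstacles come to mind, then failing would be surprising.
obstacles = iter(["forgot about it", "felt sore / deprioritized it"])
final = murphyjitsu(
    ["exercise daily"],
    lambda plan: next(obstacles, None),
    lambda plan, obstacle: plan + [f"fail-safe for: {obstacle}"],
)
print(final)
```

The real work, of course, happens in the visualization step that the `imagine_failure` stub stands in for.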

Murphyjitsu is based off a strategy called a “premortem”, or “prospective hindsight”, which basically means imagining that the project has already failed and “looking backwards” to see what went wrong [Klein 2007].

It turns out that putting ourselves in the future and looking back can help identify more risks, or see where things can go wrong. Prospective hindsight has been shown to increase our predictive power so we can make adjustments to our plans — before they fail [Mitchell et al., 1989, Veinott et al., 2010].

This seems to work well, even if we’re only using our intuitions. While that might seem a little weird at first (“aren’t our intuitions pretty arbitrary?”), research has shown that our intuitions can be a good source of information in situations where experience is helpful [Klein 1999; Kahneman 2011]*.

While a premortem is usually done on an organizational level, Murphyjitsu works for individuals. Still, it’s a useful way to “failure-proof” your plans before you start them that taps into the same internal mechanisms.

Here’s what Murphyjitsu looks like in action:

“First, let’s say I decide to exercise every day. That’ll be my goal (Step 1). But I should also be more specific than that, so it’s easier to tell what “exercising” means. So I decide that I want to go running on odd days for 30 minutes and do strength training on even days for 20 minutes. And I want to do them in the evenings (Step 2).

Now, let’s imagine that it’s now one week later, and I didn’t go exercising at all! What went wrong? (Step 3) The first thing that comes to mind is that I forgot to remind myself, and it just slipped out of my mind (Step 4). Well, what if I set some phone / email reminders? Is that good enough? (Step 5)

Once again, let’s imagine it’s one week later and I made a reminder. But let’s say I still didn’t go exercising. How surprising is this? (Back to Step 3) Hmm, I can see myself getting sore and/or putting other priorities before it…(Step 4). So maybe I’ll also set aside the same time every day, so I can’t easily weasel out (Step 5).

How do I feel now? (Back to Step 3) Well, if once again I imagine it’s one week later and I once again failed, I’d be pretty surprised. My plan has two levels of fail-safes and I do want to exercise anyway. Looks like it’s good! (Done)”

Reference Class Forecasting:

“Get Accurate Estimates”

Predicting the future…using the past!

Reference class forecasting (RCF) is all about using the outside view. Our inside views tend to be very optimistic: we see all the ways that things can go right, but none of the ways things can go wrong. By looking at past history — other people who have tried the same or similar thing as us — we can get a better idea of how long things will really take.

Here are the basic steps:

  1. Figure out what you want to do.
  2. Check your records for how long it took you last time.
  3. That’s your new prediction.
  4. If you don’t have past information, look for about how long it takes, on average, to do our thing. (This usually looks like Googling “average time to do X”.)**
  5. That’s your new prediction!

Technically, the actual process for reference class forecasting works a little differently. It involves a statistical distribution and some additional calculations, but for most everyday purposes, the above algorithm should work well enough.

In both cases, we’re trying to take an outside view, which we know improves our estimates [Buehler et al., 1994].

When you Google the average time or look at your own data, you’re forming a “reference class”, a group of related actions that can give you info about how long similar projects tend to take. Hence, the name “reference class forecasting”.

Basically, RCF works by looking only at results. This means that we can avoid any potential biases that might have cropped up if we were to think it through. We’re shortcutting right to the data. The rest of it is basic statistics; most people are close to average. So if we have an idea of what the average looks like, we can be sure we’ll be pretty close to average as well [Flyvbjerg 2006; Flyvbjerg 2008].

The main difference in our above algorithm from the standard one is that this one focuses on your own experiences, so the estimate you get tends to be more accurate than an average we’d get from an entire population.

For example, if it usually takes me about 3 hours to finish homework (I use Toggl to track my time), then I’ll predict that it will take me 3 hours today, too.

It’s obvious that RCF is incredibly simple. It literally just tells you that how long something will take you this time will be very close to how long it took you last time. But that doesn’t mean it’s ineffective! Often, the past is a good benchmark of future performance, and it’s far better than any naive prediction your brain might spit out.
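For everyday use, the algorithm boils down to summarizing your own records. Here's a minimal sketch; the function name is mine, not from any library:

```python
from statistics import median

def rcf_estimate(past_durations):
    """Predict a task's duration from a reference class of past
    durations (e.g. pulled from a time tracker like Toggl).
    The median is robust to the occasional marathon session."""
    if not past_durations:
        raise ValueError("no reference class; look up an average for similar tasks")
    return median(past_durations)

# Past homework sessions in hours -> today's prediction:
print(rcf_estimate([2.5, 3.0, 3.5, 3.0]))  # -> 3.0
```

Swap in the mean if you prefer, but a robust summary of your own history is the whole trick.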

RCF + Murphyjitsu Example:

For me, I’ve found that using a mixture of Reference Class Forecasting and Murphyjitsu to be helpful for reducing overconfidence in my plans.

When starting projects, I will often ask myself, “What were the reasons that I failed last time?” I then make a list of the first three or four “failure-modes” that I can recall. I now make plans to preemptively avoid those past errors.

(This can also be helpful in reverse — asking yourself, “How did I solve a similar difficult problem last time?” when facing a hard problem.)

Here’s an example:

“Say I’m writing a long post (like this one) and I want to know what might go wrong. I’ve done several of these sorts of primers before, so I have a “reference class” of data to draw from. So what were the major reasons I fell behind for those posts?

<Cue thinking>

Hmm, it looks like I would either forget about the project, get distracted, or lose motivation. Sometimes I’d want to do something else instead, or I wouldn’t be very focused.

Okay, great. Now what are some ways that I might be able to “patch” those problems?

Well, I can definitely start by making a priority list of my action items, so I know which things I want to finish first. I can also do short 5-minute planning sessions to make sure I’m actually writing. And I can do some more introspection to try and see what’s up with my motivation.”



Back-planning:

“Calibrate Your Intuitions with Reality”

Back-planning involves, as you might expect, planning from the end. Instead of thinking about where we start and how to move forward, we imagine we’re already at our goal and go backwards.

Time-travelling inside your internal universe.

Here are the steps:

  1. Figure out the task you want to get done.
  2. Imagine you’re at the end of your task.
  3. Now move backwards, step-by-step. What is the step right before you finish?
  4. Repeat Step 3 until you get to where you are now.
  5. Write down how long you think the task will now take you.
  6. You now have a detailed plan as well as better prediction!
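The steps above amount to walking a chain of prerequisites backwards from the goal and then flipping it. Here's a toy sketch using the talk example from later in this post; the prerequisite map is illustrative, not canonical:

```python
def back_plan(goal, prerequisite_of):
    """Walk backwards from the goal (Steps 2-4), then reverse the
    chain to get a forward plan.  prerequisite_of maps each step to
    the step that must happen right before it (None once you reach
    where you are now)."""
    steps = [goal]
    while (prev := prerequisite_of.get(steps[-1])) is not None:
        steps.append(prev)
    return list(reversed(steps))

talk = {
    "give the talk": "publicize for a week",
    "publicize for a week": "book a room",
    "book a room": "write the slides",
    "write the slides": None,  # starting point
}
print(back_plan("give the talk", talk))
```

The forward plan falls out for free, which is part of why back-planning doubles as a prediction aid.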

The experimental evidence on back-planning basically suggests that it leads people to predict longer (more realistic) times to start and finish projects.

There are a few interesting hypotheses about why back-planning seems to improve predictions. The general gist of these theories is that back-planning is a weird, counterintuitive way to think about things, which means it disrupts a lot of mental processes that can lead to overconfidence [Wiese et al., 2016].

This means that back-planning can make it harder to fall into the groove of the easy “best-case” planning we default to. Instead, we need to actually look at where things might go wrong. Which is, of course, what we want.

In my own experience, I’ve found that going through a quick back-planning session can help my intuitions “warm up” to my prediction more. As in, I’ll get an estimation from RCF, but it still feels “off”. Walking through the plan through back-planning can help all the parts of me understand that it really will probably take longer.

Here’s the back-planning example:

“Right now, I want to host a talk at my school. I know that’s the end goal (Step 1). So the end goal is me actually finishing the talk and taking questions (Step 2). What happens right before that? (Step 3). Well, people would need to actually be in the room. And I would have needed a room.

Is that all? (Step 3). Also, for people to show up, I would have needed publicity. Probably also something on social media. I’d need to publicize at least a week in advance, or else it won’t be common knowledge.

And what about the actual talk? I would have needed slides, maybe memorize my talk. Also, I’d need to figure out what my talk is actually going to be on.

Huh, thinking it through like this, I’d need something like 3 weeks to get it done. One week for the actual slides, one week for publicity (at least), and one week for everything else that might go wrong.

That feels more ‘right’ than my initial estimate of ‘I can do this by next week.’”


Experimental Ideas:

Murphyjitsu, Reference Class Forecasting, and Back-planning are the three debiasing techniques that I’m fairly confident work well. This section is far more anecdotal. They’re ideas that I think are useful and interesting, but I don’t have much formal backing for them.

Decouple Predictions From Wishes:

In my own experience, I often find it hard to separate when I want to finish a task versus when I actually think I will finish a task. This is a simple distinction to keep in mind when making predictions, and I think it can help decrease optimism. The most important number, after all, is when I actually think I will finish—it’s what’ll most likely actually happen.

There’s some evidence suggesting that “wishful thinking” could actually be responsible for some poor estimates, but it’s far from definitive [Buehler et al., 1997; Krizan and Windschitl, 2009].

Incentivize Correct Predictions:

Lately, I’ve been using a 4-column chart for my work. I write down the task in Column 1 and how long I think it will take me in Column 2. Then I go and do the task. After I’m done, I write down how long it actually took me in Column 3. Column 4 is the absolute value of Column 2 minus Column 3, or my “calibration score”.

The idea is to minimize my score every day. It’s simple and it’s helped me get a better sense for how long things really take.
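That chart can be sketched in a few lines; this is my own formulation of it, with each entry holding Columns 1–3 and the Column 4 score computed:

```python
def calibration_scores(entries):
    """entries: (task, predicted_minutes, actual_minutes) rows.
    Adds Column 4: the calibration score |predicted - actual|."""
    return [(task, pred, actual, abs(pred - actual))
            for task, pred, actual in entries]

# One day's log (numbers are made up for illustration):
day = [("write report", 60, 95), ("answer emails", 30, 20)]
rows = calibration_scores(day)
daily_score = sum(row[3] for row in rows)
print(daily_score)  # -> 45 (35 + 10)
```

Minimizing the daily total rewards accuracy in both directions, unlike padding every estimate.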

Plan For Failure:

In my schedules, I specifically write in “distraction time”. If you aren’t already doing this, you may want to consider it. Most of us (me included) have wandering attention, and I know I’ll lose at least some time to silly things every day.

Double Your Estimate:

I get it. The three debiasing techniques I outlined above can sometimes take too long. In a pinch, you can probably approximate good predictions by just doubling your naive prediction.

Most people tend to be less than 2X overconfident, but I think (pessimistically) sticking to doubling is probably still better than something like 1.5X.


Working in Groups:

Obviously, because groups are made of individuals, we’d expect them to be susceptible to the same overconfidence biases I covered earlier. Though some research has shown that groups are less susceptible to bias, more studies have shown that group predictions can be far more optimistic than individual predictions [Wright and Wells, 1985; Buehler et al., 2010]. “Groupthink” is a term used to describe the observed failings of decision making in groups [Janis].

Groupthink (and hopefully also overconfidence), can be countered by either assigning a “Devil’s Advocate” or engaging in “dialectical inquiry” [Lunenburg 2012]:


A Devil’s Advocate is a person who is actively trying to find fault with the group’s plans, looking for holes in reasoning or other objections. It’s suggested that the role rotates, and it’s associated with other positives like improved communication skills.

A dialectical inquiry is where multiple teams try to create the best plan and then present them. Discussion then happens, and the group selects the best parts of each plan. It’s a little like building something awesome out of lots of pieces, like a giant robot.

This is absolutely how dialectical inquiry works in practice.

For both strategies, research has shown that they lead to “higher-quality recommendations and assumptions” (compared to not doing them), although it can also reduce group satisfaction and acceptance of the final decision [Schweiger et al. 1986].

(Pretty obvious though; who’d want to keep chatting with someone hell-bent on poking holes in your plan?)



If you’re interested in learning (even) more about the planning fallacy, I’d highly recommend the paper The Planning Fallacy: Cognitive, Motivational, and Social Origins by Roger Buehler, Dale Griffin, and Johanna Peetz. Most of the material in this guide was taken from their paper. Do go check it out! It’s free!

Remember that everyone is overconfident (you and me included!), and that failing to plan is the norm. There are scary unknown unknowns out there that we just don’t know about!

Good luck and happy planning!



* Just don’t go and start buying lottery tickets with your gut. We’re talking about fairly “normal” things like catching a ball, where your intuitions give you accurate predictions about where the ball will land. (Instead of, say, calculating the actual projectile motion equation in your head.)

** In a pinch, you can just use your memory, but studies have shown that our memory tends to be biased too. So as often as possible, try to use actual measurements and numbers from past experience.

Works Cited:

Buehler, Roger, Dale Griffin, and Johanna Peetz. "The Planning Fallacy: Cognitive, Motivational, and Social Origins." Advances in Experimental Social Psychology 43 (2010): 1-62. Social Science Research Network.

Buehler, Roger, Dale Griffin, and Michael Ross. "Exploring the Planning Fallacy: Why People Underestimate their Task Completion Times." Journal of Personality and Social Psychology 67.3 (1994): 366.

Buehler, Roger, Dale Griffin, and Heather MacDonald. "The Role of Motivated Reasoning in Optimistic Time Predictions." Personality and Social Psychology Bulletin 23.3 (1997): 238-247.

Buehler, Roger, Dale Griffin, and Michael Ross. "It's About Time: Optimistic Predictions in Work and Love." European Review of Social Psychology 6 (1995): 1-32.

Buehler, Roger, et al. "Perspectives on Prediction: Does Third-Person Imagery Improve Task Completion Estimates?" Organizational Behavior and Human Decision Processes 117.1 (2012): 138-149.

Buehler, Roger, Dale Griffin, and Michael Ross. "Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions." Heuristics and Biases: The Psychology of Intuitive Judgment (2002): 250-270.

Buehler, Roger, and Dale Griffin. "Planning, Personality, and Prediction: The Role of Future Focus in Optimistic Time Predictions." Organizational Behavior and Human Decision Processes 92 (2003): 80-90.

Flyvbjerg, Bent. "From Nobel Prize to Project Management: Getting Risks Right." Project Management Journal 37.3 (2006): 5-15. Social Science Research Network.

Flyvbjerg, Bent. "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice." European Planning Studies 16.1 (2008): 3-21.

Janis, Irving Lester. Groupthink: Psychological Studies of Policy Decisions and Fiascoes.

Johnson, Dominic DP, and James H. Fowler. "The Evolution of Overconfidence." Nature 477.7364 (2011): 317-320.

Kahneman, Daniel. Thinking, Fast and Slow. Macmillan, 2011.

Kahneman, Daniel, and Dan Lovallo. "Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking." Management Science 39.1 (1993): 17-31.

Klein, Gary. Sources of Power: How People Make Decisions. MIT Press, 1999.

Klein, Gary. "Performing a Project Premortem." Harvard Business Review 85.9 (2007): 18-19.

Krizan, Zlatan, and Paul D. Windschitl. "Wishful Thinking About the Future: Does Desire Impact Optimism?" Social and Personality Psychology Compass 3.3 (2009): 227-243.

Lunenburg, F. "Devil's Advocacy and Dialectical Inquiry: Antidotes to Groupthink." International Journal of Scholarly Academic Intellectual Diversity 14 (2012): 1-9.

Mitchell, Deborah J., J. Edward Russo, and Nancy Pennington. "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making 2.1 (1989): 25-38.

Newby-Clark, Ian R., et al. "People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times." Journal of Experimental Psychology: Applied 6.3 (2000): 171.

Pronin, Emily, and Lee Ross. "Temporal Differences in Trait Self-Ascription: When the Self is Seen as an Other." Journal of Personality and Social Psychology 90.2 (2006): 197.

Roy, Michael M., Nicholas J. S. Christenfeld, and Craig R. M. McKenzie. "Underestimating the Duration of Future Events: Memory Incorrectly Used or Memory Bias?" Psychological Bulletin 131.5 (2005): 738.

Schweiger, David M., William R. Sandberg, and James W. Ragan. "Group Approaches for Improving Strategic Decision Making: A Comparative Analysis of Dialectical Inquiry, Devil's Advocacy, and Consensus." Academy of Management Journal 29.1 (1986): 51-71.

Veinott, Beth, Gary Klein, and Sterling Wiggins. "Evaluating the Effectiveness of the Premortem Technique on Plan Confidence." Proceedings of the 7th International ISCRAM Conference (May 2010).

Wiese, Jessica, Roger Buehler, and Dale Griffin. "Backward Planning: Effects of Planning Direction on Predictions of Task Completion Time." Judgment and Decision Making 11.2 (2016): 147.

Wright, Edward F., and Gary L. Wells. "Does Group Discussion Attenuate the Dispositional Bias?" Journal of Applied Social Psychology 15.6 (1985): 531-546.

Instrumental Rationality: Overriding Defaults

2 lifelonglearner 20 January 2017 05:14AM

[I'd previously posted this essay as a link. From now on, I'll be cross-posting blog posts here instead of linking them, to keep the discussions LW central. This is the first in an in-progress of sequence of articles that'll focus on identifying instrumental rationality techniques and cataloging my attempt to integrate them into my life with examples and insight from habit research.]

[Epistemic Status: Pretty sure. The stuff on habits being situation-response links seems fairly robust. I'll be writing something later with the actual research. I'm basically just retooling existing theory into an optimizational framework for improving life.]


I’m interested in how rationality can help us make better decisions.

Many of these decisions seem to involve split-second choices where it’s hard to sit down and search a handbook for the relevant bits of information—you want to quickly react in the correct way, else the moment passes and you’ve lost. On a very general level, it seems to be about reacting in the right way once the situation provides a cue.

Consider these situation-reaction pairs:

  • You are having an argument with someone. As you begin to notice the signs of yourself getting heated, you remember to calm down and talk civilly. Maybe also some deep breaths.
  • You are giving yourself a deadline or making a schedule for a task, and you write down the time you expect to finish. Quickly, though, you remember to actually check if it took you that long last time, and you adjust accordingly.
  • You feel yourself slipping towards doing something some part of you doesn’t want to do. Say you are reneging on a previous commitment. As you give in to temptation, you remember to pause and really let the two sides of yourself communicate.
  • You think about doing something, but you feel aversive / flinch-y towards it. As you shy away from the mental pain, rather than just quickly thinking about something else, you also feel curious as to why you feel that way. You query your brain and try to pick apart the “ugh” feeling.

Two things seem key to the above scenarios:

One, each situation above involves taking an action that is different from our keyed-in defaults.

Two, the situation-reaction pair paradigm is pretty much CFAR’s Trigger Action Plan (TAP) model, paired with a multi-step plan.

Also, knowing about biases isn’t enough to make good decisions. Even memorizing a mantra like “Notice signs of aversion and query them!” probably isn’t going to be clear enough to be translated into something actionable. It sounds nice enough on the conceptual level, but when, in the moment, you remember such a mantra, you still need to figure out how to “notice signs of aversion and query them”.

What we want is a series of explicit steps that turn the abstract mantra into small, actionable steps. Then, we want to quickly deploy the steps at the first sign of the situation we’re looking out for, like a new cached response.
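To make that concrete, here's a toy sketch of a trigger mapped to small, explicit steps that can be rehearsed as a unit. The cue and steps below are illustrative examples of my own, not official CFAR content:

```python
# A trigger-action plan: a concrete cue mapped to explicit steps,
# instead of an abstract mantra like "query your aversions!"
TAPS = {
    "flinching away from a task": [
        "pause instead of switching to another thought",
        "name the specific part that feels bad",
        "ask what the aversion is trying to protect",
    ],
}

def on_cue(cue):
    """Retrieve the rehearsed steps for a noticed cue."""
    return TAPS.get(cue, [])

print(on_cue("flinching away from a task"))
```

The point of the breakdown is that each step is small enough to execute in the moment, with no interpretation required.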

This looks like a problem that a combination of focused habit-building and a breakdown of the 5-second level can help solve.

In short, the goal looks to be to combine triggers with clear algorithms to quickly optimize in the moment. Reference class information from habit studies can also help give good estimates of how long the whole process will take to internalize (on average 66 days, according to Lally et al.).

But these Trigger Action Plans don’t seem to directly cover the willpower-related problems with akrasia.

Sure, TAPs can help alert you to the presence of an internal problem, like in the above example where you notice aversion. And the actual internal conversation can probably be operationalized to some extent, like how CFAR has described the process of Double Crux.

But most of the Overriding Default Habit actions seem to be ones I’d be happy to do anytime—I just need a reminder—whereas akrasia-related problems are centrally related to me trying to debug my motivational system. For that reason, I think it helps to separate the two. Also, it makes the outside-seeming TAP algorithms complementary, rather than at odds, with the inside-seeming internal debugging techniques.

Loosely speaking, then, I think it still makes quite a bit of sense to divide the things rationality helps with into two categories:

  • Overriding Default Habits:

These are the situation-reaction pairs I’ve covered above. Here, you’re substituting a modified action instead of your “default action”. But the cue serves as mainly a reminder/trigger. It’s less about diagnosing internal disagreement.

  • Akrasia / Willpower Problems:

Here we’re talking about problems that might require you to precommit (although precommitment might not be all you need to do), perhaps because of decision instability. The “action-intention gap” caused by akrasia, where you (sort of) want to something but you don’t want to also goes in here.

Still, it’s easy to point to lots of other things that fall in the bounds of rationality that my approach doesn’t cover: epistemology, meta-levels, VNM rationality, and many other concepts are conspicuously absent. Part of this is because I’ve been focusing on instrumental rationality, while a lot of those ideas are more in the epistemic camp.

Ideas like meta-levels do seem to have some place in informing other ideas and skills. Even as declarative knowledge, they do chain together in a way that results in useful real world heuristics.  Meta-levels, for example, can help you keep track of the ultimate direction in a conversation. Then, it can help you table conversations that don’t seem immediately useful/relevant and not get sucked into the object-level discussion.

At some point, useful information about how the world works should actually help you make better decisions in the real world. For an especially pragmatic approach, it may be useful to ask yourself, each time you learn something new, “What do I see myself doing as a result of learning this information?”

There’s definitely more to mine from the related fields of learning theory, habits, and debiasing, but I think I’ll have more than enough skills to practice if I just focus on the immediately practical ones.



Jocko Podcast

9 moridinamael 06 September 2016 03:38PM

I've recently been extracting extraordinary value from the Jocko Podcast.

Jocko Willink is a retired Navy SEAL commander, jiu-jitsu black belt, management consultant and, in my opinion, master rationalist. His podcast typically consists of detailed analysis of some book on military history or strategy followed by a hands-on Q&A session. Last week's episode (#38) was particularly good and if you want to just dive in, I would start there.

As a sales pitch, I'll briefly describe some of his recurring talking points:

  • Extreme ownership. Take ownership of all outcomes. If your superior gave you "bad orders", you should have challenged the orders or adapted them better to the situation; if your subordinates failed to carry out a task, then it is your own instructions to them that were insufficient. If the failure is entirely your own, admit your mistake and humbly open yourself to feedback. By taking on this attitude you become a better leader and through modeling you promote greater ownership throughout your organization. I don't think I have to point out the similarities between this and "Heroic Morality" we talk about around here.
  • Mental toughness and discipline. Jocko's language around this topic is particularly refreshing, speaking as someone who has spent too much time around "self help" literature, in which I would partly include Less Wrong. His ideas are not particularly new, but it is valuable to have an example of somebody who reliably executes on the philosophy of "Decide to do it, then do it." If you find that you didn't do it, then you didn't truly decide to do it. In any case, your own choice or lack thereof is the only factor. "Discipline is freedom." If you adopt this habit as your reality, it becomes true.
  • Decentralized command. This refers specifically to his leadership philosophy. Every subordinate needs to truly understand the leader's intent in order to execute instructions in a creative and adaptable way. Individuals within a structure need to understand the high-level goals well enough to be able to act in almost all situations without consulting their superiors. This tightens the OODA loop on an organizational level.
  • Leadership as manipulation. Perhaps the greatest surprise to me was the subtlety of Jocko's thinking about leadership, probably because I brought in many erroneous assumptions about the nature of a SEAL commander. Jocko talks constantly about using self-awareness, detachment from one's ideas, control of one's own emotions, awareness of how one is perceived, and perspective-taking of one's subordinates and superiors. He comes off more as HPMOR!Quirrell than as a "drill sergeant".

The Q&A sessions, in which he answers questions asked by his fans on Twitter, tend to be very valuable. It's one thing to read the bullet points above, nod your head and say, "That sounds good." It's another to have Jocko walk through the tactical implementation of these ideas in a wide variety of daily situations, ranging from parenting difficulties to office misunderstandings.

For a taste of Jocko, maybe start with his appearance on the Tim Ferriss podcast or the Sam Harris podcast.

Update to the list of apps that are useful to me

6 Elo 08 April 2016 01:02AM

On 22 August 2015, I wrote a list of useful apps; in the comments were a number of suggestions, which I immediately tried.  This is an update.  The original can be found at this link:


I rewrite the whole list below.  

But first - my recommended list in short:


  • Get an external battery block (and own more than enough spare power cables)
  • Wunderlist
  • Ingress
  • How are you feeling?
  • Alarm clock plus
  • Twilight
  • Business calendar
  • Clipper
  • Rain alarm
  • Data monitor
  • Rescuetime
  • Powercalc
  • Es File viewer
  • WheresmyDroid?
  • Google Docs/sheets etc.
  • (possibly pushbullet and DTG GTD but I have not had them for long enough)

The bold are the top selections, but I would encourage everyone to have all the apps in the above list.

The environment
The overlays
The normals:
Quantified apps:
Twilight - Does a better job and can filter red light as well as brightness.
Not used:


Timestamp Widget. - on clicking to open it - it logs a timestamp.  Can include notes too.

Wunderlist - Recommend it - for shared shopping lists, or any kind of list of things to do.  It's not perfect but it works.

T2 mood tracker - as a second backup to my other mood tracker.  This one takes more effort to do so I only enter the data every few days.  YMMV it might be useful to you.

HOVBX - an overlay for google hangouts that sits on top of the call buttons so you don't accidentally call people (useful for groups who butt-dial each other)

Fleksy - A different keyboard - it seems faster but I am used to swiftkey so I don't use this one.

Tagtime - useful to try.  reminds you hourly or so to tag what you are currently working on.  I used it for a while to help keep me on track.  I noticed I was significantly off track and eventually stopped using it because I felt bad about it.  I feel like I spend more time on-task now but because I want to.  This was a step in the journey of deciding to do that.

Alarm clock plus - it's the best alarm clock app.  I don't use alarms often but this one does everything.

Squats/Push ups/sit ups/pull ups - Rittr Labs - good at a simple exercise routine.  Just tells you what to do.  Designed to get you from zero to "up to N" of an exercise (250 or 100), so it gives you instruction on how many to do each day.  Worth trying.  Didn't work for me, but for other reasons about my lifestyle.

Twilight - mentioned above, replaces night mode and does what f.lux does on a PC (filters the screen to be less blue at night)

World clock - started talking to people in different time zones and this was handy.

CPU-Z - lists out all the phone's sensors and tells you their outputs.  cool for looking at gyroscopes/accelerometers.

Coffee meets bagel - dating app.  One profile per day, accept/reject.  Has a different feel to tinder

Bumble - US only; Like Tinder but the girl has to message you first or the connection disappears.

Business Calendar - Best calendar I have found so far

Clipper - Clipboard app for holding the last 20 or so things you have copied.  Also for showing you what's currently copied.

Pixlr - photo editor.  It's a good one, don't use it often

Rain Alarm - Very good app.  Tells you if it's raining anywhere nearby.  Can be enough to tell you "I should walk home sooner" but also just interesting to have a bit more awareness of your environment.

Audio Scope - Cool science app for viewing the audio scope

Spectrum analyze - Cool science app for viewing the audio spectrum

Frequensee - Fun science app for viewing audio spectrum data

PitchLab lite - Neat for understanding pitch when singing or listening to musical notes.  Another science-visualisation app

Spectralview analyser - another spectrum analyser

Pulsepoint AED - Initiative to gather a public map of all AEDs worldwide.  To help, get the app and check the details of nearby AEDs.

FBreader - Ebook reader.  Pretty good, can control brightness and font size.

KIK - Social app like whatsapp/viber etc.  Don't use it yet, got it on a recommendation.

Wildwilderness - Reporting app for if you see suspicious wildlife trade going on anywhere in the world.  Can report anonymously, any details help.  

DGT GTD - Newly suggested by LW, have not tried to use it yet

Pushbullet - Syncs phone notifications with your PC so you can access things via PC.

I have noticed I often wish, "Damn, I wish someone had made an app for that," and when I search for it I can't find it.  Then I outsource the search to facebook or other people, and they can usually say: yes, it's called X.  I put this down to my inability to know how to search for an app, more than anything else.

With that in mind; I wanted to solve the problem of finding apps for other people.

The following is a list of apps that I find useful (and use often) for productive reasons:

This list is long.  The most valuable ones are the top section that I use regularly.  

Other things to mention:

Internal storage - I have a large internal memory card because I knew I would need lots of space.  So I played the "out of sight out of mind" game and tried to give myself as much space as possible by buying a large internal card.  The future of phones is to skip the microSD card and just use internal storage.  I was taking 1000 photos a month, but since having storage troubles and my phone slowing down I barely take one photo a day.  I would like to change that and will probably make it a future bug of mine to solve.

Battery - I use Anker external battery blocks to save myself the trouble of worrying about batteries.  If prepared, I leave my house with 2 days of phone charge (at 100% use).  I used to count "wins" on days I beat my phone battery (stayed awake longer than it did), but they are few and far between.  Also, I doubled my external battery power, so it sits at two days, not one (28000mA + 2*460mA spare phone batteries). This is still true, but those batteries don't do what they used to.  Anker have excellent service and refunded the battery that did not stay strong.  I would recommend a power block to all phone users.  Phones just are not made with enough battery.

Phone - I have a Samsung S4 (Android, running KitKat) because it has a few features I found useful that were not found in many other phones: cheap, removable battery, external storage card, replaceable case. I am now on Lollipop, and have made use of the external antenna port in a particularly bad low-signal location.

Screen cover - I am still using the one that came with the phone

I carry a spare phone case, in the beginning I used to go through one each month; now I have a harder case than before it hasn't broken. I change phone case colours for aesthetics every few months.

I also have swapped out the plastic frame that holds the phone case on as these broke, it was a few dollars on ebay and I needed a teeny screwdriver but other than that it works great now!

MicroUSB cables - I went through a lot of effort to sort this out; it's still not sorted, but it's "okay for now".  The advice I have: buy several good cables (read online reviews), test them wherever possible, and realise that they die.  Also carry a spare or two.  I have now spent far too much time on this problem.  I am at the end of my phone's life and the MicroUSB port is dying; I have replaced it with a new one which is also not great, and I now leave my phone plugged into its MicroUSB cable.  I now use Anker brand cables, which are excellent, but my phone still kills one every few weeks.  The whole idea of the MicroUSB plug is awful.  They don't work very well at all.

Restart - I restart my phone probably most days when it gets slow.  It's got programming bugs, but this solution works for now.

These sit on my screen all the time.

Data monitor - Gives an overview of bits per second upload or download. updated every second. ✓

CpuTemp - Gives an overlay of the current core temperature.  My phone is always hot, I run it hard with bluetooth, GPS and wifi blaring all the time.  I also have a lot of active apps. ✓

M̶i̶n̶d̶f̶u̶l̶n̶e̶s̶s̶ ̶b̶e̶l̶l̶ ̶-̶ ̶M̶y̶ ̶p̶h̶o̶n̶e̶ ̶m̶a̶k̶e̶s̶ ̶a̶ ̶c̶h̶i̶m̶e̶ ̶e̶v̶e̶r̶y̶ ̶h̶a̶l̶f̶ ̶h̶o̶u̶r̶ ̶t̶o̶ ̶r̶e̶m̶i̶n̶d̶ ̶m̶e̶ ̶t̶o̶ ̶c̶h̶e̶c̶k̶,̶ ̶"̶A̶m̶ ̶I̶ ̶d̶o̶i̶n̶g̶ ̶s̶o̶m̶e̶t̶h̶i̶n̶g̶ ̶o̶f̶ ̶h̶i̶g̶h̶-̶v̶a̶l̶u̶e̶ ̶r̶i̶g̶h̶t̶ ̶n̶o̶w̶?̶"̶ ̶i̶t̶ ̶s̶o̶m̶e̶t̶i̶m̶e̶s̶ ̶s̶t̶o̶p̶s̶ ̶m̶e̶ ̶f̶r̶o̶m̶ ̶d̶o̶i̶n̶g̶ ̶c̶r̶a̶p̶ ̶t̶h̶i̶n̶g̶s̶.̶ Wow that didn't last.  It was so annoying that I stopped using it.

Facebook chat heads - I often have them open, they have memory leaks and start slowing down my phone after a while, I close and reopen them when I care enough.✓ memory leaks improved but are still there.


Facebook - communicate with people.  I do this a lot.✓

Inkpad - it's a note-taking app, but not an exceptionally great one; open to a better suggestion.✓

Ingress - it makes me walk; it gave me friends; it put me in a community.  Downside is that it takes up more time than you want to give it.  It's a mobile GPS game.  Join the Resistance. Highly recommend

Maps (google maps) - I use this most days; mostly for traffic assistance to places that I know how to get to.✓

Camera - I take about 1000 photos a month.  Generic phone-app one. I take significantly fewer photos now; my phone slowed down, so the activation energy for *open the camera* is higher.  I plan to try to fix this soon

Assistive light - Generic torch app (widget) I use this daily.✓


Hello - SMS app.  I don't like it, but it's marginally better than the native one.✓

S̶u̶n̶r̶i̶s̶e̶ ̶c̶a̶l̶e̶n̶d̶a̶r̶ ̶-̶ ̶I̶ ̶d̶o̶n̶'̶t̶ ̶l̶i̶k̶e̶ ̶t̶h̶e̶ ̶n̶a̶t̶i̶v̶e̶ ̶c̶a̶l̶e̶n̶d̶a̶r̶;̶ ̶I̶ ̶d̶o̶n̶'̶t̶ ̶l̶i̶k̶e̶ ̶t̶h̶i̶s̶ ̶o̶r̶ ̶a̶n̶y̶ ̶o̶t̶h̶e̶r̶ ̶c̶a̶l̶e̶n̶d̶a̶r̶.̶ ̶ ̶T̶h̶i̶s̶ ̶i̶s̶ ̶t̶h̶e̶ ̶l̶e̶a̶s̶t̶ ̶b̶a̶d̶ ̶o̶n̶e̶ ̶I̶ ̶h̶a̶v̶e̶ ̶f̶o̶u̶n̶d̶.̶ ̶ ̶I̶ ̶h̶a̶v̶e̶ ̶a̶n̶ ̶a̶p̶p̶ ̶c̶a̶l̶l̶e̶d̶ ̶"̶f̶a̶c̶e̶b̶o̶o̶k̶ ̶s̶y̶n̶c̶"̶ ̶w̶h̶i̶c̶h̶ ̶h̶e̶l̶p̶s̶ ̶w̶i̶t̶h̶ ̶e̶n̶t̶e̶r̶i̶n̶g̶ ̶i̶n̶ ̶a̶ ̶f̶r̶a̶c̶t̶i̶o̶n̶ ̶o̶f̶ ̶t̶h̶e̶ ̶e̶v̶e̶n̶t̶s̶ ̶i̶n̶ ̶m̶y̶ ̶l̶i̶f̶e̶.̶

Business Calendar - works better, has a better interface than Sunrise.

Phone, address book, chrome browser.✓  I use tab sync, and recommend it for all your chrome-enabled devices.

GPS logger - I have a log of my current GPS location every 5 minutes.  If Google tracks me, I might as well track myself.  I don't use this data yet, but it's free for me to track, so if I can find a use for the historic data that will be a win. I don't make use of this data and can access my Google data just fine, so I might stop tracking this.


Fit - google fit; here for multiple redundancy✓

S Health - Samsung health - here for multiple redundancy✓

Fitbit - I wear a flex step tracker every day, and input my weight daily manually through this app✓

Basis - I wear a B1 watch, and track my sleep like a hawk.✓

Rescuetime - I track my hours on technology and wish it would give a better breakdown. (I also paid for their premium service)✓

Voice recorder - generic phone app; I record around 1-2 hours of things I do per week.  Would like to increase that. I now use this for one hour a month or less.

Narrative - I recently acquired a life-logging device called a Narrative, and don't really know how to best use the data it gives.  But it's a start. I tried using the device, but it has poor battery life.  I also received negative feedback when wearing it in casual settings, which increases the activation energy of using it.  I also can't seem to wear it at the right height and would regularly take photos of the tops of people's heads.  I would come home with a photo a minute for a day (and have the battery die on it a few times) and have one useable photo in the lot.  Significantly lower than I was expecting.

How are you feeling? - Mood tracking app - this one is broken but the best one I have found: it doesn't seem to reopen itself after a phone restart, so it won't remind you to enter a current mood.  I use a widget so that I can enter the mood quickly.  The best parts of this app are the way it lets you zoom out, and having a 10-point scale.  I used to write a quick sentence about what I was feeling, but that took too much time so I stopped doing it. Highly recommend I use this every day.

Stopwatch - "hybrid stopwatch" - about once a week I time something and my phone didn't have a native one.  This app is good at being a stopwatch.✓

Callinspector - tracks ingoing or outgoing calls and gives summaries of things like who you most frequently call, how much data you use, etc.  Can also set data limits. I don't do anything with this data, so I think I will stop using it and save my phone's battery life.


Powercalc - the best calculator app I could find ✓

N̶i̶g̶h̶t̶ ̶m̶o̶d̶e̶ ̶-̶ ̶f̶o̶r̶ ̶s̶a̶v̶i̶n̶g̶ ̶b̶a̶t̶t̶e̶r̶ ̶(̶i̶t̶ ̶d̶i̶m̶s̶ ̶y̶o̶u̶r̶ ̶s̶c̶r̶e̶e̶n̶)̶,̶ ̶I̶ ̶d̶o̶n̶'̶t̶ ̶u̶s̶e̶ ̶t̶h̶i̶s̶ ̶o̶f̶t̶e̶n̶ ̶b̶u̶t̶ ̶i̶t̶ ̶i̶s̶ ̶g̶o̶o̶d̶ ̶a̶t̶ ̶w̶h̶a̶t̶ ̶i̶t̶ ̶d̶o̶e̶s̶.̶ ̶ ̶I̶ ̶w̶o̶u̶l̶d̶ ̶c̶o̶n̶s̶i̶d̶e̶r̶ ̶a̶n̶ ̶a̶p̶p̶ ̶t̶h̶a̶t̶ ̶d̶i̶m̶s̶ ̶t̶h̶e̶ ̶b̶l̶u̶e̶ ̶l̶i̶g̶h̶t̶ ̶e̶m̶i̶t̶t̶e̶d̶ ̶f̶r̶o̶m̶ ̶m̶y̶ ̶s̶c̶r̶e̶e̶n̶;̶ ̶h̶o̶w̶e̶v̶e̶r̶ ̶I̶ ̶d̶o̶n̶'̶t̶ ̶n̶o̶t̶i̶c̶e̶ ̶a̶n̶y̶ ̶n̶e̶g̶a̶t̶i̶v̶e̶ ̶s̶l̶e̶e̶p̶ ̶e̶f̶f̶e̶c̶t̶s̶ ̶s̶o̶ ̶I̶ ̶h̶a̶v̶e̶ ̶b̶e̶e̶n̶ ̶p̶u̶t̶t̶i̶n̶g̶ ̶o̶f̶f̶ ̶g̶e̶t̶t̶i̶n̶g̶ ̶a̶r̶o̶u̶n̶d̶ ̶t̶o̶ ̶i̶t̶.̶ ̶

Advanced signal status - about once a month I am in a place with low phone signal - this one makes me feel better about knowing more details of what that means.✓

Ebay - To be able to buy those $5 solutions to problems on the spot is probably worth more than $5 of "impulse purchases" that they might be classified as.✓

C̶a̶l̶ ̶-̶ ̶a̶n̶o̶t̶h̶e̶r̶ ̶c̶a̶l̶e̶n̶d̶a̶r̶ ̶a̶p̶p̶ ̶t̶h̶a̶t̶ ̶s̶o̶m̶e̶t̶i̶m̶e̶s̶ ̶c̶a̶t̶c̶h̶e̶s̶ ̶e̶v̶e̶n̶t̶s̶ ̶t̶h̶a̶t̶ ̶t̶h̶e̶ ̶f̶i̶r̶s̶t̶ ̶o̶n̶e̶ ̶m̶i̶s̶s̶e̶s̶.̶ Nope just using business calendar now.

ES file explorer - for searching the guts of my phone for files that are annoying to find.  Not as used or as useful as I thought it would be but still useful.✓

Maps.Me - I went on an exploring adventure to places without signal; so I needed an offline mapping system.  This map saved my life.✓ Have not used this since then, but I will not delete it.

Wikipedia - information lookup✓

Youtube - don't use it often, but its there.✓

How are you feeling? (again) - I have this in multiple places to make it as easy as possible for me to enter in this data✓

Play store - Makes it easy to find.✓

Gallery - I take a lot of photos, but this is the native gallery and I could use a better app.✓


In no particular order;

F̶a̶c̶e̶b̶o̶o̶k̶ ̶g̶r̶o̶u̶p̶s̶ was so annoying I got rid of it, Yahoo Mail, Skype, Facebook Messenger chat heads, Whatsapp, meetup, google+, Hangouts, Slack, Viber, OKcupid, Gmail, Tinder, Chatango, CoffeeMeetsBagel, Signal.  Of which I use very little.

They do social things.  

I don't really use:  Viber, OKC, Gmail, Tinder, Chatango, CMB, Signal, whatsapp, G+.

I use: Slack, Facebook messenger, yahoo mail  every day.

(ticks here mean they are still in this category and are not used)




snapchat Deleted.

AnkiDroid - Anki memoriser app for a phone. ✓

MyFitnessPal - looks like a really good app, have not used it ✓

Fitocracy - looked good✓

I got these apps for a reason; but don't use them.


Not on my front pages:

These I don't use as often; or have not moved to my front pages (skipping the ones I didn't install or don't use)

S memo - samsung note taking thing, I rarely use, but do use once a month or so.✓

Drive, Docs, Sheets - The google package.  It's terrible to interact with documents on your phone, but I still sometimes access things from my phone.✓ Useful for viewing, not effective for editing.

bubble - I don't think I have ever used this. Deleted

Compass pro - gives extra details about direction. I never use it. Deleted


(ingress apps) Glypher, Agentstats, integrated timer, cram, notify Don't use them, but still there

TripView (public transport app for my city) Deleted

Convertpad - converts numbers to other numbers. Sometimes quicker than a google search.✓

ABC Iview - National TV broadcasting channel app.  Every program on this channel is uploaded to this app, I have used it once to watch a documentary since I got the app. Deleted

AnkiDroid - I don't need to memorise information in the way it is intended to be used; so I don't use it. Cram is also a flashcard app but I don't use it. Not used

First aid - I know my first aid but I have it anyway for the marginal loss of 50mb of space. Still haven't used it once.

Triangle scanner - I can scan details from NFC chips sometimes. Still haven't used it once.

MX player - does videos better than native apps. Rarely used

Zarchiver - Iunno.  Does something.  Rarely used

Pandora - Never used Deleted

Soundcloud - used once every two months, some of my friends post music online.  Deleted - They have a web interface.

Barcode scanner - never used

Diskusage - Very useful.  Visualises where data is being taken up on your phone, helps when trying to free up space.✓

Swiftkey - Better than native keyboards.  Gives more freedom, I wanted a keyboard with black background and pale keys, swiftkey has it.✓

Google calendar - don't use it, but its there to try to use.✓

Sleepbot - doesn't seem to work with my phone, also I track with other methods, and I forget to turn it on; so its entirely not useful in my life for sleep tracking. Deleted

My service provider's app.

AdobeAcrobat - use often; not via the icon though. ✓

Wheresmydroid? - seems good to have; never used.  My phone is attached to me too well for me to lose it often.  I have it open most of the waking day maybe. ✓ I actually set this up and tested if it worked.  It doesn't work from install, needs an account (which I now have) make sure you actually have an account

Uber - I don't use ubers. Deleted

Terminal emulator, AIDE, PdDroid party, Processing Android, An editor for processing, processing reference, learn C++ - programming apps for my phone, I don't use them, and I don't program much. Deleted some to make space on my phone.

Airbnb - Have not used yet, done a few searches for estimating prices of things. Deleted - Web interface better.

Heart rate - measures your heart rate using the camera/flash.  Neat, not useful other than showing off to people how its possible to do. ✓

Basis - (B1 app), - has less info available than their new app. ✓

BPM counter - Neat if you care about what a "BPM" is for music.  Don't use often. ✓

Sketch guru - fun to play with, draws things. ✓

DJ studio 5 - I did a dj thing for a friend once, used my phone.  was good. ✓

Facebook calendar Sync - as the name says. ✓

Dual N-back - I Don't use it.  I don't think it has value giving properties. Deleted

Awesome calendar - I don't use it, but it comes with good recommendations. Deleted Use Business Calendar now.

Battery monitor 3 - Makes a graph of temperature and frequency of the cores.  Useful to see a few times.  Eventually its a bell curve. ✓

urbanspoon - local food places app. ✓use google mostly now.

Gumtree - Australian Ebay (also ebay owns it now) ✓

Printer app to go with my printer ✓

Car Roadside assistance app to go with my insurance ✓

Virgin air entertainment app - you can use your phone while on the plane and download entertainment from their in-flight system. ✓

Two things now;

What am I missing? Was this useful?  Ask me to elaborate on any app and why I used it.  If I get time I will do that anyway. 

P.S. this took 1.5 hours to review and rewrite.

P.P.S - I was intending to make, keep and maintain a list of useful apps, that is not what this document is.  If there are enough suggestions that it's time to make and keep a list; I will do that.

My table of contents links to my other writings

Purposeful Anti-Rush

4 Elo 08 March 2016 07:34AM

Why do we rush?

Things happen; life gets in the way, and suddenly we find ourselves trying to get somewhere with less time than it actually takes to get there.  So, in the intention to get there sooner, to somehow compensate ourselves for not being on time, we rush.  We run; we get clumsy; we drop things; we forget things; we make mistakes; we scribble instead of writing; we scramble and we slip up.

I am today telling you to stop that.  Don't do that.  It's literally the opposite of what you want to do.  This is a bug I have.

Rushing has a tendency to do the opposite of what I want it to do.  I rush with the key in the lock; I rush on slippery surfaces and I fall over, I rush with coins and I drop them.  NO!  BAD!  Stop that.  This is one of my bugs.

What you (or I) really want when we are rushing is to get there sooner, to get things done faster.  

Instrumental experiment: Next time you are rushing I want you to experiment and pay attention; try to figure out what you end up doing that takes longer than it otherwise would if you weren't rushing.

The time after that when you are rushing; instead try slowing down, and this time observe to see if you get there faster.

Run as many experiments as you like.

Experimenter’s note: Maybe you are really good at rushing and really bad at slowing down.  Maybe you don't need to try this.  Maybe slowing down and being nervous about being late together are entirely unhelpful for you.  Report back.

When you are rushing, purposefully slow down. (or at least try it)

Meta: Time to write 20mins

My Table of contents contains other things I have written.

Feedback welcome.

Study partner matching thread

5 AspiringRationalist 25 January 2016 04:25AM

Nate Soares recommends pairing up when studying, so I figured it would be useful to facilitate that.

If you are looking for a study partner, please post a top-level comment saying:


  • What you want to study
  • Your level of relevant background knowledge
  • If you have sources in mind (MOOCs, textbooks, etc), what those are
  • Your time zone


Proposal for increasing instrumental rationality value of the LessWrong community

19 harcisis 28 October 2015 03:18PM

There were some concerns here (http://lesswrong.com/lw/2po/selfimprovement_or_shiny_distraction_why_less/) regarding the value of the LessWrong community from the perspective of instrumental rationality.

In the discussion on the relevant topic, I've seen a story about how a community can help from this perspective: http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/2l73

And I think it's a great thing that a local community can help people in various ways to achieve their goals. Also, it's not the first time I've heard about how this kind of community is helpful as a way of achieving personal goals.

Local LessWrong meetups and communities are great, but they have a somewhat different focus. And a lot of people live in places where there is no local community, or it's not active/regular.

So I propose to form small groups (4-8 people). Initially, groups would meet (using whatever means are convenient for a particular group) and discuss the goals of each participant in the long and short term (life/year/month/etc). They would collectively analyze proposed strategies for achieving these goals, discuss how short-term goals align with long-term goals, and determine whether the particular tactics for achieving a stated goal are optimal. And is there any way to improve on them?

Afterwards, the group would meet weekly to:

Set their short-term goals and retrospect on the goals set for the previous period. Discuss how successfully they were achieved, what problems people encountered, and what alterations to the overall strategy follow. They will also analyze how newly set short-term goals coincide with long-term goals.

In this way, each member of the group would receive helpful feedback on his goals and his approach to attaining them. He will also feel accountable, in a way, for the goals he has stated before the group, and this could be an additional boost to productivity.

I also expect that the group would be helpful for overcoming different kinds of fallacies and gaining more accurate beliefs about the world, because it's easier for people to spot errors in the beliefs/judgment of others. I hope that groups would be able to develop a friendly environment, so it would be easier for people to learn about their errors and change their minds. Truth springs from argument amongst friends.

The group will reflect on its effectiveness and procedures every month(?) and will incrementally improve itself. Obviously, if somebody has a great idea about group proceedings, it makes sense to discuss it after the usual meeting and implement it right away. But I think a regular in-depth retrospective on internal workings is also important.

If there are several groups available, groups will be able to share insights: things a group has learned during its operation. (I'm not sure how many of these insights would be generated, but maybe it would make sense to publish a post once in a while summing up the groups' collective insights.)

There are some things that I'm not sure about: 


  • I think it would be worth discussing the possibility of shuffling group members (or at least exchanging members in some manner) once in a while, to provide fresh insight on the goals/problems people are facing and to make the flow of ideas between groups more agile.
  • How the groups should be initially formed? Just random assignment or it's reasonable to devise some criteria? (Goals alignment/Diversity/Geography/etc?)


I think the initial rules of the group should be developed by the group itself, though I guess it's reasonable to discuss some general recommendations.

So what do you think? 

If you're interested, fill out this google form:



Instrumental Rationality Questions Thread

6 AspiringRationalist 27 September 2015 09:22PM

Previous thread: http://lesswrong.com/lw/mnq/instrumental_rationality_questions_thread/

This thread is for asking the rationalist community for practical advice.  It's inspired by the stupid questions series, but with an explicit focus on instrumental rationality.

Questions ranging from easy ("this is probably trivial for half the people on this site") to hard ("maybe someone here has a good answer, but probably not") are welcome.  However, please stick to problems that you actually face or anticipate facing soon, not hypotheticals.

As with the stupid questions thread, don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better, and please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

(See also the Boring Advice Repository)

Notes on Actually Trying

14 AspiringRationalist 23 September 2015 02:53AM

These ideas came out of a recent discussion on actually trying at Citadel, Boston's Less Wrong house.

What does "Actually Trying" mean?

Actually Trying means applying the combination of effort and optimization power needed to accomplish a difficult but feasible goal. The effort and optimization power are both necessary.

Failure Modes that can Resemble Actually Trying

Pretending to try

Pretending to try means doing things that superficially resemble actually trying but are missing a key piece. You could, for example, make a plan related to your goal and diligently carry it out but never stop to notice that the plan was optimized for convenience or sounding good or gaming a measurement rather than achieving the goal. Alternatively, you could have a truly great plan and put effort into carrying it out until it gets difficult.

Trying to Try

Trying to try is when you throw a lot of time and perhaps mental anguish at a task without actually doing the task. Writer's block is the classic example of this.


Sphexing

Sphexing is the act of carrying out a plan or behavior repeatedly despite it not working.

The Two Modes Model of Actually Trying

Actually Trying requires a combination of optimization power and effort, but each of those is done with a very different way of thinking, so it's helpful to do the two separately. In the first way of thinking, Optimizing Mode, you think hard about the problem you are trying to solve, develop a plan, look carefully at whether it's actually well-suited to solving the problem (as opposed to pretending to try) and perhaps Murphy-jitsu it. In Executing Mode, you carry out the plan.

Executing Mode breaks down when you reach an obstacle that you either don't know how to overcome or where the solution is something you don't want to do. In my personal experience, this is where things tend to get derailed. There are a few ways to respond to this situation:

  • Return to Optimizing Mode to figure out how to overcome the obstacle / improve your plan (good),
  • Ask for help / consult a relevant expert (good),
  • Take a break, which could lead to a eureka moment, lead to Optimizing Mode or lead to derailing (ok),
  • Sphex (bad),
  • Derail / procrastinate (bad), or
  • Punt / give up (ok if the obstacle is insurmountable).

The key is to respond constructively to obstacles. This usually means getting back to Optimizing Mode, either directly or after a break.  The failure modes here are derailing immediately, a "break" that turns into a derailment, and sphexing.  In our discussion, we shared a few techniques we had used to get back to Optimizing Mode.  These techniques tended to focus on some combination of removing the temptation to derail, providing a reminder to optimize, and changing mental state.

Getting Back to Optimizing Mode

Context switches are often helpful here.  Because work and procrastination both tend to be computer-based activities for many people, it is both easy and tempting to switch to a time-wasting activity immediately upon hitting an obstacle.  Stepping away from the computer takes away the immediate distraction and, depending on what you do away from it, helps you either think about the problem or change your mental state.  Depending on what sort of mood I'm in, I sometimes step away from the computer with a pen and paper to write down my thoughts (thinking about the problem), or I may step away to replenish my supply of water and/or caffeine (changing my mental state).  Other people in the discussion said they found going for a walk or getting more strenuous exercise to be helpful when they needed a break.  Strenuous exercise has the additional advantage of a very low risk of turning into a longer-than-intended break.

The danger with breaks is that they can turn into derailment.  Open-ended breaks ("I'll just browse Reddit for five minutes") have a tendency to expand, so it's best to avoid them in favor of things with more definite endings.  The other common way for breaks to turn into derailment is to return from a break and go to something non-productive.  I have had some success with attaching a sticky-note to my monitor reminding me what to do when I return to my computer.  I have also found that a note that makes clear what problem I need to solve makes me less likely to sphex when I return to my computer.

In the week or so since the discussion that inspired this post, I have found that asking myself "what would Actually Trying look like right now?" has helped me stay on track when I have encountered difficult problems at work.

Instrumental Rationality Questions Thread

14 AspiringRationalist 22 August 2015 08:25PM

This thread is for asking the rationalist community for practical advice.  It's inspired by the stupid questions series, but with an explicit focus on instrumental rationality.

Questions ranging from easy ("this is probably trivial for half the people on this site") to hard ("maybe someone here has a good answer, but probably not") are welcome.  However, please stick to problems that you actually face or anticipate facing soon, not hypotheticals.

As with the stupid questions thread, don't be shy: everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better. Please be respectful of other people's admitting ignorance, and don't mock them for it; they're doing a noble thing.

(See also the Boring Advice Repository)

Min/max goal factoring and belief mapping exercise

-1 [deleted] 23 June 2015 05:30AM

Edit 3: Removed description of previous edits and added the following:

This thread used to contain the description of a rationality exercise.

I have removed it and plan to rewrite it better.

I will repost it here, or delete this thread and repost in the discussion.

Thank you.

How to save (a lot of) money on flying

8 T3t 03 February 2015 06:25PM

I was going to wait to post this for reasons, but realized that was pretty dumb when the difference of a few weeks could literally save people hundreds, if not thousands of collective dollars.


If you fly regularly (or at all), you may already know about this method of saving money.  The method is quite simple: instead of buying a round-trip ticket from the airline or reseller, you hunt down much cheaper one-way flights with layovers at your destination and/or your point of origin.  Skiplagged is a service that will do this automatically for you, and has been in the news recently because the creator was sued by United Airlines and Orbitz.  While Skiplagged will allow you to click through to purchase the one-way ticket to your destination, they have broken or disabled the functionality of the redirect to the one-way ticket back (possibly in order to raise more funds for their legal defense).  However, finding the return flight manually is fairly easy, as they provide all the information you need to filter for it on other websites (time, airline, etc.).  I personally have benefited from this - I am flying to Texas from Southern California soon, and instead of a round-trip ticket which would cost me about $450, I spent ~$180 on two one-way tickets (with the return flight being the "layover" at my point of origin).  These are, perhaps, larger than usual savings; I think 20-25% is more common, but even then it's a fairly significant amount of money.


Relevant warnings by gwillen:

You should be EXTREMELY CAREFUL when using this strategy. It is, at a minimum, against airline policy.

If you have any kind of airline status or membership, and you do this too often, they will cancel it. If you try to do this on a round-trip ticket, they will cancel your return. If the airlines have any means of making your life difficult available to them, they WILL use it.

Obviously you also cannot check bags when using this strategy, since they will go to the wrong place (your ostensible, rather than your actual, destination.) This also means that if you have an overhead-sized carryon, and you board late and are forced to check it, your bag will NOT make it to your intended destination; it will go to the final destination marked on your ticket. If you try to argue about this, you run the risk of getting your ticket cancelled altogether, since you're violating airline policies by using a ticket in this way.


Additionally, you should do all of your airline/hotel/etc shopping using whatever private browsing mode your web browser has.  This will often let you purchase the exact same product for a cheaper price.


That is all.

[Link]How to Achieve Impossible Career Goals (My manifesto on instrumental rationality)

6 [deleted] 02 January 2015 08:46PM

Hey guys,

Don't normally post from my blog to here, but the latest massive post on goal achievement in 2015 has a ton that would be relevant to people here.

Some things that I think would be of particular interest to LWers:


  • The section called "Map the Path to Your Goal" has some really great stuff on planning that I haven't seen many other places. I know planning gets a bad rap here, but when combined with the "Contingency Plans" method near the bottom of the post, I've found this stuff to be killer for getting results for students.
  • At the bottom, there's a section called "Choosing More Habits" that breaks down habits into the only five categories you should ever focus on. If you're planning to systematically take on new habits in 2015, this will help.
  • The section called "a proactive mindset" has some fun mental reframes to play around with.
Anyways, would love feedback and thoughts. Feel free to comment here or on the bottom of that post.



Is arguing worth it? If so, when and when not? Also, how do I become less arrogant?

9 27chaos 27 November 2014 09:28PM

I've had several political arguments about That Which Must Not Be Named in the past few days with people of a wide variety of... strong opinions. I'm rather doubtful I've changed anyone's mind about anything, but I've spent a lot of time trying to do so. I also seem to have offended one person I know rather severely. Also, even if I have managed to change someone's mind about something through argument, it feels as though someone will end up having to argue with them later down the line when the next controversy happens.

It's very discouraging to feel this way. It is frustrating when making an argument is taken as a reason for personal attack. And it's annoying to me to feel like I'm being forced into something by the disapproval of others. I'm tempted to just retreat from democratic engagement entirely. But there are disadvantages to this, for example it makes it easier to maintain irrational beliefs if you never talk to people who disagree with you.

I think a big part of the problem is that I have an irrational alief that makes me feel like my opinions are uniquely valuable and important to share with others. I do think I'm smarter, more moderate, and more creative than most. But the feeling's magnitude and influence over my behavior is far greater than what's justified by the facts.

How do I destroy this feeling? Indulging it satisfies some competitive urges of mine and boosts my self-esteem. But I think it's bad overall despite this, because it makes evaluating the social consequences of my choices more difficult. It's like a small addiction, and I have no idea how to get over it.

Does anyone else here have an opinion on any of this? Advice from your own lives, perhaps?

Things to consider when optimizing: Commuting, Transportation

2 [deleted] 10 November 2014 05:44PM

Previous topics:

[This is] a series of discussion posts, where each post is of the form "Let's brainstorm things you might consider when optimizing X", where X is something like sleep, exercise, commuting, studying, etc. Think of it like a specialized repository.

In the spirit of try more things, the direct benefit is to provide insights like "Oh, I never realized that BLAH is a knob I can fiddle. This gives me an idea of how I might change BLAH given my particular circumstances. I will try this and see what happens!"

The indirect benefit is to practice instrumental rationality using the "toy problem" provided by a general prompt.

Accordingly, participation could be in many forms:

* Pointers to scientific research
* General directions to consider
* Personal experience
* Boring advice
* Intersections with other community ideas, biases
* Cost-benefit, value-of-information analysis
* Related questions
* Other musings, thoughts, speculation, links, theories, etc.

This post is on commuting and transportation.


Things to consider when optimizing: Sleep

15 [deleted] 28 October 2014 05:26PM

I'd like to have a series of discussion posts, where each post is of the form "Let's brainstorm things you might consider when optimizing X", where X is something like sleep, exercise, commuting, studying, etc. Think of it like a specialized repository.

In the spirit of try more things, the direct benefit is to provide insights like "Oh, I never realized that BLAH is a knob I can fiddle. This gives me an idea of how I might change BLAH given my particular circumstances. I will try this and see what happens!"

The indirect benefit is to practice instrumental rationality using the "toy problem" provided by a general prompt.

Accordingly, participation could be in many forms:

* Pointers to scientific research
* General directions to consider
* Personal experience
* Boring advice
* Intersections with other community ideas, biases
* Cost-benefit, value-of-information analysis
* Related questions
* Other musings, thoughts, speculation, links, theories, etc.

This post is on sleep and circadian rhythms.

Talking to yourself: A useful thinking tool that seems understudied and underdiscussed

33 chaosmage 09 September 2014 04:56PM

I have returned from a particularly fruitful Google search, with unexpected results.

My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.

This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies, intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.

Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.

So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?

Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.

  • It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
  • Auditory information is retained more easily, so making thoughts auditory helps remember them later.
  • It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
  • System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
  • It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.

All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.

Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.

I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway so that hasn't been an issue really.

So, what do you think? Useful?

What resources have increasing marginal utility?

36 Qiaochu_Yuan 14 June 2014 03:43AM

Most resources you might think to amass have decreasing marginal utility: for example, a marginal extra $1,000 means much more to you if you have $0 than if you have $100,000. That means you can safely apply the 80-20 rule to most resources: you only need to get some of the resource to get most of the benefits of having it.

At the most recent CFAR workshop, Val dedicated a class to arguing that one resource in particular has increasing marginal utility, namely attention. Initially, efforts to free up your attention have little effect: the difference between juggling 10 things and 9 things is pretty small. But once you've freed up most of your attention, the effect is larger: the difference between juggling 2 things and 1 thing is huge. Val also argued that because of this funny property of attention, most people likely undervalue the value of freeing up attention by orders of magnitude.
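The contrast can be made concrete with a toy model (the specific functions below are my own stand-ins, not anything from the CFAR class): money is often modeled with logarithmic utility, where each marginal unit is worth less, while freed-up attention behaves more like 1/n, so dropping one of two juggled tasks is worth far more than dropping one of ten.

```python
import math

def marginal_value_money(dollars, step=1000):
    # Log utility: a standard stand-in for decreasing marginal utility.
    return math.log(dollars + step + 1) - math.log(dollars + 1)

def marginal_value_attention(things_juggled):
    # Toy convex model: free attention is worth ~1/n when juggling n things,
    # so the marginal value of freeing one slot grows as n shrinks.
    return 1 / (things_juggled - 1) - 1 / things_juggled

# Money: a marginal $1,000 means much more at $0 than at $100,000.
print(marginal_value_money(0) > marginal_value_money(100_000))     # True

# Attention: going from 2 tasks to 1 beats going from 10 tasks to 9.
print(marginal_value_attention(2) > marginal_value_attention(10))  # True
```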

During a conversation later in the workshop I suggested another resource that might have increasing marginal utility, namely trust. A society where people abide by contracts 80% of the time is not 80% as good as a society where people abide by contracts 100% of the time; most of the societal value of trust (e.g. decreasing transaction costs) doesn't seem to manifest until people are pretty close to 100% trustworthy. The analogous way to undervalue trust is to argue that e.g. cheating on your spouse is not so bad, because only one person gets hurt. But cheating on spouses in general undermines the trust that spouses should have in each other, and the cumulative impact of even 1% of spouses cheating on the institution of marriage as a whole could be quite negative. (Lots of things about the world make more sense from this perspective: for example, it seems like one of the main practical benefits of religion is that it fosters trust.) 

What other resources have increasing marginal utility? How undervalued are they? 

Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie"

35 [deleted] 21 January 2014 09:13PM

After moving in with my new roomies (Danny and Bethany of Beeminder), I discovered they have a fair and useful way of auctioning off joint decisions. It helps you figure out how much you value certain chores or activities, and it guarantees that these decisions are worked out in a fair way. They call it "yootling", and wrote more about it here.

A quick example (Note: this only works if all participants are of the types of people who consider this sort of thing a Good Idea, and not A Grotesque Parody of Caring or whatnot):


Use Case: Who Picks up the Kids from Grandma's?

D and B are both busy working, but it's time to pick up the kids from their grandparents house. They decide to yootle for it.

B bids $100 (In a regular Normal Person exchange, this would be like saying "I'm elbows deep in code right now, and don't want to break flow. I'd really rather continue working right now, but of course I'll go if it's needed.")

D bids $15 (In a regular Normal Person exchange this would be like saying "I don't mind too much, though I do have other things to do now...")

So D "wins" the bid, and B pays him $15 to go get the kids from their grandma's.

Of course.... it would be a pain in the butt to constantly be paying each other, so instead they have a 10% chance of paying 10x the amount, and a 90% chance to pay nothing, using a random number generator.
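The mechanics fit in a few lines of Python (a toy model: the payment rule follows the kids example above, where the low bidder does the task and is owed their own bid, and the 10%-chance-of-10x trick leaves the expected payment unchanged; the real bot presumably differs in details):

```python
import random

def yootle(bids):
    """Decision auction: the lowest bidder does the task and is owed
    their bid, as in the kids example (D bids $15 and B pays him $15)."""
    doer = min(bids, key=bids.get)
    return doer, bids[doer]

def randomized_payment(amount, p=0.1, multiplier=10):
    # Avoid constant small transfers: pay 10x with 10% probability.
    # Expected payment is unchanged: 0.1 * 10 * amount == amount.
    return amount * multiplier if random.random() < p else 0

doer, owed = yootle({"B": 100, "D": 15})
print(doer, owed)  # D 15 -- so B either pays $150 (10% chance) or nothing
```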


This is made easier by the fact that we have a bot to run this, but before that they would use the high-tech solution of Holding Up Fingers.

We may do this multiple times per day, whenever there’s a good that we have shared ownership of and one of us wants to offload their shares onto the other person. The goods can be anything, e.g. the last brownie, but they’re more often “bads” like who will get up in the middle of the night with a vomiting child, or who will book plane tickets for a trip.

We find this an elegant means of assigning loathed tasks. The person who minded least winds up doing the chore, but gets compensated for it at a price that by their own estimation was fair.

Some other ways it can be implemented:

Joint purchase auction

The decision auction and variants are about allocating shared or partially shared resources to one person or the other, or picking one person to do something. Once in a while you have the opposite problem: deciding on a joint purchase.

Suppose Danny thinks we need a new sofa (this is very hypothetical). I think the one we have is just fine thank you. After some discussion I concede that it would be nice to have a sofa that was less doggy. Danny, being terribly excited about getting a new sofa does a bunch of research and finds his ideal sofa. I think it is a bit overpriced considering it is going to be a piece of gymnastics equipment for the kids for the next 6 years. Conflict ensues! I could bluff that I’m not interested in a new sofa at all and that he can buy it himself if he wants it that badly. But he probably doesn’t want it that bad, and I do want it a little. If only we could buy the sofa conditional on our combined utility for it exceeding the cost, and pay in proportion to our utilities to boot. Well, thanks to separate finances and the magic of mechanism design, we can! We submit sealed bids for the sofa and buy it if the sum of our bids is enough. (And, importantly, commit to not buying it for at least a year otherwise.) Any surplus is redistributed in proportion to our bids. For example, if Danny bid $80 and I bid $40 to buy a hundred dollar sofa, then we’d buy it, with Danny chipping in twice as much as me, namely $67 to my $33.
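The sofa numbers can be checked directly (a minimal sketch of the mechanism as described, not Beeminder's actual implementation):

```python
def joint_purchase(bids, price):
    """Sealed-bid joint purchase: buy iff the bids cover the price,
    and redistribute any surplus in proportion to the bids."""
    total = sum(bids.values())
    if total < price:
        return None  # don't buy (and commit to not revisiting for a year)
    surplus = total - price
    return {name: bid - surplus * bid / total for name, bid in bids.items()}

# Danny bids $80, Bethany bids $40 on a $100 sofa: the $20 surplus is
# rebated 2:1, so they pay about $67 and $33.
shares = joint_purchase({"Danny": 80, "Bethany": 40}, price=100)
print({k: round(v) for k, v in shares.items()})  # {'Danny': 67, 'Bethany': 33}
```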

Generosity without sacrificing social efficiency

If you’re thinking “how mercenary all this is!” then, well, I’m unclear how you made it this far into this post. But it’s not nearly as cold as it may sound. We do nice things for each other all the time, and frequently use yootling to make sure it’s socially efficient to do so. Suppose I invite Danny to a sing-along showing of Once More With Feeling (this may or may not be hypothetical) and Danny doesn’t exactly want to go but can see that I have value for his company. He might (quite non-hypothetically) say “I’ll half-accompany you!” by which he means that he’ll yootle me for whether he goes or not. In other words, he magnanimously decides to treat his joining me as a 50/50 joint decision. If I have greater value for him coming than he has for not coming, then I’ll pay him to come. But if it’s the other way around, he will pay me to let him off the hook. We don’t actually care much about the payments, though those are necessary for the auction to work. We care about making sure that he comes to the Buffy sing-along if and only if my value for his company exceeds his value for staying home. The payments are simply what keep us honest in assessing that. The increased fairness — the winner sharing their utility with the loser — is icing.

Try more things.

45 whales 12 January 2014 01:25AM

(Cross-posted from my personal site.)

Several months ago I began a list of "things to try," which I share at the bottom of this post. It suggests many mundane, trivial-to-medium-cost changes to lifestyle and routine. Now that I've spent some time with most of them and pursued at least as many more personal items in the same spirit, I'll suggest you do something similar. Why?

  • Raise the temperature in your optimization algorithm: avoid the trap of doing too much analysis on too little data and escape local optima.
  • You can think of this as a system for self-improvement; something that operates on a meta level, unlike an object-level goal or technique; something that helps you fail at almost everything but still win big.
  • Variety of experience is an intrinsic pleasure to many, and it may make you feel less that time has flown as you look back on your life.
  • Practice implementing small life changes, practice observing the effects of the changes, practice noticing further opportunities for changes, practice value of information calculations, and reinforce your self-image as an empiricist working to improve your life. Build small skills in the right order and you'll have better chances at bigger wins in the future.
  • Advice often falls prey to the typical-mind (or typical-body) fallacy. That doesn't mean you should dismiss it out of hand. Think about not just how likely it is to work for you, but how beneficial it would be if it worked, how much it would cost to try, and how likely it is that trying it would give you enough information to change your behavior. Then just try it anyway if it's cheap enough, because you forgot to account for uncertainty in your model inputs.
  • Speaking of value of information: don't ignore tweakable variables just because you don't yet have a gwern-tier tracking and evaluation apparatus for the perfect self-experiment. Sometimes you can expect consciously noticeable non-placebo effects from a successful trial. You might do better picking the low hanging fruit to gain momentum before you invest in a Zeo and a statistics textbook.
  • You know what, if there's an effect, it may not even need to be non-placebo. C.f. "Lampshading," as well as the often-observed "honeymoon" period of success with new productivity systems.
  • It's very tempting, especially in certain communities, to focus exclusively on shiny, counterintuitive, "rational," tech-based, hackeresque, or otherwise clever interventions and grand personal development schemes. Some of these are even good, but one suspects that some are optimized for punchiness, not effectiveness. Conversely, mundane ideas may not propagate as well, despite being potentially equally or more likely to succeed.
  • If you were already convinced of all of the above, then great! I hope you have the agency to try stuff like this all the time. If not, you might find it useful, as I did, just to have a list like this available. It's one less trivial inconvenience between thinking "I should try more things" and actually trying something. I've also found that I'm more likely to notice and remember optimization opportunities now that I have a place to capture them. And having spent the time to write them down and occasionally look over them, I'm more likely to notice when I'm in a position to enact something context-dependent on the list.

I removed the terribly personal items from my list, but what remains is still somewhat tailored to my own situation and habits. These are not recommendations; they are just things that struck me as having enough potential value to try for a week or two. The list isn't remotely comprehensive, even as far as mundane self-experiments are concerned, but it's left as an exercise to the reader to find and fill the gaps. Take this list as an example or as a starting point, and brainstorm ideas of your own in the comments. The usual recommendation applies against going overboard in domains where you're currently impulsive or unreflective.

Related posts: Boring Advice Repository, Break your habits: Be more empirical, On saying the obvious, Value of Information: Four Examples, Spend money on ergonomics, Go try things, Don't fear failure, Just try it: Quantity trumps quality, No, seriously, just try it, etc.


Confidence In Opinions, Intensity In Opinion

0 lionhearted 04 September 2013 04:56PM

On a scale of 1 to 100, how sure are you?

It's a good thing to ask yourself from time to time about intense beliefs, especially if you're having a disagreement with someone else smart.

Just putting a number on something is good. If you're in business, putting any number in the high 90's is dangerous and shouldn't happen too often.

Yet, you still have to aggressively and intensely pursue your plans.

You can be only 80% sure you're correct, and still intensely pursue a course of action.

Most people make a mistake: they only go intensely after things they have a very high certainty will work.

But this is backwards. It's absolutely right to say "I'm only 80% sure that going and making a great talk to this group will help develop my business," and to still aggressively pursue giving a great talk.

The same is true with having ridiculously, exceptionally good service. You can say, "I'm only 60% sure that doing this is going to lead to more customer loyalty... this might just be a time sink and cost more than it returns. But let's go all-out on it, and find out."

You don't need to be highly confident to intensely pursue something.

In fact, intensely pursuing not-certain things seems to be how the world develops.

New Monthly Thread: Bragging

30 Joshua_Blaine 11 August 2013 05:50PM

In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not will do. Not are working on. Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

Effective Rationality Training Online

2 Brendon_Wong 10 August 2013 01:58AM

Article Prerequisite: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality


The goal of this post is to explore the idea of rationality training, feedback and ideas are greatly appreciated.

Less Wrong’s stated mission is to help people become more rational, and it has made progress toward that goal. Members read and discuss useful ideas on the internet, get instant feedback because of the voting system, and schedule meetups with other members. Less Wrong also helps attract more people to rationality.

Less Wrong helps with sharing ideas, but it fails to help people put elements of epistemic and instrumental rationality into practice. This is a serious problem, but it would be hard to fix without altering the core functionality of Less Wrong.

Having separate websites for reading and discussing ideas and then actually using those ideas would improve the real world performance of the Less Wrong community while maintaining the idea discussion, “marketing”, and other benefits of the Less Wrong website.

How to create a useful website for self improvement

1. Knowledge Management

When reading blogs, people only see recent posts, and those posts are not significantly revised. A wiki would allow for the creation of a large body of organized knowledge that is frequently revised. Each wiki post would have a description, the benefits of the topic described, resources to learn the topic, user-submitted resources to learn the topic, and reviews of each resource. Posts would be organized hierarchically and voted on for usefulness, helping readers improve efficiently in whatever area they are looking for. Users could share self-improvement plans to help others improve effectiveness in general or in a specific topic as quickly as possible.

2. Effective Learning

Resources to learn topics should be arranged or written for effective skill acquisition, and there may be different resource categories like exercises for deliberate practice or active recall questions for spaced repetition.

3. Quality Contributors

Contributors would, at the very least, need to be familiar with how to write articles that support the skill-acquisition process agreed upon by the entire community. Requiring writing and research skills would produce higher quality work. I am not sure if being a rationalist would improve the quality of articles.


1. Difficult requirements

The number of prerequisites necessary to contribute to and use the wiki would substantially lower the number of people who would be able to benefit from it. It's a trade-off between effectiveness and popularity. What elements should be included to maximize the effectiveness of the website?

2. Interest

There has to be enough interest in the website, or else a different project should be started instead. How many people in the Less Wrong community, and the world at large, would be interested in self improvement and rationality? 

3. Increasing the effectiveness of non altruistic people

How much of the target audience wants to improve the world? If most do not, then the wiki would essentially be a net negative on the world. What should the criteria be to view and contribute to the wiki? Perhaps only Less Wrong members should be able to view and edit the wiki, and contributors must read a quick start guide and pass a quick test before being allowed to post.

Useful Questions Repository

23 Qiaochu_Yuan 25 July 2013 02:58AM

See also: Boring Advice Repository, Solved Problems Repository, Grad Student Advice Repository, Useful Concepts Repository, Bad Concepts Repository

I just got back from the July CFAR workshop, where I was a guest instructor. One useful piece of rationality I started paying more attention to as a result of the workshop is the idea of useful questions to ask in various situations, particularly because I had been introduced to a new one:

"What skill am I actually training?"

This is a question that can be asked whenever you're practicing something, but more generally it can also be asked whenever you're doing something you do frequently, and it can help you notice when you're practicing a skill you weren't intending to train. Some examples of when to use this question:

  • You practice a piece of music so quickly that you consistently make mistakes. What skill are you actually training? How to play with mistakes.
  • You teach students math by putting them in a classroom and having them take notes while a lecturer talks about math. What skill are you actually training? How to take notes. 
  • A personal example: at the workshop, I noticed that I was more apprehensive about the idea of singing in public than I had previously thought I was. After walking outside and actually singing in public for a little, I had a hypothesis about why: for the past several years, I've been singing in public when I don't think anyone is around but stopping when I saw people because I didn't want to bother them. What skill was I actually training by doing that? How to not sing around people. 

Many of the lessons of the sequences can also be packaged as useful questions, like "what do I believe and why do I believe it?" and "what would I expect to see if this were true?" 

I'd like to invite people to post other examples of useful questions in the comments, hopefully together with an explanation of why they're useful and some examples of when to use them. As usual, one useful question per comment for voting purposes.

Instrumental rationality/self help resources

35 gothgirl420666 18 July 2013 02:58AM

I took part in a recent discussion in the current Open Thread about how instrumental rationality is under-emphasized on this website. I've heard other people say similar things, and I am inclined to agree. Someone suggested that there should be a "Instrumental Rationality Books" thread, similar to the "best textbooks on every subject" thread. I thought this sounded like a good idea. 

The title is "resources" because in addition to books, you can post self-help websites, online videos, whatever. 

The decorum for this thread will be as follows:

  • One resource per comment
  • Place your comment in the appropriate category
  • Only post resources you've actually used. Write a short review of your resource and if possible, a short summary of the key points. Say whether or not you would recommend the resource. 
  • Mention approximately how long it's been since you first used the resource and whether or not you have made external improvements in the subject area. On the other hand, keep in mind that there are a myriad of confounding factors that can be present when applying self-help resources to your life, and therefore it is perfectly acceptable to say "I would recommend this resource, but I have not improved" or "I do not recommend this resource, but I have improved". 

I think depending on how this thread goes, in a few days I might make a meta post on this subject in an attempt to inspire discussion on how the LessWrong community can work together to attempt to reach some sort of consensus on what the best instrumental rationality methods and resources might be. lukeprog has already done great work in his The Science of Winning at Life sequence, but his reviews are uber-conservative and only mention resources with lots of scientific and academic backing. I think this leaves out a lot of really good stuff, and I think that we should be able to draw distinctions between stuff that isn't necessarily drawing on science but is reasonable, rational, and helps a lot of people, and The Secret.

But I thought we should get the ball rolling a little before we have that conversation. In the meantime, if you have a meta comment, you can just go ahead and post it as a reply to the top-level post. 

Bad Concepts Repository

20 moridinamael 27 June 2013 03:16AM

We recently established a successful Useful Concepts Repository.  It got me thinking about all the useless or actively harmful concepts I had carried around, in some cases for most of my life, before seeing them for what they were.  Then it occurred to me that I probably still have some poisonous concepts lurking in my mind, and I thought creating this thread might be one way to discover what they are.

I'll start us off with one simple example:  The Bohr model of the atom as it is taught in school is a dangerous thing to keep in your head for too long.  I graduated from high school believing that it was basically a correct physical representation of atoms.  (And I went to a *good* high school.)  Some may say that the Bohr model serves a useful role as a lie-to-children to bridge understanding to the true physics, but if so, why do so many adults still think atoms look like concentric circular orbits of electrons around a nucleus?  

There's one hallmark of truly bad concepts: they actively work against correct induction.  Thinking in terms of the Bohr model actively prevents you from understanding molecular bonding and, really, everything about how an atom can serve as a functional piece of a real thing like a protein or a diamond.

Bad concepts don't have to be scientific.  Religion is held to be a pretty harmful concept around here.  There are certain political theories which might qualify, except I expect that one man's harmful political concept is another man's core value system, so as usual we should probably stay away from politics.  But I welcome input as fuzzy as common folk advice you receive that turned out to be really costly.

[LINK] Mr. Money Mustache on Back of the Napkin Calculations and Financial Planning

-2 Petruchio 24 June 2013 05:14PM

A new Mr. Money Mustache article for those who enjoyed my sequence on financial planning and extreme early retirement.

When the Back of the Napkin can be Worth Millions

Maximizing Financial Utility and Frugality

15 Petruchio 23 May 2013 03:55PM

The past few days have seen an increase of chatter concerning retirement and financial planning. One of us is even putting out a prospectus for a rational financial planning sequence. Some others have derided the concept of saving for retirement, as there is a probability of death before that time.

I am of the Extreme Early Retirement group. The idea is to save and invest 60-90% of your income, so that you will have enough money to retire within a decade rather than the four decades of a normal working career. This requires you to exercise your frugality muscle (such as cutting cable, biking to work, eating out less), but due to hedonic adaptation, you will come out no less happy.
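The arithmetic behind "retire within a decade" can be sketched in a few lines. This is a rough simulation, not financial advice; the 5% real return and 4% withdrawal rate are common rules of thumb I'm assuming, not figures from the post:

```python
def years_to_retire(savings_rate, real_return=0.05, withdrawal_rate=0.04):
    """Years until invested savings cover annual expenses at the given
    safe withdrawal rate. Income is normalized to 1; the default return
    and withdrawal figures are illustrative assumptions."""
    expenses = 1.0 - savings_rate
    target = expenses / withdrawal_rate  # e.g. 25x annual expenses at 4%
    balance, years = 0.0, 0
    while balance < target:
        balance = balance * (1 + real_return) + savings_rate
        years += 1
    return years

print(years_to_retire(0.75))  # 8 years at a 75% savings rate
print(years_to_retire(0.60))  # 13 years at a 60% savings rate
```

The striking feature is that the savings rate matters far more than income: saving 75% of any income level gets you there in under a decade, because a high savings rate both grows the portfolio faster and shrinks the expenses it must cover.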

The sequences have already spoken on how spending money does not make us happier (after our basic needs are met). A Rational Financial plan should take this into account, even if a majority of people would not want to consider it.

I am just a beginner, so I linked the two big names in extreme early retirement, Mr. Money Mustache and Early Retirement Extreme. You can find their journeys towards financial independence here and here.

ERE is an austerity heavyweight, while MMM lives a pretty luxurious lifestyle, but still spends much less than his former coworkers. He just spends on what is important to him, such as travelling with his family and eating organic food, and not on anything frivolous, such as cable or eating out. He lives very far from a deprived lifestyle which the average person would shy away from. It takes a paradigm shift and some grit, but the people of LessWrong are not the type to reject munchkin ideas because it takes a little bit of mental effort.

If I were to make a compilation of posts for a Rational Financial Planning sequence, it would go as follows:

How Little Money You Need to Retire?
Basic Retirement Math
Rationalist Spending 
Maximizing Utilons per Dollar
Utilons Free Of Charge
Investing Rationally Basics

These are just the basics. Investment advice is scarce, and the above does not cover many financial aspects, such as insurance, children, and career choice. The authors do speak about them on their blogs, but I omitted them for brevity. Read and follow these posts, however, and you will be better off than 90% of your peers, and well on the road to Extreme Early Retirement.

[Edit] This idea of cutting your expenses and maximizing your savings obviously does not apply only to early retirement. Other financial goals, such as saving for a house, building up capital for a business, or giving more money to charity, will all be accomplished more quickly if you learn to cut excesses from your life. The driving idea is that the cost of living is very small, that spending money on the extras does not make you any happier, and that you should put this money where it matters to you the most.


Preparing for a Rational Financial Planning Sequence

21 elharo 22 May 2013 11:48AM

What follows is a rough outline for a possible rational financial planning sequence that was inspired by some other recent discussion here. I'm not sure how useful this would be to how many people. I know there are some LessWrongers who would enjoy and learn from this; but I don't know if there are 5, 50, or 500. If you'd like to read it, let me know. If 500 people tell me they can't wait for this, I'll probably write it. If 5 people say maybe they'll glance at it, then probably not.



Part I: Preliminaries:

    Financial Rationality
    Multiplying uncertainties
    The inside and outside views
    Interpolation is reliable; extrapolation isn't

Part II: This is important:

  • Why to save for retirement
  • Dying alone in a hole: the story of Jane.   
  • Why compound interest is cool
  • 65-year old you will not want to live like a grad student
  • 65-year old you will not want to work like 35-year old you
  • Existential risk does not defeat personal risk
  • Existential success does not defeat personal risk

Part III: Analyzing Your Life

    (This section needs a lot more fleshing out, and thought)

    Personal satisfaction and happiness: do what you love, and adjust your financial expectations accordingly
    How much do you need to retire?
    When do you want to retire?
    How much do you need to live on today?
    Big expenses you need to plan for
    Increasing Income
    College: the best financial decision you'll ever make, or the worst?
    Choosing a career: what is your comparative advantage?
    Switching careers
    Career Decisions
        equity vs salary; steady singles or home run hitter
        employee or owner
    Career Tactics
        Salary negotiation
        when to change jobs
    Cutting Expenses
    Save more tomorrow

Part IV: The Practical How-to Advice:

Emergency Cash
Credit cards: the good, the bad, and the criminal
Where to save (tax advantaged accounts)
The importance of fees
401K matching: the highest return you'll ever see
Social Security
What to invest in (index funds)
    stock vs bond funds
    domestic vs. international
    target retirement funds
    Comic books are not a retirement plan (but a comic book store might be)
Avoiding hucksters and doomsayers
Investment Advisors
What if the shit hits the fan?
Can smart, rational investors beat the market?
Good debt; bad debt
Cars and other expensive purchases
Cutting out the middleman: making money on Craigslist, Amazon, eBay, and Airbnb
Buying a house
Renting vs. owning a house; rental parity
Student loans
Health Insurance
Life Insurance
Auto Insurance
Your Spouse: the most important financial decision you'll ever make
    Diamonds are forever, but most women would rather have a house.
    One or two incomes?
    Live longer, be happier, get married



If there are any topics you'd like to see covered that aren't here (wills? lawyers? the financial press?), let me know. Similarly, if you think there's a section that doesn't belong and should be dropped, let me know that too.


One caveat: while some sections are fairly generic, others will be very U.S. centric. The most specific advice will not be applicable to non-U.S. citizens and residents. That does limit the audience, but there's not too much I can do about that. Perhaps if it's successful I can seek out co-authors to do UK, Canadian, or other country editions.

A question for people who are interested in financial planning material: If this were available as a complete book (electronic and paper) today, how likely do you think it is that you would buy this book instead of one of the other available books on the subject? What would you pay for such a book?  If this were available as both a book and a sequence on LessWrong, how might that change your decision?

For now, this discussion thread is just a minimum viable product (MVP) to find out if a sequence is worth the time it would take me to complete. If the MVP pans out, I'll write and post one or two of these chapters to further gauge interest. If the MVP doesn't look promising, I'll drop it and move on to my next book idea.

[LINK] Soylent crowdfunding

7 Qiaochu_Yuan 21 May 2013 07:09PM

Rob Rhinehart's food replacement Soylent now has a crowdfunding campaign.

Soylent frees you from the time and money spent shopping, cooking and cleaning, puts you in excellent health, and vastly reduces your environmental impact by eliminating much of the waste and harm coming from agriculture, livestock, and food-related trash.

If you're interested in one or more of these benefits, send in some money! There is also a new blog post.

View more: Next