LessWrong podcasts
Today we're announcing a partnership with Castify to bring you Less Wrong content in audio form. Castify gets blog content read by professional readers and delivers it to their subscribers as a podcast so that you can listen to Less Wrong on the go. The founders of Castify are big fans of Less Wrong so they're rolling out their beta with some of our content.

To see how many people will use this, we're having the entire Mysterious Answers to Mysterious Questions core sequence read and recorded. We thought listening to it would be a great way for new readers to get caught up and for others to check out the quality of Castify's work. We will be adding more Less Wrong content based on community feedback, so let us know which content you'd like to see more of in the comments.
Thoughts on designing policies for oneself
Note: This was originally written in relation to this rather scary comment of lukeprog's on value drift. I'm now less certain that operant conditioning is a significant cause of value drift (leaning towards near/far type explanations), but I decided to share my thoughts on the topic of policy design anyway.
Several years ago, I had a reddit problem. I'd check reddit instead of working on important stuff. The more I browsed the site, the shorter my attention span got. The shorter my attention span got, the harder it was for me to find things that were enjoyable to read. Instead of being rejuvenating, I found reddit to be addictive, unsatisfying, and frustrating. Every time I thought to myself that I really should stop, there was always just one more thing to click on.
So I installed LeechBlock and blocked reddit at all hours. That worked really well... for a while.
Occasionally I wanted to dig up something I remembered seeing on reddit. (This wasn't always bad--in some cases I was looking up something related to stuff I was working on.) I tried a few different policies for dealing with this. All of them basically amounted to inconveniencing myself in some way or another whenever I wanted to dig something up.
After a few weeks, I no longer felt the urge to check reddit compulsively. And after a few months, I hardly even remembered what it was like to be an addict.
However, my inconvenience barriers were still present, and they were, well, inconvenient. It really was pretty annoying to make an entry in my notebook describing what I was visiting for and start up a different browser just to check something. I figured I could always turn LeechBlock on again if necessary, so I removed my self-imposed barriers. And slid back into addiction.
After a while, I got sick of being addicted again and decided to do something about it (again). Interestingly, I forgot my earlier thought that I could just turn LeechBlock on again easily. Instead, thinking about LeechBlock made me feel hopeless because it seemed like it ultimately hadn't worked. But I did try it again, and the entire cycle then finished repeating itself: I got un-addicted, I removed LeechBlock, I got re-addicted.
This may seem like a surprising lack of self-awareness. All I can say is: Every second my brain gathers tons of sensory data and discards the vast majority of it. Narratives like the one you're reading right now don't get constructed on the fly automatically. Maybe if I had been following orthonormal's advice of keeping and monitoring a record of life changes attempted, I would've thought to try something different.
My Algorithm for Beating Procrastination
Part of the sequence: The Science of Winning at Life
After three months of practice, I now use a single algorithm to beat procrastination most of the times I face it.1 It probably won't work for you quite like it did for me, but it's the best advice on motivation I've got, and it's a major reason I'm known for having the "gets shit done" property. There are reasons to hope that we can eventually break the chain of akrasia; maybe this post is one baby step in the right direction.
How to Beat Procrastination explained our best current general theory of procrastination, called "temporal motivation theory" (TMT). As an exercise in practical advice backed by deep theories, this post explains the process I use to beat procrastination — a process implied by TMT.
As a reminder, here's a rough sketch of how motivation works according to TMT:
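The sketch itself appears to be missing from this version of the post. Presumably it depicted the standard TMT equation (as given by Steel and König), which can be written as:

```latex
\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{1 + \text{Impulsiveness} \times \text{Delay}}
```

Expectancy and value sit in the numerator, so reducing either reduces motivation; impulsiveness and delay sit in the denominator, so increasing either reduces motivation as well.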

Or, as Piers Steel summarizes:
Decrease the certainty or the size of a task's reward — its expectancy or its value — and you are unlikely to pursue its completion with any vigor. Increase the delay for the task's reward and our susceptibility to delay — impulsiveness — and motivation also dips.
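As an illustrative sketch (not from the original post, and using made-up parameter values), Steel's summary can be checked numerically, assuming the standard TMT form in which motivation equals expectancy times value divided by one plus impulsiveness times delay:

```python
def motivation(expectancy, value, impulsiveness, delay):
    """TMT motivation for a task (all inputs non-negative).

    Standard form: (expectancy * value) / (1 + impulsiveness * delay).
    The '1 +' keeps the denominator finite as delay approaches zero.
    """
    return (expectancy * value) / (1 + impulsiveness * delay)

# Baseline task: fairly certain reward, moderate value, some delay.
base = motivation(expectancy=0.8, value=10, impulsiveness=1.0, delay=5)

# Steel's summary, part 1: lower value (or expectancy) -> motivation drops.
low_value = motivation(expectancy=0.8, value=2, impulsiveness=1.0, delay=5)

# Part 2: longer delay (or higher impulsiveness) -> motivation also dips.
long_delay = motivation(expectancy=0.8, value=10, impulsiveness=1.0, delay=50)

# base is the highest of the three; both changes reduce motivation.
print(round(base, 2), round(low_value, 2), round(long_delay, 2))
```

The specific numbers are arbitrary; the point is only the direction of change in each case.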
Of course, my motivation system is more complex than that. P.J. Eby likens TMT (as a guide for beating procrastination) to the "fuel, air, ignition, and compression" plan for starting your car: it might be true, but a more useful theory would include more details and mechanisms.
That's a fair criticism. Just as an fMRI captures the "big picture" of brain function at low resolution, TMT captures the big picture of motivation. This big picture helps us see where we need to work at the gears-and-circuits level, so we can become the goal-directed consequentialists we'd like to be.
So, I'll share my four-step algorithm below, and tackle the gears-and-circuits level in later posts.
Avoid misinterpreting your emotions
A couple of weeks ago, I was suffering from insomnia. Eventually my inability to fall asleep turned into frustration, which then led to feelings of self-doubt about my life in general. Soon I was wondering about whether I would ever amount to anything, whether any of my various projects would ever end up bearing fruit, and so forth. As usual, I quickly became convinced that my life prospects were dim, and that I should stop being ambitious and settle for some boring but safe path while I still had the chance.
Then I realized that there was no reason for me to believe this, and I stopped thinking that way. I still felt frustrated about not being able to sleep, but I didn't feel miserable about my chances in life. To have continued feeling miserable would have been to misinterpret my emotions.
Let me explain what I mean by that. There are two common stereotypes about the role of emotions. The first says that emotions are something irrational, and should be completely disregarded when making decisions. The second says that emotions are basically always right, and one should follow their emotions above all. Psychological research on emotions suggests that the correct answer lies in between: we have emotions for a reason, and we should follow their advice, but not unthinkingly.
The Information Principle says that emotional feelings provide conscious information from unconscious appraisals of situations.1 Your brain is constantly appraising the situation you happen to be in. It notes things like a passerby having slightly threatening body language, or a conversation with some person being easy and free of misunderstandings. There are countless such evaluations going on all the time, and you aren't consciously aware of them because you don't need to be. Your subconscious mind can handle them just fine on its own. The end result of all those evaluations is packaged into a brief summary, which is the only thing that your conscious mind sees directly. That "executive summary" is what you experience as a particular emotional state. The passerby makes you feel slightly nervous and you avoid her, or your conversational partner feels pleasant to talk with and you begin to like him, even though you don't know why.
To some extent, then, your emotions will guide you to act appropriately in various situations, even when you don't know why you feel the way you do. However, it's important to interpret them correctly. Maybe you meet a new person on a good day and feel good when talking with them. Do you feel good because the person is pleasant to be with, or because the weather is pleasant? In general, emotions are only used as a source of information when their informational value is not called into question.2 If you know that you are sad because of something that happened in the morning, and still feel sad when talking to your friend later on, you don't assume that something about your friend is making you feel sad.
People also pay more attention to their feelings when they think them relevant to the question at hand. For example, moods have a larger impact when people are making decisions for themselves rather than for others, who may experience things differently. But by default, people tend to assume that their feelings and emotions are "about" whatever it is they're thinking about at that moment. Unless they're given a reason to presume that their emotions are caused by something other than the issue at hand, they don't.2
Calibrate your self-assessments
When I moved to Ireland, I knew that their school system, and in particular their examinations, would be different from the ones I was used to. I educated myself on them and by the time I took my first exam I thought I was reasonably prepared.
I walked out of my first examination almost certain I had failed. I remember emailing my parents, apologizing to them for my failure and promising I would do better when I repeated the class.
Then I got my results back, and learned I had passed with honors.
This situation repeated itself with depressing regularity over the next few semesters. Took exam, walked out in tears certain I had failed, made angsty complaints and apologies, got results back, celebrated. Eventually I decided that I might as well skip steps two to four and go straight to the celebration.
This was harder than I expected. Just knowing that my feelings of abject failure were usually unfounded did not change those feelings of abject failure. I still walked out of each exam with the same gut certainty of disaster I had always had. What I did learn to do was ignore it: to force myself to walk home with a smile on my face and refuse to let myself dwell on the feelings of failure or take them seriously. And in this I was successful, and now the feelings of abject failure produce only a tiny twinge of stress.
In LW terminology, I am calibrating my self-assessment of examination success.1
Concepts Don't Work That Way
Part of the sequence: Rationality and Philosophy
Philosophy in the Flesh, by George Lakoff and Mark Johnson, opens with a bang:
The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.
These are three major findings of cognitive science. More than two millennia of a priori philosophical speculation about these aspects of reason are over. Because of these discoveries, philosophy can never be the same again.
When taken together and considered in detail, these three findings... are inconsistent with central parts of... analytic philosophy...
This book asks: What would happen if we started with these empirical discoveries about the nature of mind and constructed philosophy anew?
...A serious appreciation of cognitive science requires us to rethink philosophy from the beginning, in a way that would put it more in touch with the reality of how we think.
So what would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning?
What emerges is a philosophy close to the bone. A philosophical perspective based on our empirical understanding of the embodiment of mind is a philosophy in the flesh, a philosophy that takes account of what we most basically are and can be.
Philosophy is a diseased discipline, but good philosophy can (and must) be done. I'd like to explore how one can do good philosophy, in part by taking cognitive science seriously.
The benefits of madness: A positive account of arationality
This post originated in a comment I posted about a strange and unpleasant experience I had when pushing myself too hard mentally. People seemed interested in hearing about it, so I sat down to write. In the process, however, it became something rather different (and a great deal longer) than what I originally intended. The incident referred to in the above comment was a case of manic focus gone wrong; but the truth is, often in my life it's gone incredibly right. I've gotten myself into some pretty strange headspaces, but through discipline and quick thinking I have often been able to turn them to my advantage and put them to good use.
Part 1, then, lays out a sort of cognitive history, focusing on the more extreme states I've been in. Part 2 continues the narrative; this is where I began to learn to ride them out and make them work for me. Part 3 is the incident in question: where I overstepped myself and suffered the consequences.
Some of you, however, may want to skip ahead to part 4 (unless you find my autobiographical writings interesting as a case study). There, I've written a proposal for a series of posts about how to effectively use the full spectrum of somatic and cognitive states to one's advantage. I have vacillated for a long time about this, for reasons that will be discussed below, but I decided that if I was already laying this much on the line, I might as well take it a step further. Read if you will; and if you're interested, please say so.
Extenuating Circumstances
Followup to: Tsuyoku Naritai
"Just remember, there but for a massive genetic difference, environmental factors, and conscious choices, go you or I." -- Justin Corwin
Failures don't have single causes. We choose single causes to focus on, but nothing in the universe emerges from a single parent event. Every assassination ever committed is the fault of every asteroid that wasn't in the right place to hit the assassin.
What good, then, does it do to blame circumstances for your failure? What good does it do? - to look over a huge causal lattice in which your own decisions played a part, and point to something you can't control, and say: "There is where it failed." It might be that a surgical intervention on the past, altering some node outside yourself, would have let you succeed instead of fail. But what good does this counterfactual do you? Will you choose that outside reality be different on your next try?
And yet... when I look at other people, not myself, I find myself taking "extenuating circumstances" into account a great deal. I go to great lengths to "save the world" (as I believe from my epistemic vantage point). When I consider doing less, I consider that this would make me a horrible awful unforgivable person. And then I cheerfully shake hands with others who aren't trying at all to save the world. I seem to want to have my cake and eat it too - to instantiate Goetz's Paradox: "Society tells you to work to make yourself more valuable. Then it tells you that when you reason morally, you must assume that all lives are equally valuable. You can't have it both ways."
Is this an inherent subjective asymmetry - does morality just look different from the outside than inside? If so, is that okay, or is it a sign of self-contradiction? Or is it condescension on my part - that I think less of others and so hold them to lower standards?
Verifying Rationality via RationalPoker.com
Related to: Problem of verifying rationality
We're excited to announce the (soft) launch of RationalPoker.com! It's a new guide developed by me, Zvi, Kevin, and patrissimo detailing how to use online poker as rationality training to conquer your cognitive biases. We want our community to go from knowing a lot about cognitive biases to actually having a training method that allows us to integrate that knowledge into our habits -- truly reducing biases instead of just leaving us perpetually lamenting our flawed brain-ware. In the coming weeks, we'll be making the case that online poker is a useful rationalist pursuit along with developing introductory "How To" material that allows those who join us to play profitably.
We want to make sure we aren't wasting our time practicing an ungrounded art with methods that don't work. Poker gives us an objective way to test x-rationality: once you have a small amount of domain-specific knowledge, the difference between winning and losing comes down to differing levels of rationality. Our site will present the case that a strong rationalist who can act on their knowledge of cognitive biases (a defining feature of x-rationality, but not of traditional rationality) should have a distinct advantage. We'll be offering the connecting material between the sequences and online poker, teaching you to apply knowledge of cognitive biases to poker in a way that verifies your current level of rationality and naturally improves it over time.
Incidentally, this also presents a solution for those of us looking to earn money from anywhere with a flexible schedule that leaves time for outside interests.
Procedural Knowledge Gaps
I am beginning to suspect that it is surprisingly common for intelligent, competent adults to somehow make it through the world for a few decades while missing some ordinary skill, like mailing a physical letter, folding a fitted sheet, depositing a check, or reading a bus schedule. Since these tasks are often presented atomically - or, worse, embedded implicitly into other instructions - and it is often possible to get around the need for them, this ignorance is not self-correcting. One can Google "how to deposit a check" and similar phrases, but the sorts of instructions that crop up are often misleading, rely on entangled and potentially similarly-deficient knowledge to be understandable, or are not so much instructions as they are tips and tricks and warnings for people who already know the basic procedure. Asking other people is more effective because they can respond to requests for clarification (and physically pointing at stuff is useful too), but embarrassing, since lacking these skills as an adult is stigmatized. (They are rarely even considered skills by people who have had them for a while.)
This seems like a bad situation. And - if I am correct and gaps like these are common - then it is something of a collective action problem to handle gap-filling without undue social drama. Supposedly, we're good at collective action problems, us rationalists, right? So I propose a thread for the purpose here, with the stipulation that all replies to gap announcements are to be constructive attempts at conveying the relevant procedural knowledge. No asking "how did you manage to be X years old without knowing that?" - if the gap-haver wishes to volunteer the information, that is fine, but asking is to be considered poor form.
(And yes, I have one. It's this: how in the world do people go about the supposedly atomic action of investing in the stock market? Here I am, sitting at my computer, and suppose I want a share of Apple - there isn't a button that says "Buy Our Stock" on their website. There goes my one idea. Where do I go and what do I do there?)