A large element of instrumental rationality consists of filtering, prioritizing, and focusing. It's true for tasks, for emails, for blogs, and for the multitude of other inputs that many of us are drowning in these days. Doing everything, reading everything, commenting on everything is simply not an option - it would take infinite time. We could simply limit time and do what happens to catch our attention in that limited time, but that's clearly not optimal. Spending some time prioritizing rather than executing will always improve results if items can be prioritized and vary widely in benefit. So maximizing the results we get from our finite time requires, for a variety of domains:
- Filtering: a quick first-pass to get input down to a manageable size for the higher-cost effort of prioritizing.
- Prioritizing: briefly evaluating the impact each item will have towards your goals.
- Focusing: on the highest-priority items.
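The three steps above can be sketched in code. This is a minimal illustration, not a real tool; the book titles and "benefit" scores are purely hypothetical stand-ins for whatever cheap estimate you'd use in practice:

```python
# A sketch of the filter -> prioritize -> focus pipeline.

def filter_candidates(items, min_benefit):
    """Quick first pass: cheaply discard items below a threshold,
    so the costlier prioritizing step sees a manageable set."""
    return [item for item in items if item["benefit"] >= min_benefit]

def prioritize(items):
    """Costlier step: rank the survivors by estimated benefit."""
    return sorted(items, key=lambda item: item["benefit"], reverse=True)

def focus(items, capacity):
    """Commit only to the top items that fit your time budget."""
    return items[:capacity]

# Hypothetical candidates with made-up benefit estimates.
books = [
    {"title": "A", "benefit": 9},
    {"title": "B", "benefit": 2},
    {"title": "C", "benefit": 7},
    {"title": "D", "benefit": 5},
]

shortlist = focus(prioritize(filter_candidates(books, min_benefit=4)), capacity=2)
print([b["title"] for b in shortlist])  # → ['A', 'C']
```

The point of the structure is that the cheap filter runs over everything, while the expensive evaluation runs only over what survives it.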
I have some thoughts, and am looking for more advice on how to do this for non-fiction reading. I've stopped buying books that catch my attention, because I have a backlog of about 3-4 shelves of books that have sat unread for years. Instead, I put them on my Amazon Wishlists, which as a result have swelled to a total of 254 books - obviously unmanageable, and growing much faster than I read.
One obvious question to ask when optimizing is: what is the goal of reading? Let me suggest a few possibilities:
- Improve performance at a current job/role. For example, as Executive Director of a nonprofit, I could read books on fundraising or management.
- Relatedly, work towards a current goal. Here is where it helps to have identified your goals, perhaps in an Annual Review. As a parent, for example, there are an infinitude of parenting books that I could read, but I chose for this year to work specifically on positive psychology parenting, as it seemed like a potentially high-impact skill to learn. This massively filters the set of possible parenting books. Essentially, goal-setting ("learn positive psychology parenting habits") was a conscious prioritization step based on considering what new parenting skills would best advance my goals (in this case, to benefit my kids while making parenting more pleasant along the way).
- Improve core skills or attributes relevant to many areas of life - productivity, happiness, social skills, diet, etc.
- Expand your worldview (improve your map). Myopically focusing only on immediate needs would eliminate some of the greatest benefit I feel I've gotten from non-fiction in my life, which is a richer and more accurate understanding of the world.
- Be able to converse intelligently on currently popular books. (Much as one might watch the news in order to facilitate social bonding by being able to discuss current events). Note that I don't actually recommend this as a goal - I think you can find other things to bond over, plus you will sometimes read currently popular books because they serve other goals - but it may be important for some people.
The following should sound familiar:
A thoughtful and observant young protagonist dedicates their life to fighting a great world-threatening evil unrecognized by almost all of their short-sighted elders (except perhaps for one encouraging mentor), gathering a rag-tag band of colorful misfits along the way and forging them into a team by accepting their idiosyncrasies and making the most of their unique abilities, winning over previously neutral allies, ignoring those who just don't get it, obtaining or creating artifacts of great power, growing and changing along the way to become more powerful, fulfilling the potential seen by their mentors/supporters/early adopters, while becoming more human (greater empathy, connection, humility) as they collect resources to prepare for their climactic battle against the inhuman enemy.
Hmm, sounds a bit like SIAI! (And while I'm throwing stones, let me make it clear that I live in a glass house, since the same story could just as easily be adapted to TSI, my organization, as well as to many others.)
This story is related to Robin's Abstract/Distant Future Bias:
Regarding distant futures, however, we’ll be too confident, focus too much on unlikely global events, rely too much on trends, theories, and loose abstractions, while neglecting details and variation. We’ll assume the main events take place far away (e.g., space), and uniformly across large regions. We’ll focus on untrustworthy consistently-behaving globally-organized social-others. And we’ll neglect feasibility, taking chances to achieve core grand symbolic values, rather than ordinary muddled values.
More bluntly, we seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united them determined to oppose our core symbolic values, making infeasible overly-risky overconfident plans to oppose them. We seem primed to neglect the value and prospect of trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures, mostly getting along peacefully in vastly-varied uncoordinated and hard-to-predict local cultures and life-styles.
Living a story is potentially risky, for example Tyler Cowen warns us to be cautious of stories as there are far fewer stories than there are real scenarios, and so stories must oversimplify. Our view of the future may be colored by a "fiction bias", which leads us to expect outcomes like those we see in movies (climactic battles, generally interesting events following a single plotline). Thus stories threaten both epistemic rationality (we assume the real world is more like stories than it is) and instrumental rationality (we assume the best actions to effect real-world change are those which story heroes take).
Yet we'll tend to live stories anyway because it is fun - it inspires supporters, allies, and protagonists. The marketing for "we are an alliance to fight a great unrecognized evil" can be quite emotionally evocative. Including in our own self-narrative, which means we'll be tempted to buy into a story whether or not it is correct. So while living a fun story is a utility benefit, it also means that story causes are likely to be over-represented among all causes, as they are memetically attractive. This is especially true for the story that there is risk of great, world-threatening evil, since those who believe it are inclined to shout it from the rooftops, while those who don't believe it get on with their lives. (There are, of course, biases in the other direction as well).
Which is not to say that all aspects of the story are wrong - advancing an original idea to greater prominence (scaling) will naturally lead to some of these tropes - most people disbelieving, a few allies, winning more people over time, eventual recognition as a visionary. And Michael Vassar suggests that some of the tropes arise as a result of "trying to rise in station beyond the level that their society channels them towards". For these aspects, the tropes may contain evolved wisdom about how our ancestors negotiated similar situations.
And whether or not a potential protagonist believes in this wisdom, the fact that others do will surely affect marketing decisions. If Harry wishes to not be seen as Dark, he must care what others see as the signs of a Dark Wizard, whether or not he agrees with them. If potential collaborators have internalized these stories, skillful protagonists will invoke them in recruiting, converting, and team-building. Yet the space of story actions is constrained, and the best strategy may sometimes lie far outside them.
Since this is not a story, we are left with no simple answer. Many aspects of stories are false but resonate with us, and we must guard against them lest they contaminate our rationality. Others contain wisdom about how those like us have navigated similar situations in the past - we must decide whether the similarities are true or superficial. The most universal stories are likely to be the most effective in manipulating others, which any protagonist must do to amplify their own efforts in fighting for their cause. Some of these universal stories are true and generally applicable, like scaling techniques, yet the set of common tropes seems far too detailed to reflect universal truths rather than arbitrary biases of humanity and our evolutionary history.
May you live happily ever after (vanquishing your inhuman enemy with your team of true friends, bonded through a cause despite superficial dissimilarities).
Less Wrong is explicitly intended to help people become more rational. Eliezer has posted that rationality means epistemic rationality (having & updating a correct model of the world), and instrumental rationality (the art of achieving your goals effectively). Both are fundamentally tied to the real world and our performance in it - they are about ability in practice, not theoretical knowledge (except inasmuch as that knowledge helps ability in practice). Unfortunately, I think Less Wrong is a failure at instilling abilities-in-practice, and designed in a way that detracts from people's real-world performance.
It will take some time, and it may be unpleasant to hear, but I'm going to try to explain what LW is, why that's bad, and sketch what a tool to actually help people become more rational would look like.
(This post was motivated by Anna Salamon's Humans are not automatically strategic and the response; more detailed background in the footnote.)
Update / Clarification in response to some comments: This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka "efficient productivity"), and b) some (perhaps many) readers read it towards that goal. It is this that I think is self-deception. I do not dispute that LW can be used in a positive way (read during fun time instead of the NYT or funny pictures on Digg), or that it has positive effects (exposing people to important ideas they might not see elsewhere). I merely dispute that reading fun things on the internet can help people become more instrumentally rational. Additionally, I think instrumental rationality is really important and could be a huge benefit to people's lives (in fact, is by definition!), and so a community value that "deliberate practice towards self-improvement" is more valuable and more important than "reading entertaining ideas on the internet" would be of immense value to LW as a community - while greatly decreasing the importance of LW as a website.
Why Less Wrong is not an effective route to increasing rationality.
Work: time spent acting in an instrumentally rational manner, ie forcing your attention towards the tasks you have consciously determined will be the most effective at achieving your consciously chosen goals, rather than allowing your mind to drift to what is shiny and fun.
By definition, Work is what (instrumental) rationalists wish to do more of. A corollary is that Work is also what is required in order to increase one's capacity to Work. This must be true by the definition of instrumental rationality - if it's the most efficient way to achieve one's goals, and if one's goal is to increase one's instrumental rationality, doing so is most efficiently done by being instrumentally rational about it. 
That was almost circular, so to add meat, you'll notice in the definition an embedded assumption that the "hard" part of Work is directing attention - forcing yourself to do what you know you ought to instead of what is fun & easy. (And to a lesser degree, determining your goals and the most effective tasks to achieve them). This assumption may not hold true for everyone, but with the amount of discussion of "Akrasia" on LW, the general drift of writing by smart people about productivity (Paul Graham: Addiction, Distraction, Merlin Mann: Time & Attention), and the common themes in the numerous productivity/self-help books I've read, I think it's fair to say that identifying the goals and tasks that matter and getting yourself to do them is what most humans fundamentally struggle with when it comes to instrumental rationality.
Figuring out goals is fairly personal, often subjective, and can be difficult. I definitely think the deep philosophical elements of Less Wrong and its contributions to epistemic rationality are useful to this, but (like psychedelics) the benefit comes from small occasional doses of the good stuff. Goals should be re-examined regularly but infrequently (roughly yearly, and at major life forks). An annual retreat with a mix of close friends and distant-but-respected acquaintances (Burning Man, perhaps) will do the trick - reading a regularly updated blog is way overkill.
And figuring out tasks, once you turn your attention to it, is pretty easy. Once you have explicit goals, just consciously and continuously examining whether your actions have been effective at achieving those goals will get you way above the average smart human at correctly choosing the most effective tasks. The big deal here, for many (most?) of us, is the conscious direction of our attention.
What is the enemy of consciously directed attention? It is shiny distraction. And what is Less Wrong? It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people. As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging". Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity. It's the opposite of this classic piece of advice.
Here's a little Sunday irreverence. Someone else has probably written this story before, and I'm sure the points have been made many times, but it popped into my head when I woke up and I thought it might be fun to write it out.
Last week I was walkin' along mindin' my own business when I met a Christian Minister, who asked me if I'd accepted Jesus as my Lord and Personal Saviour. "Why I sure think so", I responded, "But...what was that name again?". "Why, Jesus!" he answered, and began to launch into an account of this man's fascinatin' historical doin's, when I interrupted him.
"Funny you should mention it", I replied. "I do accept as my Lord and Personal Saviour a man who was born of the blessed Virgin Mary in Bethlehem long ago, and was the Son of God, but we call him Schmesus."
The poor man choked and started turnin' a little red, and warned me in menacing tones that unless I accepted his JESUS, I would burn forever in the fire and brimstone of Hell. "For sure!", said I, "We Schmistians know ALL about Hell. After all, we use your same holy text, only we call it the Schmible. It's got all the same books of Genesis an' Paul an' all that, with all the same verses. There's just one key difference which makes us Schmistians prefer our religion to yours."
"What's that?", he spluttered.
This idea has been mentioned in several comments, but it deserves a top-level post. From an ancient, ancient web article (1995!), Stanford philosophy professor John Perry writes:
I have been intending to write this essay for months. Why am I finally doing it? Because I finally found some uncommitted time? Wrong. I have papers to grade, textbook orders to fill out, an NSF proposal to referee, dissertation drafts to read. I am working on this essay as a way of not doing all of those things. This is the essence of what I call structured procrastination, an amazing strategy I have discovered that converts procrastinators into effective human beings, respected and admired for all that they can accomplish and the good use they make of time. All procrastinators put off things they have to do. Structured procrastination is the art of making this bad trait work for you. The key idea is that procrastinating does not mean doing absolutely nothing. Procrastinators seldom do absolutely nothing; they do marginally useful things, like gardening or sharpening pencils or making a diagram of how they will reorganize their files when they get around to it. Why does the procrastinator do these things? Because they are a way of not doing something more important. If all the procrastinator had left to do was to sharpen some pencils, no force on earth could get him to do it. However, the procrastinator can be motivated to do difficult, timely and important tasks, as long as these tasks are a way of not doing something more important.
The insightful observation that procrastinators fill their time with effort, not staring at the walls, gives rise to this form of akrasia aikido, where the urge to not do something is cleverly redirected into productivity. If you can "waste time" by doing useful things, while feeling like you are avoiding doing the "real work", then you avoid depleting your limited supply of willpower (which happens when you force yourself to do something).
In other words, structured procrastination (SP) is an efficient use of this limited resource, because doing A in order to avoid doing B is easier than making yourself do A. If A is something you want to get done, then the less willpower you need to use to do it, the more you will be able to accomplish. This only works if A is something that you do want to get done - that's how SP differs from normal procrastination, of course.
The traditional wisdom says that publicly committing to a goal is a useful technique for accomplishment. It creates pressure to fulfill one's claims, lest one lose status. However, when the goal is related to one's identity, a recent study shows that public commitment may actually be counterproductive. Nyuanshin posts:
"Identity-related behavioral intentions that had been noticed by other people were translated into action less intensively than those that had been ignored. . . . when other people take notice of an individual's identity-related behavioral intention, this gives the individual a premature sense of possessing the aspired-to identity."
This empirical finding flies in the face of conventional wisdom about the motivational effects of public goal-setting, but rings true to my experience. Belief is, apparently, fungible -- when you know that people think of you as an x-doer, you affirm that self-image more confidently than you would if you had only your own estimation to go on. colinmarshall and I have already become aware of the dangers of vanity to any non-trivial endeavor, but it's nice to have some empirical corroboration. Keep your head down, your goals relatively private, and don't pat yourself on the back until you've got the job done.
This matches my experience over the first year of The Seasteading Institute. We've received tons of press, and I've probably spent as much time at this point interacting with the media as working on engineering. And the press is definitely useful - it helps us reach and get credibility with major donors, and it helps us grow our community of interested seasteaders (it takes a lot of people to found a country, and it takes a mega-lot of somewhat interested people to have a committed subset who will actually go do it).
Yet I've always been vaguely uncomfortable about how much media attention we've gotten, even though we've just started progressing towards our long-term goals. It feels like an unearned reward. But is that bad? I keep wondering "Why should that bother me? Isn't it a good thing to be given extra help in accomplishing this huge and difficult goal? Aren't unearned rewards the best kind of rewards?" This study suggests the answer.
I'm about half-way through this fascinating book, conveniently available for free online, which is at the intersection of psychiatry and evolutionary psychology. I don't have the time to do it justice, so I'm going to post a few choice excerpts here in the hope that those who are more prolific and insightful than I am will add further analysis.
Just to make sure it's clear how this all ties in to bias, I'll start with a bias-relevant section. The book ties delusional behavior to the theory that consciousness exists primarily for social intelligence purposes, and thus that malfunctions in our reading of social facts, such as human intention, are what cause delusions:
But some people with delusions are entirely ‘normal’ except for the false belief, and the belief itself is neither impossible nor outlandish. Any other unusual behaviors can be traced back to that false belief. For instance, a man may have the fixed, false and dominating belief that his wife is having an affair with a neighbour. This belief may be so dominating as to lead to a large program of surveillance - spying on his wife, searching her handbag, examining her clothes etc. Yet the same man may show no evidence of irrationality in other areas of his life, being able to function normally at work and socializing easily with acquaintances, so that only close friends and family are aware of the existence of the delusion. In such instances the delusion is said to be ‘encapsulated’, ie. sealed-off from other aspects of mental life, and these people are said to have a delusional disorder.
Delusions are typically stated to have three major defining characteristics. Firstly that a delusional belief is false, secondly that this false belief is behaviorally dominant, and thirdly that the false belief is resistant to counter-argument. All these characteristics are shown by delusional disorders, yet they occur in a context of generally non-pathological cognitive functioning.
Humans are extremely prone to ‘false’ beliefs, or at least beliefs that strike many or most other people as false. Some of these false beliefs are strongly held and dominate behavior. It is trivially obvious that humans are imperfect logicians operating for most of the time on incomplete information, so mistakes are inevitable. But it is striking that although everyone would acknowledge the imperfections of human reasoning, many of these false beliefs are not susceptible to argument. For example, deeply cherished religious and political beliefs are nonetheless based on little or no hard evidence, vary widely, yet may dominate a person’s life, and are sometimes held with unshakeable intensity. And religious and political beliefs may strike the vast majority of other people as obviously false.
On at least two occasions - one only a year past - my life was at serious risk because I was not thinking clearly. Both times, I was lucky (and once, the car even survived!). As a gambler I don't like counting on luck, and I'd much rather be rational enough to avoid serious mistakes. So when I checked the top-ranked posts here and saw Robin's Rational Me or We? arguing against rationality as a martial art I was dumbfounded. To me, individual rationality is a matter of life and death.
In poker, much attention is given to the sexy art of reading your opponent, but the true veteran knows that far more important is the art of reading and controlling yourself. It is very rare that a situation comes up where a "tell" matters, and each of my opponents is only in an occasional hand. I and my irrationalities, however, are in every decision in every hand. This is why self-knowledge and self-discipline are first-order concerns in poker, while opponent reading is second or perhaps even third.
And this is why Robin's post is so wrong. Our minds and their irrationalities are part of every second of our lives, every moment we experience, and every decision that we make. And contra Robin's security metaphor, few of our decisions can be outsourced. My two bad decisions regarding motor vehicles, for example, could not have easily been outsourced to a group rationality mechanism. Only a tiny percentage of the choices I make every day can be punted to experts.