
Applied Rationality Workshops: Jan 25-28 and March 1-4

20 AnnaSalamon 03 January 2013 01:00AM

The Center for Applied Rationality is running two more four-day workshops: Jan 25-28 and March 1-4 in the SF Bay Area.  Like the previous workshop, these sessions are targeted at ambitious, analytic people who have broad intellectual interests, and who care about making real-world projects work.  Less Wrong veterans and Less Wrong newcomers alike are welcome: as discussed below, we are intentionally bringing together folks with varied backgrounds and skill bases.

Workshop details:

continue reading »

Nov 16-18: Rationality for Entrepreneurs

25 AnnaSalamon 08 November 2012 06:15PM

CFAR is taking LW-style rationality into the world, this month, with a new kind of rationality camp: Rationality for Entrepreneurs.  It is aimed at ambitious, relatively successful folk (regardless of whether they are familiar with LW), who like analytic thinking and care about making practical real-world projects work.  Some will be paying for themselves; others will be covered by their companies.  

If you'd like to learn rationality in a more practical context, consider applying.  Also, if you were hoping to introduce rationality and related ideas to a friend/acquaintance who fits the bill, please talk to them about the workshop, both for their sake and to strengthen the rationality community.

The price will be out of reach for some: the workshop costs $3.9k.  But there is a money-back guarantee.  Some partial scholarships may be available. This fee buys participants:

  • Four nights and three days at a retreat center, with small classes, interactive exercises, and ample opportunity for unstructured conversation applying the material at meals and in the evenings (room and board included);
  • One instructor for every three participants; 
  • Six weeks of Skype/phone and email follow-up, to help participants make the material into regular habits, and navigate real-life business and personal situations with these tools.

CFAR is planning future camps which are more directly targeted at a Less Wrong audience (like our previous camps), so don’t worry if this camp doesn’t seem like the right fit for you (because of cost, interests, etc.).  There will be others.  But if you or someone you know does have an entrepreneurial bent[1], then we strongly recommend applying to this camp rather than waiting.  Attendees will be surrounded by other ambitious, successful, practically-minded folks, learn from materials that have been tailored to entrepreneurial issues, and receive extensive follow-up to help apply what they’ve learned to their businesses and personal lives.

Our schedule is below.

(See also the thread about the camp on Hacker News.)

continue reading »

Checklist of Rationality Habits

117 AnnaSalamon 07 November 2012 09:19PM

As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical and broken down into fine-grained habits.

Below is the checklist of rationality habits we have been using in the minicamps' opening session.  It was co-written by Eliezer, myself, and a number of others at CFAR.  As mentioned below, the goal is not to assess how "rational" you are, but rather to give you a personal shopping list of habits to consider developing.  We generated it by asking ourselves not what rationality content is useful to understand, but what rationality-related actions (or thinking habits) are useful to actually do.

I hope you find it useful; I certainly have.  Comments and suggestions are most welcome; it remains a work in progress.  (It's also available as a PDF.)
continue reading »

To Learn Critical Thinking, Study Critical Thinking

26 gwern 07 July 2012 11:50PM

Critical thinking courses may increase students’ rationality, especially if they do argument mapping.

The following excerpts are from “Does philosophy improve critical thinking skills?”, Ortiz 2007.

1 Excerpts

This thesis makes a first attempt to subject the assumption that studying [Anglo-American analytic] philosophy improves critical thinking skills to rigorous investigation.

…Thus the second task, in Chapter 3, is to articulate and critically examine the standard arguments that are raised in support of the assumption (or rather, would be raised if philosophers were in the habit of providing support for the assumption). These arguments are found to be too weak to establish the truth of the assumption. The failure of the standard arguments leaves open the question of whether the assumption is in fact true. The thesis argues at this point that, since the assumption is making an empirical assertion, it should be investigated using standard empirical techniques as developed in the social sciences. In Chapter 4, I conduct an informal review of the empirical literature. The review finds that evidence from the existing empirical literature is inconclusive. Chapter 5 presents the empirical core of the thesis. I use the technique of meta-analysis to integrate data from a large number of empirical studies. This meta-analysis gives us the best yet fix on the extent to which critical thinking skills improve over a semester of studying philosophy, general university study, and studying critical thinking. The meta-analysis results indicate that students do improve while studying philosophy, and apparently more so than general university students, though we cannot be very confident that this difference is not just the result of random variation. More importantly, studying philosophy is less effective than studying critical thinking, regardless of whether one is being taught in a philosophy department or in some other department. Finally, studying philosophy is much less effective than studying critical thinking using techniques known to be particularly effective such as LAMP.
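The pooling step the excerpt describes can be made concrete. Below is a minimal sketch of fixed-effect meta-analysis by inverse-variance weighting, the standard technique for combining standardized effect sizes across studies; the study numbers are invented for illustration and are not Ortiz's data:

```python
# Minimal sketch of fixed-effect meta-analysis by inverse-variance
# weighting: pool standardized effect sizes (e.g. Cohen's d) across
# studies. The study data below are invented for illustration; they
# are NOT the numbers from Ortiz 2007.
import math

# (effect size d, variance of d) for each hypothetical study
studies = [(0.30, 0.02), (0.45, 0.05), (0.26, 0.01)]

weights = [1.0 / var for _, var in studies]          # w_i = 1 / var_i
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))            # SE of the pooled effect

# Approximate 95% confidence interval for the pooled effect
lo, hi = pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se
print(f"pooled d = {pooled_d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Pooling like this is what lets the thesis compare the average gain from studying philosophy against the average gain from studying critical thinking, and the width of the resulting interval is what motivates the hedge about "random variation."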

continue reading »

Exploring the Idea Space Efficiently

22 Elithrion 08 April 2012 04:28AM

Simon is writing a calculus textbook. Since there are a lot of textbooks on the market, he wants to make his distinctive by including a lot of original examples. To do this, he decides to first check what sorts of examples are in some of the other books, and then make sure to avoid those. Unfortunately, after skimming through several other books, he finds himself completely unable to think of original examples—his mind keeps returning to the examples he's just read instead of coming up with new ones.

What he's experiencing here is another aspect of priming or anchoring. The way it appears to happen in my brain is that it anchors on the examples it's already seen and explores the idea-space from there, moving from an idea only to ideas closely related to it (similar to a depth-first search).

At first, this search strategy might not seem so bad—in fact, it's ideal if there is one best solution and the closer you get to it the better. For example, if you were shooting arrows at a target, all you'd need to consider is how close to the center you can hit. Where we run into problems, however, is in trying to come up with multiple solutions (such as multiple examples of the applications of calculus), or in trying to find the best solution when there are many plausible candidates. In these cases, our brain's default search algorithm will often grab the first idea it can think of and try to refine it, even if what we really need is a completely different idea.
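The contrast can be made concrete with a toy model (a sketch of the search-strategy analogy, not a claim about cognition): greedy refinement of a single anchored idea versus random restarts that sample fresh regions of the space. Every function and number here is illustrative:

```python
# Toy model contrasting two search strategies over a multimodal
# "idea space": greedy refinement of one anchor (the brain's default,
# per the post) vs. random restarts across the whole space.
# The landscape and parameters are made up for illustration.
import math
import random

def quality(x):
    """A made-up 'idea quality' landscape with many local peaks."""
    return math.sin(x) + 0.3 * math.sin(5 * x)

def refine(start, steps=100, step_size=0.05):
    """Greedy hill climbing: only move to nearby, better ideas."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if quality(candidate) > quality(x):
            x = candidate
    return x

random.seed(0)
anchored = refine(0.0)  # anchor on one idea and refine it
restarts = max((refine(random.uniform(0, 10)) for _ in range(10)),
               key=quality)  # explore many distinct starting points

print(f"anchored search: x={anchored:.2f}, quality={quality(anchored):.2f}")
print(f"random restarts: x={restarts:.2f}, quality={quality(restarts):.2f}")
```

On this landscape the anchored climber typically stalls at the first local peak it finds, while the restarts usually land on a much higher one: the programmatic analogue of deliberately jumping to an unrelated region of the idea space before refining.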

continue reading »

Hack Away at the Edges

48 lukeprog 01 December 2011 01:26PM

See also: Challenging the Difficult and Tips and Tricks for Answering Hard Questions.

From Michael Nielsen's Reinventing Discovery:

In January 2009, [mathematician Tim] Gowers decided to use his blog to run a very unusual social experiment. He picked out an important and difficult unsolved mathematical problem, a problem he said he’d “love to solve.” But instead of attacking the problem on his own, or with a few close colleagues, he decided to attack the problem completely in the open, using his blog to post ideas and partial progress. What’s more, he issued an open invitation asking other people to help out. Anyone could follow along and, if they had an idea, explain it in the comments section of the blog. Gowers hoped that many minds would be more powerful than one, that they would stimulate each other with different expertise and perspectives, and collectively make easy work of his hard mathematical problem. He dubbed the experiment the Polymath Project.

The Polymath Project got off to a slow start. Seven hours after Gowers opened up his blog for mathematical discussion, not a single person had commented. Then a mathematician named Jozsef Solymosi from the University of British Columbia posted a comment suggesting a variation on Gowers’s problem, a variation which was easier, but which Solymosi thought might throw light on the original problem. Fifteen minutes later, an Arizona high-school teacher named Jason Dyer chimed in with a thought of his own. And just three minutes after that, UCLA mathematician Terence Tao—like Gowers, a Fields medalist—added a comment. The comments erupted: over the next 37 days, 27 people wrote 800 mathematical comments, containing more than 170,000 words. Reading through the comments you see ideas proposed, refined, and discarded, all with incredible speed. You see top mathematicians making mistakes, going down wrong paths, getting their hands dirty following up the most mundane of details, relentlessly pursuing a solution. And through all the false starts and wrong turns, you see a gradual dawning of insight. Gowers described the Polymath process as being “to normal research as driving is to pushing a car.” Just 37 days after the project began Gowers announced that he was confident the polymaths had solved not just his original problem, but a harder problem that included the original as a special case.

This episode is a microcosm of how intellectual progress happens.

Humanity's intellectual history is not the story of a Few Great Men who had a burst of insight, cried "Eureka!" and jumped 10 paces ahead of everyone else. More often, an intellectual breakthrough is the story of dozens of people building on the ideas of others before them, making wrong turns, proposing and discarding ideas, combining insights from multiple subfields, slamming into brick walls and getting back up again. Very slowly, the space around the solution is crowded in by dozens of investigators until finally one of them hits the payload.

continue reading »

Less Wrong NYC: Case Study of a Successful Rationalist Chapter

138 Cosmos 17 March 2011 08:12PM

It is perhaps the best-kept secret on Less Wrong that the New York City community has been meeting regularly for almost two years. For nearly a year we've been meeting weekly or more.  The rest of this post is going to be a practical guide to the benefits of group rationality, but first I will do something that is still too rare on this blog: make it clear how strongly I feel about this. Before this community took off, I did not believe that life could be this much fun or that I could possibly achieve such a sustained level of happiness.

Being rational in an irrational world is incredibly lonely. Every interaction reveals that our thought processes differ widely from those around us, and I had accepted that such a divide would always exist. For the first time in my life I have dozens of people with whom I can act freely and revel in the joy of rationality without any social concern - hell, it's actively rewarded! Until the NYC Less Wrong community formed, I didn't realize that I was a forager lost without a tribe...

Rationalists are still human, and we still have basic human needs. lukeprog summarizes the literature on subjective well-being, and the only factors which correlate to any degree are genetics, health, work satisfaction, and social life - which actually appears three separate times, as social activity, relationship satisfaction, and religiosity. Rationalists tend to be less socially adept on average, and this can make it difficult to obtain the full rewards of social interaction. However, once rationalists learn to socialize with each other, they also become increasingly social towards everyone more generally. This improves your life. A lot.

We are a group of friends to enjoy life alongside, while we try miracle fruit, dance ecstatically until sunrise, actively embarrass ourselves at karaoke, get lost in the woods, and jump off waterfalls.  Poker, paintball, parties, go-karts, concerts, camping... I have a community where I can live in truth and be accepted as I am, where I can give and receive feedback and get help becoming stronger. I am immensely grateful to have all of these people in my life, and I look forward to every moment I spend with them. To love and be loved is an unparalleled experience in this world, once you actually try it.

So, you ask, how did all of this get started...?

continue reading »

Use curiosity

58 AnnaSalamon 25 February 2011 10:23PM

Related to: Rationalization, Meditation on curiosity, Original Seeing.

Why aren’t you learning faster?

For me, one answer is: because I’m not asking questions.  I blunder through conversations trying to “do my job”, or to look good, or elaborating my own theories, or allowing cached replies to come out of my mouth on autopilot.  I blunder through readings, scanning my eyes over the words and letting thoughts strike me as they may.  Rarely am I pulled by a specific desire to know.

And most of my learning happens at those rare times.

How about you?  When you read, how often do you chase something?  When you chat with your friends -- are you curious about how they’re doing, why their mouth twitched as they said that, or why exactly they disagree with you about X?  When you sit down to write, or to do research -- are you asking yourself specific questions, and then answering them?

Are there certain situations in which you get most of your useful ideas -- situations you could put yourself in more often?

Lately, when I notice that I’m not curious about anything, I’ve been trying to interrupt whatever I’m doing.  If I’m in a conversation, and neither I nor my interlocutor is trying to figure something out, I call a mini “halt, melt, and catch fire” (inside my head, at least), and ask myself what I want.  Surely not stale conversations.  If I’m writing, and I don’t like the sentence I just wrote -- instead of reshuffling the words in the hopes that the new version will just happen to be better, I ask myself what I don’t like about it.  

Thus, for the past six months, several times a day, I've interrupted my thoughts and put them back on an “ask questions” track.  (“Grrr, he said my argument was dishonest... Wait, is he right?  What should it look like if he is?”; “I notice I feel hopeless about this paper writing.  Maybe there’s something I should do differently?”)  It's helping.  I'm building the habit of interrupting myself when I'm "thinking" without trying to find something out, or taking actions that I expect won't accomplish anything.  As a human,  I’m probably stuck running on habits -- but I can at least change *which* habits I run on.

When are you in the habit of asking questions?  Would you learn more if you habitually asked other questions, too?  Which ones?

Fun and Games with Cognitive Biases

62 Cosmos 18 February 2011 08:38PM

You may have heard about IARPA's Sirius Program, which is a proposal to develop serious games that would teach intelligence analysts to recognize and correct their cognitive biases.  The intelligence community has a long history of interest in debiasing, and even produced a rationality handbook based on internal CIA publications from the '70s and '80s.  Creating games that systematically improve our thinking skills has enormous potential, and I would highly encourage the LW community to consider this as a potential way forward to encourage rationality more broadly.

While developing these particular games will require thought and programming, the proposal did inspire the NYC LW community to play a game of our own.  Using a list of cognitive biases, we broke up into groups no larger than four, and spent five minutes discussing each bias with regard to three questions:

  1. How do we recognize it?
  2. How do we correct it?
  3. How do we use its existence to help us win?

The Sirius Program specifically targets Confirmation Bias, Fundamental Attribution Error, Bias Blind Spot, Anchoring Bias, Representativeness Bias, and Projection Bias.  To this list, I also decided to add the Planning Fallacy, the Availability Heuristic, Hindsight Bias, the Halo Effect, Confabulation, and the Overconfidence Effect.  We did this Pomodoro-style: six rounds of five minutes, a quick break, another six rounds, and then a longer break followed by a group discussion of the exercise.
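If you want to run the exercise yourself, the round structure is easy to script. Here is a throwaway timer sketch: the bias list and five-minute rounds come from the post, while the break length and everything else in the code are illustrative assumptions:

```python
# Throwaway timer for the bias-discussion exercise described above:
# six five-minute rounds, a quick break, six more rounds. The bias
# list and round length are from the post; the break length is an
# assumed value.
import time

biases = [
    "Confirmation Bias", "Fundamental Attribution Error", "Bias Blind Spot",
    "Anchoring Bias", "Representativeness Bias", "Projection Bias",
    "Planning Fallacy", "Availability Heuristic", "Hindsight Bias",
    "Halo Effect", "Confabulation", "Overconfidence Effect",
]

ROUND_MINUTES = 5
BREAK_MINUTES = 2  # assumed length for the "quick break"

for i, bias in enumerate(biases, start=1):
    print(f"Round {i}/{len(biases)}: discuss {bias} for {ROUND_MINUTES} minutes")
    time.sleep(ROUND_MINUTES * 60)
    if i == 6:
        print(f"Break for {BREAK_MINUTES} minutes")
        time.sleep(BREAK_MINUTES * 60)
print("Done -- move to the group discussion.")
```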

Results of this exercise are posted below the fold.  I encourage you to try the exercise for yourself before looking at our answers.

continue reading »

Cheat codes

37 sketerpot 01 December 2010 09:19PM

Most things worth doing take serious, sustained effort. If you want to become an expert violinist, you're going to have to spend a lot of time practicing. If you want to write a good book, there really is no quick-and-dirty way to do it. But sustained effort is hard, and can be difficult to get rolling. Maybe there are some easier gains to be had with simple, local optimizations. Contrary to oft-repeated cached wisdom, not everything worth doing is hard. Some little things you can do are like cheat codes for the real world.

Take habits, for example: your habits are not fixed. My diet got dramatically better once I figured out how to change my own habits, and actually applied that knowledge. The general trick was to figure out a new, stable state to change my habits to, then use willpower for a week or two until I settled into that stable state. In the case of diet, a stable state was one where junk food was replaced with fruit, tea, or a slightly more substantial meal beforehand so I wouldn't feel hungry for snacks. That's an equilibrium I can live with, long-term, without needing to worry about "falling off the wagon." Once I figured out the pattern -- work out a stable state, and force myself into it over 1-2 weeks -- I was able to improve several habits, permanently. It was amazing. Why didn't anybody tell me about this?

In education, there are similar easy wins. If you're trying to commit a lot of things to memory, there's solid evidence that spaced repetition works. If you're trying to learn from a difficult textbook, reading in multiple overlapping passes is often more time-efficient than reading through linearly. And I've personally witnessed several people academically un-cripple themselves by learning to reflexively look everything up on Wikipedia. None of this stuff is particularly hard. The problem is just that a lot of people don't know about it.
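The core mechanism behind spaced repetition is simple enough to sketch: the interval before the next review grows each time you successfully recall an item and resets when you forget it. Here is a minimal, hypothetical scheduler in that spirit -- loosely inspired by SM-2-style algorithms used by tools like Anki, but not any real tool's implementation:

```python
# Minimal sketch of a spaced-repetition scheduler: review intervals
# grow on successful recall and reset on failure. A simplified
# illustration, not the actual Anki/SuperMemo algorithm.
from dataclasses import dataclass

@dataclass
class Card:
    front: str
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # growth factor per successful recall

def review(card: Card, recalled: bool) -> Card:
    """Update a card's schedule after one review."""
    if recalled:
        card.interval_days *= card.ease        # space reviews further apart
    else:
        card.interval_days = 1.0               # forgotten: start over
        card.ease = max(1.3, card.ease - 0.2)  # and grow more cautiously
    return card

card = Card("Bayes' theorem")
for outcome in [True, True, False, True]:
    card = review(card, outcome)
    print(f"recalled={outcome}: next review in {card.interval_days:.1f} days")
```

The roughly exponential growth of intervals is the point: it concentrates review effort on items near the edge of forgetting, which is where the memory literature suggests practice pays off most.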

What other easy things have a high marginal return-on-effort? Feel free to include speculative ones, if they're testable.
