
Make your training useful

94 AnnaSalamon 12 February 2011 02:14AM

As Tom slips on the ice puddle, his arm automatically pulls back to slap the ground.  He’s been taking Jiu-Jitsu for only a month, but, already, he’s practiced falling hundreds of times.  Tom’s training keeps him from getting hurt.

By contrast, Sandra is in her second year of university mathematics.  She got an “A” in calculus and in several more advanced courses, and she can easily recite that “derivatives” are “rates of change”.  But when she goes on her afternoon walk and stares at the local businesses, she doesn’t see derivatives.

For many of us, rationality is more like Sandra’s calculus than Tom’s martial arts.  You may think “overconfidence” when you hear an explicit probability (“It’s 99% likely I’ll make it to Boston on Tuesday”).  But when no probability is mentioned -- or, worse, when you act on a belief without noticing that belief at all -- your training has little impact.

Learn error patterns ahead of time

If you want to notice errors while you're making them, think ahead of time about what your errors might look like. List the circumstances in which to watch out for them, and the alternative action to try in each.

Here's an example of what your lists might look like.  A bunch of visiting fellows generated this list at one of our rationality trainings last summer; I’m including their list here (with some edits) because I found the specific suggestions useful, and because you may be able to use it as a model for your own lists.


Is Scott Alexander bad at math?

30 JonahSinick 04 May 2015 05:11AM

This post is the third installment in the sequence that I started with The Truth About Mathematical Ability and Innate Mathematical Ability. Here I begin to discuss the role of aesthetics in math.

There was strong interest in the first two posts in my sequence, and I apologize for the long delay. The reason for it is that I've accumulated hundreds of pages of relevant material in draft form, and have struggled with how to organize such a large body of material. I still don't know what's best, but since people have been asking, I decided to continue posting on the subject, even if I don't have my thoughts as organized as I'd like. I'd greatly welcome and appreciate any comments, but I won't have time to respond to them individually, because I already have my hands full with putting my hundreds of pages of writing in public form.


Autism, or early isolation?

17 JonahSinick 17 June 2015 08:52AM

I've often heard LWers describe themselves as having autism, or Asperger's Syndrome (which is no longer considered a valid construct, and was removed from the Diagnostic and Statistical Manual of Mental Disorders two years ago). This is given as an explanation for various forms of social dysfunction. The suggestion is that such people have a genetic disorder.

I've come to think that the issues are seldom genetic in origin. There's a simpler explanation. LWers are often intellectually gifted. This is conducive to early isolation. In The Outsiders Grady Towers writes:

The single greatest adjustment problem faced by the intellectually gifted, however, is their tendency to become isolated from the rest of humanity. Hollingworth points out that the exceptionally gifted do not deliberately choose isolation, but are forced into it against their wills. These children are not unfriendly or ungregarious by nature. Typically they strive to play with others but their efforts are defeated by the difficulties of the case... Other children do not share their interests, their vocabulary, or their desire to organize activities. [...] Forms of solitary play develop, and these, becoming fixed as habits, may explain the fact that many highly intellectual adults are shy, ungregarious, and unmindful of human relationships, or even misanthropic and uncomfortable in ordinary social intercourse.

Most people pick up a huge amount of tacit social knowledge as children and adolescents, through very frequent interaction with many peers. This is often not true of intellectually gifted people, who usually grew up in relative isolation on account of lack of peers who shared their interests.

They often have the chance to meet others similar to themselves later on in life. One might think that this would resolve the issue. But in many cases intellectually gifted people simply never learn how beneficial it can be to interact with others. For example, the great mathematician Robert Langlands wrote:

Bochner pointed out my existence to Selberg and he invited me over to speak with him at the Institute. I have known Selberg for more than 40 years. We are on cordial terms and our offices have been essentially adjacent for more than 20 years. This is nevertheless the only mathematical conversation I ever had with him. It was a revelation.

At first blush, this seems very strange: much of Langlands' work involves generalizations of Selberg's trace formula. It seems obvious that it would be fruitful for Langlands to have spoken with Selberg about math more than once, especially given that the one conversation that he had was very fruitful! But if one thinks about what their early life experiences must have been like, as a couple of the most brilliant people in the world, it sort of makes sense: they plausibly had essentially nobody to talk to about their interests for many years, and if you go for many years without having substantive conversations with people, you might never get into the habit.

When intellectually gifted people do interact, one often sees cultural clashes, because such people created their own cultures as a substitute for the usual cultural acclimation, and share no common background culture. From the inside, one sees other intellectually gifted people, recognizes that they're very odd by mainstream standards, and thinks "these people are freaks!" But the people who one sees as freaks see one in the same light: one's own behavior is just as unusual, only in different ways, and one is often blind to this. Thus, one gets trainwreck scenarios, as when I inadvertently offended dozens of people by making strong criticisms of MIRI and Eliezer back in 2010, just after I joined the LW community.

Grady Towers concludes the essay by writing:

The tragedy is that none of the super high IQ societies created thus far have been able to meet those needs, and the reason for this is simple. None of these groups is willing to acknowledge or come to terms with the fact that much of their membership belong to the psychological walking wounded. This alone is enough to explain the constant schisms that develop, the frequent vendettas, and the mediocre level of their publications. But those are not immutable facts; they can be changed. And the first step in doing so is to see ourselves as we are.

An alarming fact about the anti-aging community

30 diegocaleiro 16 February 2015 05:49PM

Past and Present

Ten years ago teenager me was hopeful. And stupid.

The world neglected aging as a disease, and Aubrey had barely started spreading memes - to the point that it was worth it for him to let me work remotely to help with the Methuselah Foundation. They had not even received that initial $1,000,000 donation from an anonymous donor. The Methuselah Prize was running for less than $400,000, if I remember correctly. Still, I was a believer.

Now we live in the age of Larry Page's Calico, with $100,000,000 trying to tackle the problem, besides many other amazing initiatives, from the research paid for by the Life Extension Foundation and Bill Faloon, to scholars in top universities like Steve Garan and Kenneth Hayworth fixing things from our models of aging to plastination techniques. Yet, I am much more skeptical now.

Individual risk

I am skeptical because I could not find a single individual who had already used a simple technique that could save you many years of healthy life. I could not even find a single individual who looked into it and decided it wasn't worth it, or was too pricey, or something of that sort.

That technique is freezing some of your cells now.

Freezing cells is not a far-future hope; it is something that already exists, and has been possible for decades. The reason you would want to freeze them, in case you haven't thought of it, is that they are getting older every day, so the ones you have now are the youngest ones you'll ever be able to use.

Using these cells to create new organs is not something that might help you only if medicine and technology keep progressing according to the law of accelerating returns for another 10 or 30 years. We already know how to make organs out of your cells. Right now. Some organs live longer, some shorter, but it can be done - it has been done for bladders, for instance - and is being done.

Hope versus Reason

Now, you'd think that if there were an almost non-invasive technique, already shown to work in humans, that can preserve many years of your life and involves only a few trivial inconveniences - compared to changing diet or exercising, for instance - the whole longevist/immortalist crowd would be lining up for it and keeping backup tissue samples all over the place.

Well, I've asked them. I've asked some of the adamant researchers, and I've asked the superwealthy; I've asked the cryonicists and supplement gorgers; I've asked those who work on this 8 hours a day, every day, and I've asked those who pay others to do so. I asked it mostly for selfish reasons. I saw the TED talks by Juan Enriquez and Anthony Atala and thought: hey look, clearly beneficial expected life length increase, yay! Let me call someone who found this out before me - anyone, I'm probably the last one, silly me - and fix this.

I've asked them all, and I have nothing to show for it.

My takeaway lesson is: whatever it is that other people are doing to solve their own impending death, they are far from doing it rationally, and maybe most of the money and psychology involved in this whole business is about buying hope, not about staring into the void and finding out the best ways of dodging it. Maybe people are not in fact going to go all-in if the opportunity comes.

How to fix this?

Let me disclose first that I have no idea how to fix this problem. I don't mean the problem of getting all longevists to freeze their cells; I mean the problem of getting them to take information from the world of science and biomedicine and apply it to themselves. To become users of the technology they boast about. To behave rationally in a CFAR or even homo economicus sense.

I was hoping for a grandiose idea for this last paragraph, but it didn't come. I'll go with a quote from this emotional song we sang during last year's Secular Solstice celebration:

Do you realize? that everyone, you know, someday will die...

And instead of saying all your goodbyes

Let them know you realize that life goes fast

It's hard to make the good things last

Attempted Telekinesis

81 AnnaSalamon 07 February 2015 06:53PM

Related to: Compartmentalization in epistemic and instrumental rationality; That other kind of status.

Summary:  I’d like to share some techniques that made a large difference for me, and for several other folks I shared them with.  They are techniques for reducing stress, social shame, and certain other kinds of “wasted effort”.  These techniques are less developed and rigorous than the techniques that CFAR teaches in our workshops -- for example, they currently only work for perhaps 1/3rd of the dozen or so people I’ve shared them with -- but they’ve made a large enough impact for that 1/3rd that I wanted to share them with the larger group.  I’ll share them through a sequence of stories and metaphors, because, for now, that is what I have.


Could you be Prof Nick Bostrom's sidekick?

45 RobertWiblin 05 December 2014 01:09AM

If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive. Bostrom is the Director of the Future of Humanity Institute at Oxford University, and author of Superintelligence, the best guide yet to the possible risks posed by artificial intelligence.

Nobody has yet confirmed they will fund this role, but we are nevertheless interested in getting expressions of interest from suitable candidates.

The list of required characteristics is hefty, and the position would be a challenging one:

  • Willing to commit to the role for at least a year, and preferably several
  • Able to live and work in Oxford during this time
  • Conscientious and discreet
  • Trustworthy
  • Able to keep flexible hours (some days a lot of work, others not much)
  • Highly competent at almost everything in life (for example, organising travel, media appearances, choosing good products, and so on)
  • Will not screw up and look bad when dealing with external parties (e.g. media, event organisers, the university)
  • Has a good personality 'fit' with Bostrom
  • Willing to do some tasks that are not high-status
  • Willing to help Bostrom with both his professional and personal life (to free up his attention)
  • Can speak English well
  • Knowledge of rationality, philosophy, and artificial intelligence would also be helpful, and would allow you to do more work as a research assistant.

The research Bostrom can do is unique; to my knowledge, nobody else has made such significant strides in clarifying the biggest risks facing humanity as a whole. As a result, helping increase Bostrom's output by, say, 20% would be a major contribution. This person's work would also help the rest of the Future of Humanity Institute run smoothly.

The role would offer significant skill development in operations, some skill development in communications and research, and the chance to build extensive relationships with the people and organisations working on existential risks.

If you would like to know more, or be added to the list of potential candidates, please email me: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. Feel free to share this post around.

Note that we are also hiring for a bunch of other roles, with applications closing Friday the 12th of December.


Productivity 101 For Beginners

20 peter_hurford 05 November 2014 11:04PM

I'd like to believe that I'm pretty productive, and people seem interested in how I do it.  Previously, I had written "How I Am Productive" and it became one of my most popular essays of all time.

The real secret is that, in the past, I wasn't nearly as productive.  I struggled with procrastination, had issues completing assignments on time, and always felt like I never had enough time to do things.  But, starting in January 2013 and continuing for the past year and a half, I have slowly implemented several systems and habits in my life that, taken together, have made me productive.

I've learned productivity, and I want to try to teach it to others.

When I wrote "How I Am Productive", I kind of brain dumped everything that I knew in one place.  To do better, I should help people go one step at a time.  I also focused a lot on particulars of my situation -- to do better, I should be more general.  The aim of this -- Productivity 101 for Beginners -- is to try to make a general, step-by-step guide to increasing people's productivity.

...It's basically what I would do if I somehow had to start over.


Disclaimer: This is still advice based on what works for me.  I've attempted to validate it by trying it on a couple of other people and integrating feedback.  I've also tried to improve it based on what I've learned in the year between writing "How I Am Productive" and writing this.  But your mileage still may vary, and I'm not a professional coach.



Step One: Get some goals!

...So here's my step-by-step guide to being productive.  ...Start on step one.  Focus on step one.  Do not move on from step one until you're done with step one.

Most people think productivity starts with "how", but I always find that it starts with "why".

Why do you want to be productive?

...If you could do more, what would you do?  Would you try to exercise?  Would you practice programming regularly?  Would you start writing?

Action point for this step: Carefully pick two goals -- two things that you want to accomplish that you're currently not doing.  Focus on them and how awesome it would be if you could get those things done!

Avoid this common mistake: Do not pick more than two goals.  Only focus on two to start small and simple.  You can add more goals later.

You can progress to the next step when you've picked two goals that you're excited about.  These are the reasons why you want to be productive.



Step Two: Track Your Time!

So you have your two goals now.  (If you don't have your two goals, go back to Step One.)  We now know why you want to be productive.

Now we have to make some time for your goals.  But in order to do that, we have to figure out where your time is currently going.

Action point for this step: Using paper and a pencil, Google Calendar, Toggl, or some other time tracker, map out roughly what you do on a given week.  If your week is atypical, wait until a more typical week.  If all your weeks are atypical, just track one and we'll work with it.

Avoid this common mistake: Don't stress out about timing.  You can do rough estimates (I started out with fifteen minute intervals, but half hour intervals are fine) and if you miss something, it's ok.  It might take a day of practice.  Remember to have your timer with you (carry your notebook, get Toggl's mobile app, etc.) so it's easier to track things.

You can progress to the next step when you have at least three days of usable timelogs, preferably a week of timelogs.



Step Three: Timebox

Now you have to figure out when you want to accomplish your goals.  Timeboxing refers to making a box of time in your calendar when you'll accomplish something.

Action point for this step: Look in your timelog to see if you have any time that you're not spending the way you want, and make that the time you do your goals.  When I started out, I found that I would read the internet aimlessly for two hours a day.  I cut that down to one hour and then used that free hour to exercise.

You might find that good times include right when you wake up, right before you go to sleep, after class, before work, after work, etc.  Lots of different times work for different people -- just find a time that works for you!

Avoid this common mistake: Don't cut out too much suboptimal time.  Breaks are important for rest!  Maybe you can set a timer (an implicit one, like agreeing to watch only one TV episode, or an actual timer that rings), take a break for that amount of time, and then do your productive thing.  Remember how excited you are about doing it, and how bad you'll feel if you watch that second TV show!

You can progress to the next step when you have a concrete time in which you will accomplish both your goals.


Step Four: Commit!

We've long recognized that we can't achieve our goals on willpower alone -- weakness of will is just too strong.  You need the power of a commitment device if you actually want to accomplish your goals in the long run -- there is no other way.

Action point for this step: Bind both your goals to some sort of commitment device that works for you.  Go to the gym with a friend and don't let them let you cancel.  Sign up for Beeminder.  Sign up for HabitRPG.  Bet a friend.  Start making checkmarks for every day on track and don't let yourself break the streak.  Do more than one of these things.  Do whatever it takes to get yourself on track!

Avoid this common mistake: Don't use a commitment device that doesn't work for you.  If you'd lie to Beeminder, don't use it.  If you'd lie to a friend you bet, find some way to increase their oversight so that you can't lie.  You have to make your commitment device inescapable.

You can progress to the next step when you have a commitment device that has successfully made you stick to your two habits for five days in a row.  If your commitment device isn't working, get a new one.  If your time isn't working, choose a new time.  If you find yourself still failing, maybe your goal isn't important to you?  Focus on why you want to do this goal, or consider switching goals.



Step Five: Keep Going!

Don't stop now!  Keep your habit up!

Action point for this step: Continue to stick to your two goals.

Avoid this common mistake: Do not add more goals.  You must focus on your current two goals in order to make them stick.  It's worth it in the long run.

You can progress to the next step when you have stuck to your goal successfully for three weeks.



Step Six: Build!

Congrats on getting this far.  Now you're ready to add more goals as you see fit and dig into more advanced productivity advice.

Remember to keep things going slow.  Productivity is a marathon, not a sprint, and the same rules apply.  Minor setbacks don't matter if the long-run is an improvement.

You have reached the end of Productivity 101, but I'd be glad to help you further.  I'd love feedback on how it went for you.

...I'd also love feedback if one of the steps didn't work for you, so I can improve this guide for you or others.

What It's Like to Notice Things

32 BrienneStrohl 17 September 2014 02:19PM


Phenomenology is the study of the structures of experience and consciousness. Literally, it is the study of "that which appears". The first time you look at a twig sticking up out of the water, you might be curious and ask, "What forces cause things to bend when placed in water?" If you're a curious phenomenologist, though, you'll ask things like, "Why does that twig in water appear as though bent? Do other things appear to bend when placed in water? Do all things placed in water appear to bend to the same degree? Are there things that do not appear to bend when placed in water? Does my perception of the bending depend on the angle or direction from which I observe the twig?"

Phenomenology means breaking experience down into its most basic components, and being precise in our descriptions of what we actually observe, free of further speculation and assumption. A phenomenologist recognizes the difference between observing "a six-sided cube" and observing the three faces, at most, from which we extrapolate the rest.

I consider phenomenology to be a central skill of rationality. The most obvious example: You're unlikely to generate alternative hypotheses when the confirming observation and the favored hypothesis are one and the same in your experience of experience. The importance of phenomenology to rationality goes deeper than that, though. Phenomenology trains especially fine-grained introspection. The more tiny and subtle the thoughts you're aware of, the more precise the control you can gain over the workings of your mind, and the faster your cognitive reflexes can become.

(I do not at all mean to say that you should go read Husserl and Heidegger. Despite their apparent potential for unprecedented clarity, the phenomenologists, without exception, seem to revel in obfuscation. It's probably not worth your time to wade through all of that nonsense. I've mostly read about phenomenology myself for this very reason.)

I've been doing some experimental phenomenology of late.


I've noticed that rationality, in practice, depends on noticing. Some people have told me this is basically tautological, and therefore uninteresting. But if I'm right, I think it's likely very important to know, and to train deliberately.

The difference between seeing the twig as bent and seeing the twig as seeming bent may seem inane. It is not news that things that are bent tend to seem bent. Without that level of granularity in your observations, though, you may not notice that it could be possible for things to merely seem bent without being bent. When we're talking about something that may be ubiquitous to all applications of rationality, like noticing, it's worth taking a closer look at the contents of our experiences.

Many people talk about "noticing confusion", because Eliezer's written about it. Really, though, every successful application of a rationality skill begins with noticing. In particular, applied rationality is founded on noticing opportunities and obstacles. (To be clear, I'm making this up right this moment, so as far as I know it's not a generally agreed-upon thing. That goes for nearly everything in this post. I still think it's true.) You can be the most technically skilled batter in the world, and it won't help a bit if you consistently fail to notice when the ball whizzes by you--if you miss the opportunities to swing. And you're not going to run very many bases if you launch the ball straight at an opposing catcher--if you're oblivious to the obstacles.

It doesn't matter how many techniques you've learned if you miss all the opportunities to apply them, and fail to notice the obstacles when they get in your way. Opportunities and obstacles are everywhere. We can only be as strong as our ability to notice the ones that will make a difference.

Inspired by Whales' self-experiment in noticing confusion, I've been practicing noticing things. Not difficult or complicated things, like noticing confusion, or noticing biases. I've just been trying to get a handle on noticing, full stop. And it's been interesting.

Noticing Noticing

What does it mean to notice something, and what does it feel like?

I started by checking to see what I expected it to feel like to notice that it's raining, just going from memory. I tried for a split-second prediction, to find what my brain automatically stored under "noticing rain". When I thought about noticing rain, I got this sort of vague impression of rainyness, which included few sensory details and was more of an overall rainy feeling. My brain tried to tell me that "noticing rain" meant "being directly acquainted with rainyness", in much the same way that it tries to tell me it's experiencing a cube when it's actually only experiencing a pattern of light and shadows I interpret as three faces.

Then, I waited for rain. It didn't take long, because I'm in North Carolina for the month. (This didn't happen last time I was in North Carolina, so perhaps I just happened to choose The One Valley of Eternal Rain.)

The real "noticing rain" turned out to be a response to the physical sensations concurrent with the first raindrop falling on my skin. I did eventually have an "abstract rainyness feeling", but that happened a full two seconds later. My actual experience went like this.

It was cloudy and humid. This was not at the forefront of my attention, but it slowly moved in that direction as the temperature dropped. I was fairly focused on reading a book.

(I'm a little baffled by the apparent gradient between "not at all conscious of x" and "fully aware of x". I don't know how that works, but I experience the difference between being a little aware of the sky being cloudy and being focused on the patterns of light in the clouds, as analogous to the difference between being very-slightly-but-not-uncomfortably warm and burning my hand on the stove.)

My awareness of something like an "abstract rainyness feeling" moved further toward consciousness as the wind picked up. Suddenly--and the suddenness was an important part of the experience--I felt something like a cool, dull pin-prick on my arm. I looked at it, saw the water, and recognized it as a raindrop. Over the course of about half a second, several sensations leapt forward into full awareness: the darkness of my surroundings, the humidity in the air, the dark grey-blueness of the sky, the sound of rain on leaves like television static, the scent of ozone and damp earth, the feeling of cool humid wind on my face, and the word "rain" in my internal monologue.

I think it is that sudden leaping forward of many associated sensations that I would call "noticing rain".

After that, I felt a sort of mental step backward--though it was more like a zooming out or sliding away than a discrete step--from the sensations, and then a feeling of viewing them from the outside. There was a sensation of the potential to access other memories of times when it's rained.

(Sensations of potential are fascinating to me. I noticed a few weeks ago that after memorizing a list of names and faces, I could predict in the first half second of seeing the face whether or not I'd be able to retrieve the name in the next five seconds. Before I actually retrieved the name. What??? I don't know either.)

Only then did all of it resolve into the more distant and abstract "feeling of rainyness" that I'd predicted before. The resolution took four times as long as the simultaneous-leaping-into-consciousness-of-related-sensations that I now prefer to call "noticing", and ten times as long as the first-raindrop-pin-prick, which I think I'll call the "noticing trigger" if it turns out to be a general class of pre-noticing experiences.

("Can you really distinguish between 200 and 500 milliseconds?" Yes, but it's an acquired skill. I spent a block of a few minutes every day for a month, then several blocks a day for about a week, doing this Psychomotor Vigiliance Task when I was gathering data for the polyphasic sleep experiment. (No, I'm sorry, to the best of my knowledge Leverage has not yet published anything on the results of this. Long story short: Everyone who wasn't already polyphasic is still not polyphasic today.) It gives you fast feedback on simple response time. I'm not sure if it's useful for anything else, but it comes in handy when taking notes on experiences that pass very quickly.)

Noticing Environmental Cues

My second experiment was in repeated noticing. This is more closely related to rationality as habit cultivation.

Can I get better at noticing something just by practicing?

I was trying to zoom in on the experience of noticing itself, so I wanted something as simple as possible. Nothing subtle, nothing psychological, and certainly nothing I might be motivated to ignore. I wanted a straightforward element of my physical environment. I'm out in the country and driving around for errands and such about once a day, so I went with "red barn roofs".

I had an intuition that I should give myself some outward sign of having noticed, lest I not notice that I noticed, and decided to snap my fingers every time I noticed a red barn roof.

On the first drive, I noticed one red barn roof. That happened when I was almost at my destination and I thought, "Oh right, I'm supposed to be noticing red barn roofs, oops" then started actively searching for them.

Noticing a red barn roof while searching for it feels very different from noticing rain while reading a book. With the rain, it felt sort of like waking up, or like catching my name in an overheard conversation. There was a complete shift in what my brain was doing. With the barn roof, it was like I had a box with a red-barn-roof-shaped hole, and it felt like completion when I grabbed a roof and dropped it through the hole. I was prepared for the roof, and it was a smaller change in the contents of consciousness.

I noticed two on the way back, also while actively searching for them, before I started thinking about something else and became oblivious.

I thought that maybe there weren't enough red barn roofs, and decided to try noticing red roofs of all sorts of buildings the next day. This, it turns out, was the correct move.

On day two of red-roof-noticing, I got lots of practice. I noticed around fifteen roofs on the way to the store, and around seven on the way back. By the end, I was not searching for the roofs as intently as I had been the day before, but I was still explicitly thinking about the project. I was still aware of directing my eyes to spend extra time at the right level in my field of vision to pick up roofs. It was like waving the box around and waiting for something to fall in, while thinking about how to build boxes.

I went out briefly again on day two, and on the way back, I noticed a red roof while thinking about something else entirely. Specifically, I was thinking about the possibility of moving to Uruguay, and whether I knew enough Spanish to survive. In the middle of one of those unrelated thoughts, my eyes moved over a barn roof and stayed there briefly while I had the leaping-into-consciousness experience with respect to the sensations of redness, recognizing something as shaped like a building, and feeling the impulse to snap my fingers. It was like I'd been wearing the box as a hat to free up my hands, and I'd forgotten about it. And then, with a heavy ker-thunk, the roof became my new center of attention.

And oh my gosh, it was so exciting! It sounds so absurd in retrospect to have been excited about noticing a roof. But I was! It meant I'd successfully installed a new cognitive habit to run in the background. On purpose. "Woo hoo! Yeah!" (I literally said that.)

On the third day, I noticed TOO MANY red roofs. I followed the same path to the store as before, but I noticed somewhere between twenty and thirty red roofs. I got about the same number going back, so I think I was catching nearly all the opportunities to notice red roofs. (I'd have to do it for a few days to be sure.) There was a pattern to noticing, where I'd notice-in-the-background, while thinking about something else, the first roof, and then I'd be more specifically on the lookout for a minute or two after that, before my mind wandered back to something other than roofs. I got faster over time at returning to my previous thoughts after snapping my fingers, but there were still enough noticed roofs to intrude uncomfortably upon my thoughts. It was getting annoying.

So I decided to switch back to only noticing the red roofs of barns in particular.

Extinction of the more general habit didn't take very long. It was over by the end of my next fifteen-minute drive. The first three times I saw a roof, I raised my hand a little to snap my fingers before reminding myself that I don't care about non-barns anymore. The next couple of times I didn't raise my hand, but still forcefully reminded myself of my disinterest in non-barns. The promotion of red roofs into consciousness got weaker with each roof, until the difference between seeing a non-red non-barn roof and a red non-barn roof was barely perceptible. That was my drive to town today.

On the drive back, I noticed about ten red barn roofs. Three I noticed while thinking about how to install habits, four while thinking about the differences between designing exercises for in-person workshops and designing exercises to put in books, and three soon enough after the previous barn to probably count as "searching for barns".

So yes, for at least some things, it seems I can get better at noticing them just by practicing.

What These Silly Little Experiments Are Really About

My plan is to try noticing an internal psychological phenomenon next, but still something straightforward that I wouldn't be motivated not to notice. I probably need to try a couple things to find something that works well. I might go with "thinking the word 'tomorrow' in my internal monologue", for example, or possibly "wondering what my boyfriend is thinking about". I'll probably go with something more like the first, because it is clearer, and zooms in on "noticing things inside my head" without the extra noise of "noticing things that are relatively temporally indiscrete", but the second is actually a useful thing to notice.

Most of the useful things to notice are a lot less obvious than "thinking the word 'tomorrow' in my internal monologue". From what I've learned so far, I think that for "wondering what my boyfriend is thinking about", I'll need to pick out a couple of very specific, instantaneous sensations that happen when I'm curious what my boyfriend is thinking about. I expect that to be a repetition of the rain experiment, where I predict what it will feel like, then wait 'til I can gather data in real time. Once I have a specific trigger, I can repeat the red roof experiment to catch the tiny moments when I wonder what he's thinking. I might need to start with a broader category, like "notice when I'm thinking about my boyfriend", get used to noticing those sensations, and then reduce the set of sensations I'm watching out for to things that happen only when I'm curious what my boyfriend is thinking.

After that, I imagine I'll want to practice with different kinds of actions I can take when I notice a trigger. (If you've never heard of Implementation Intentions, I suggest trying them out.) So far, I've used the physical action of snapping my fingers. That was originally for clarity in recognizing the noticing, but it's also a behavioral response to a trigger. I could respond with a psychological behavior instead of a physical one, like "imagining a carrot". A useful response to noticing that I'm curious about what my boyfriend is thinking would be "check to see if he's busy" and then "say, 'What are you thinking about?'"

See, this "noticing" thing sounds boringly simple at first, and not worth much consideration in the art of rationality. Even in his original "noticing confusion" post, Eliezer really talked more about recognizing the implications of confusion than about the noticing itself.

Noticing is more complicated than it seems at first, and it's easy to mix it up with responding. There's a whole sub-art to noticing, and I really think that deliberate practice is making me better at it. Responses can be hard. It's essential to make noticing as effortless as possible. Then you can break the noticing and the responding apart, and you can recognize reality even before you know what to do with it.

Overcoming Decision Anxiety

14 TimMartin 11 September 2014 04:22AM

I get pretty anxious about open-ended decisions. I often spend an unacceptable amount of time agonizing over things like what design options to get on a custom suit, or what kind of job I want to pursue, or what apartment I want to live in. Some of these decisions are obviously important ones, with implications for my future happiness. However, in general my sense of anxiety is poorly calibrated with the importance of the decision. This makes life harder than it has to be, and lowers my productivity.

I moved apartments recently, and I decided that this would be a good time to address my anxiety about open-ended decisions. My hope is to present some ideas that will be helpful for others with similar anxieties, or to stimulate helpful discussion.



Exposure therapy

One promising way of dealing with decision anxiety is to practice making decisions without worrying about them quite so much. Match your clothes together in a new way, even if you're not 100% sure that you like the resulting outfit. Buy a new set of headphones, even if it isn't the “perfect choice.” Aim for good enough. Remind yourself that life will be okay if your clothes are slightly mismatched for one day.

This is basically exposure therapy – exposing oneself to a slightly aversive stimulus while remaining calm about it. Doing something you're (mildly) afraid to do can have a tremendously positive impact when you try it and realize that it wasn't all that bad. Of course, you can always start small and build up to bolder activities as your anxieties diminish.

For the past several months, I had been practicing this with small decisions. With the move approaching in July, I needed some more tricks for dealing with a bigger, more important decision.

Reasoning with yourself

It helps to think up reasons why your anxieties aren't justified. As in actual, honest-to-goodness reasons that you think are true. Check out this conversation between my System 1 and System 2 that happened just after my roommates and I made a decision on an apartment:

System 1: Oh man, this neighborhood [the old neighborhood] is such a great place to go for walks. It's so scenic and calm. I'm going to miss that. The new neighborhood isn't as pretty.
System 2: Well that's true, but how many walks did we actually take in five years living in the old neighborhood? If I recall correctly, we didn't even take two per year.
System 1: Well, yeah... but...
System 2: So maybe “how good the neighborhood is for taking walks” isn't actually that important to us. At least not to the extent that you're feeling. There were things that we really liked about our old living situation, but taking walks really wasn't one of them.
System 1: Yeah, you may be right...

Of course, this “conversation” took place after the decision had already been made. But making a difficult decision often entails second-guessing oneself, and this too can be a source of great anxiety. As in the above, I find that poking holes in my own anxieties really makes me feel better. I do this by being a good skeptic and turning on my critical thinking skills – only instead of, say, debunking an article on pseudoscience, I'm debunking my own worries about how bad things are going to be. This helps me remain calm.


The last piece of this process is something that should help when making future decisions. I reasoned that if my System 1 feels anxiety about things that aren't very important – if it is, as I said, poorly calibrated – then perhaps I can re-calibrate it.

Before moving apartments, I decided to make predictions about what aspects of the new living situation would affect my happiness. “How good the neighborhood is for walks” may not be important to me, but surely there are some factors that are important. So I wrote down things that I thought would be good and bad about the new place. I also rated them on how good or bad I thought they would be.

In several months, I plan to go back over that list and compare my predicted feelings to my actual feelings. What was I right about? This will hopefully give my System 1 a strong impetus to re-calibrate, and only feel anxious about aspects of a decision that are strongly correlated with my future happiness.

Future Benefits

I think we each carry in our heads a model of what is possible for us to achieve, and anxiety about the choices we make limits how bold we can be in trying new things. As a result, I think that my attempts to feel less anxiety about decisions will be very valuable to me, and allow me to do things that I couldn't do before. At the same time, I expect that making decisions of all kinds will be a quicker and more pleasant process, which is a great outcome in and of itself.

Why the tails come apart

112 Thrasymachus 01 August 2014 10:41PM

[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]

[Edit 2014/11/14: mainly adjustments and rewording in light of the many helpful comments below (thanks!). I've also added a geometric explanation.]

Many outcomes of interest have pretty good predictors. It seems that height correlates with performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan.

What's interesting is what happens to these relationships 'out on the tail': extreme outliers on a given predictor are seldom similarly extreme outliers on the outcome it predicts, and vice versa. Although 6'7" is very tall, it is only about three standard deviations above the median US adult male height - there are many thousands of US men taller than the average NBA player who are nevertheless not in the NBA. Although elite tennis players have very fast serves, if you look at the players with the fastest serves ever recorded, they aren't the very best players of their time. It is harder to look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is many more SDs above the mean than that) (1).

The trend seems to be that even when two factors are correlated, their tails diverge: the fastest servers are good tennis players, but not the very best (and the very best players serve fast, but not the very fastest); the very richest tend to be smart, but not the very smartest (and vice versa). Why?

Too much of a good thing?

One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller is good for basketball up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe having a faster serve is better, all else being equal, but focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ brings an increased risk of productivity-reducing mental illness. Or something along those lines.

I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.

The simple graphical explanation

[Inspired by this essay from Grady Towers]

Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off Google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate.

It is unsurprising to see these are correlated (I'd guess the R-squared is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general: the same pattern appears in other scatter plots (again convenience sampled by googling 'scatter plot').

Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation grows stronger, and more circular as it gets weaker. (2)

The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': because the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie with sub-maximal y and x values respectively.

So this offers an explanation of why divergence at the tails is ubiquitous. Provided the sample size is largish, and the correlation not too tight (the tighter the correlation, the larger the sample size required), one will observe ellipses with bulging sides of the distribution. (3)
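If you want to check this for yourself, here is a minimal simulation sketch (numpy; the correlation, sample size, and trial count are arbitrary choices of mine, not values from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, trials = 0.8, 10_000, 500

hits = 0
for _ in range(trials):
    # Draw correlated standard normals: y = rho*x + sqrt(1 - rho^2) * noise.
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    hits += int(np.argmax(x) == np.argmax(y))

print(f"Share of samples where the top x-scorer is also the top y-scorer: {hits / trials:.2f}")
```

Even with a correlation as strong as 0.8, the top scorer on one variable is usually not the top scorer on the other: the far corner of the ellipse is almost never occupied.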

Hence the very best basketball players aren't the very tallest (and vice versa), the very wealthiest not the very smartest, and so on and so forth for any correlated X and Y. If X and Y are "Estimated effect size" and "Actual effect size", or "Performance at T", and "Performance at T+n", then you have a graphical display of winner's curse and regression to the mean.

An intuitive explanation of the graphical explanation

It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:

The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility, and hand-eye coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard-working, being lucky, and so on.

For a toy model, pretend that wealth is wholly explained by two factors: intelligence and conscientiousness. Let's also say these are equally important to the outcome, independent of one another and are normally distributed. (4) So, ceteris paribus, being more intelligent will make one richer, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between intelligence and conscientiousness, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very smartest shouldn't be the very richest.

The intuitive explanation would go like this: start at the extreme tail - +4SD above the mean for intelligence, say. Although this gives them a massive boost to their wealth, we'd expect them to be average with respect to conscientiousness (we've stipulated they're independent). Further, as this ultra-smart population is small, we'd expect them to fall close to the average in this other independent factor: with 10 people at +4SD, you wouldn't expect any of them to be +2SD in conscientiousness.

Move down the tail to less extremely smart people - +3SD, say. These people don't get such a boost to their wealth from their intelligence, but there should be a lot more of them (if 10 at +4SD, around 500 at +3SD); this means one should expect more variation in conscientiousness - it is much less surprising to find someone +3SD in intelligence and also +2SD in conscientiousness, and in the world where these things are equally important, they would 'beat' someone +4SD in intelligence but average in conscientiousness. Although a +4SD-intelligence person will likely be wealthier than a given +3SD-intelligence person (the mean conscientiousness in both populations is 0SD, and so the average wealth of the +4SD population is 1SD higher than that of the +3SD population), the wealthiest of the +4SDs will not be as wealthy as the wealthiest of the much larger number of +3SDs. The same sort of story emerges when we look at larger numbers of factors, and in cases where the factors contribute unequally to the outcome of interest.
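The headcount ratio can be checked directly (a sketch using scipy; the band boundaries come from the example above):

```python
from scipy.stats import norm

# For a standard normal: how many people sit in the +3SD-to-+4SD band
# for every one person above +4SD?
p_band = norm.sf(3) - norm.sf(4)  # P(3 <= Z < 4)
p_tail = norm.sf(4)               # P(Z >= 4)
print(round(p_band / p_tail))     # ~42
```

A ratio of roughly 40:1, so "10 at +4SD, around 500 at +3SD" is the right order of magnitude.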

When looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky' with the other factors.
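Here is the toy model itself run as a quick simulation (a sketch under the stipulations above - two independent, equally weighted, standard-normal factors; the population size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Toy model: wealth is wholly explained by two independent, equally
# weighted, normally distributed factors.
intelligence = rng.standard_normal(n)
conscientiousness = rng.standard_normal(n)
wealth = intelligence + conscientiousness

smartest = np.argmax(intelligence)
richest = np.argmax(wealth)
print(f"Smartest person's intelligence: {intelligence[smartest]:+.2f} SD")
print(f"Richest person's intelligence:  {intelligence[richest]:+.2f} SD")
```

In a typical run, the smartest person is close to +5SD in intelligence, while the richest person is 'merely' somewhere around +3 to +4SD: the tails have come apart, with no hidden trade-off anywhere in the model.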

So that's why the tails diverge.


A parallel geometric explanation

There's also a geometric explanation. The correlation coefficient between two sets of data is the same as the cosine of the angle between them when presented as mean-centered vectors in N-dimensional space (explanations, derivations, and elaborations here, here, and here). (5) So here's another intuitive handle for tail divergence:

Grant a factor correlated with an outcome, which we represent with two vectors at an angle theta, whose cosine equals the correlation. 'Reading off' the expected outcome given a factor score is just moving along the factor vector and multiplying by cos theta to get the distance along the outcome vector. As cos theta is never greater than 1, we see regression to the mean. The geometrical analogue of the tails coming apart is that the absolute difference between the length along the factor and the length along the outcome-given-factor scales with the length along the factor: the gap between extreme values of a factor and the less extreme expected values of the outcome grows linearly as the factor value gets more extreme. For concreteness (and granting normality), a correlation of 0.5 (corresponding to an angle of sixty degrees) means that +4SD (~1/15000) on a factor will be expected to be 'merely' +2SD (~1/40) in the outcome - and a correlation of 0.5 is remarkably strong in the social sciences, even though it accounts for only a quarter of the variance. (6) The reverse - extreme outliers on the outcome are not expected to be such extreme outliers on a given contributing factor - follows by symmetry.
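In symbols (a standard fact about standardized bivariate normal variables; r is the correlation and z the factor's z-score):

```latex
\mathbb{E}\left[\, z_{\text{outcome}} \mid z_{\text{factor}} = z \,\right] = r\,z,
\qquad
\text{expected gap} = z - r\,z = (1 - r)\,z .
```

With r = cos(60°) = 0.5 and z = 4, the expected outcome is +2SD, and the expected gap (1 - r)z grows linearly as the factor score gets more extreme.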


Endnote: EA relevance

I think this is interesting in and of itself, but it has relevance to Effective Altruism, given it generally focuses on the right tail of various things (What are the most effective charities? What is the best career? etc.) It generally vindicates worries about regression to the mean or winner's curse, and suggests that these will be pretty insoluble in all cases where the populations are large: even if you have really good means of assessing the best charities or the best careers so that your assessments correlate really strongly with what ones actually are the best, the very best ones you identify are unlikely to be actually the very best, as the tails will diverge.

This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)

There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns when its marginal effectiveness dips below charity #2, we should be willing to spread funds sooner. (7) Mainly, though, it should lead us to be less self-confident.

1. Given that income isn't normally distributed, using SDs might be misleading. But non-parametric ranking gives a similar picture: if Bill Gates is ~+4SD in intelligence, then despite being the richest man in America, he is 'merely' one of the smartest tens of thousands. Looking the other way, one might point to the generally modest achievements of people in high-IQ societies, but there are worries about adverse selection.

2. As nshepperd notes below, this depends on something like the multivariate CLT. I'm pretty sure this can be weakened: all that is needed, by the lights of my graphical intuition, is that the envelope be concave. It is also worth clarifying that the 'envelope' is only meant to illustrate the shape of the distribution, rather than being some boundary that contains the entire probability density: as suggested by homunq, it is a 'pdf isobar' where probability density is higher inside the line than outside it.

3. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The Old Faithful case is an example where you actually do get a 'point', although it is likely an outlier.


4. It's clear that this model is fairly easy to extend to cases with more than two factors, but it is worth noting that in cases where the factors are positively correlated, one would need to take whatever components of the factors are independent of one another.

5. My intuition is that in Cartesian coordinates the correlation between X and Y is actually also the cosine of the angle between the regression lines of X on Y and Y on X. But I can't see an obvious derivation, and I'm too lazy to demonstrate it myself. Sorry!

6. Another intuitive dividend is that this makes it clear why you can multiply by the correlation coefficient to move between z-scores of correlated normal variables, which wasn't straightforwardly obvious to me.

7. I'd intuit, but again I can't demonstrate, the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low probability channels, like averting a very specified existential risk.
