
Why the tails come apart

80 Thrasymachus 01 August 2014 10:41PM

[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]

Many outcomes of interest have pretty good predictors. It seems that height correlates to performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan.

What is interesting is that the strength of these relationships appears to deteriorate as you advance far along the right tail. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player who are not in the NBA. Although elite tennis players have very fast serves, if you look at the players with the fastest serves ever recorded, they aren't the very best players of their time. It is harder to look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is many more SDs above the mean than that) (1).

The trend seems to be that although we know the predictors are correlated with the outcome, freakishly extreme outcomes do not go together with similarly freakishly extreme predictors. Why?

Too much of a good thing?

One candidate explanation would be that more isn't always better, and that the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller is good for basketball up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe, although a faster serve is better all else being equal, focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ carries an increased risk of productivity-reducing mental illness. Or something along those lines.

I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.

The simple graphical explanation

[Inspired by this essay from Grady Towers]

Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off Google, comparing the speed of a ball out of a baseball pitcher's hand to its speed crossing the plate:

It is unsurprising to see these are correlated (I'd guess the R-square is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience sampled from googling 'scatter plot') of quiz time versus test score:

Or this:

Or this:

Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation gets stronger, and more circular as it gets weaker:

[Figure: correlations]

The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': as the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie with sub-maximal y and x values respectively:

[Figure: diffmaxes]

So this offers an explanation of why divergence at the tails is ubiquitous. Provided the sample size is largish and the correlation not too tight (the tighter the correlation, the larger the sample size required), one will observe ellipses with the bulging sides of the distribution (2).

Hence the very best basketball players aren't the tallest (and vice versa), the very wealthiest not the smartest, and so on and so forth for any correlated X and Y. If X and Y are "Estimated effect size" and "Actual effect size", or "Performance at T", and "Performance at T+n", then you have a graphical display of winner's curse and regression to the mean.
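Here is a minimal simulation sketch of the effect (an illustration added for concreteness, not the data behind the figures above): sample points from a bivariate normal with a given correlation, and check how often the point with the largest x value is also the point with the largest y value.

```python
import numpy as np

rng = np.random.default_rng(0)

def tails_agree(rho, n=10_000):
    """Sample n points from a bivariate normal with correlation rho;
    return True if the point with the largest x also has the largest y."""
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.argmax(x) == np.argmax(y)

for rho in (0.5, 0.8, 0.95):
    agreement = np.mean([tails_agree(rho) for _ in range(200)])
    print(f"rho = {rho}: max-x point is also the max-y point in {agreement:.0%} of runs")
```

Unless the correlation is very close to 1, the two maxima should usually be different points - which is exactly the bulge at the corner of the ellipse.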

An intuitive explanation of the graphical explanation

It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:

The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility, hand-eye-coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard working, being lucky, and so on.

For a toy model, pretend that height, strength, agility and hand-eye coordination are independent of one another, Gaussian, and contribute additively to basketball ability with equal weight.(3) So, ceteris paribus, being taller will make one better at basketball, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between height and the other attributes, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very tallest shouldn't be the very best.

The intuitive explanation would go like this: Start at the extreme tail - +4SD above the mean for height. Although their 'basketball-score' gets a massive boost from their height, we'd expect them to be average with respect to the other basketball-relevant abilities (we've stipulated they're independent). Further, because this ultra-tall population is small, it won't show much spread in the other factors: with only 10 people at +4SD, you wouldn't expect any of them to be +2SD in another factor like agility.

Move down the tail to slightly less extreme values - +3SD, say. These people don't get such a boost to their basketball score from their height, but there should be a lot more of them (if 10 at +4SD, around 500 at +3SD), which means there is a lot more expected variance in the other basketball-relevant attributes - it is much less surprising to find someone +3SD in height and also +2SD in agility, and in a world where these things were equally important, they would 'beat' someone +4SD in height but average in the other attributes. Although a +4SD-height person will likely be better than a given +3SD-height person, the best of the +4SDs will not be as good as the best of the much larger number of +3SDs.

The exact point of divergence will vary depending on how the factors are weighted and how much of the variance each explains, but the point seems to hold in the general case: when looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky' with the other factors:

[Figure: maxisubmax]

So that's why the tails diverge.
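Here is a minimal sketch of the toy model (illustrative code with the model's assumptions baked in, not derived from any real data): basketball ability is the equally weighted sum of four independent standard-normal factors, and we look at where the tallest player and the best player fall in a large simulated population.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n = 1_000_000  # simulated population size
# Four independent, equally weighted Gaussian factors, as the toy model stipulates.
height, strength, agility, coordination = rng.standard_normal((4, n))
ability = height + strength + agility + coordination

best = np.argmax(ability)      # best player overall
tallest = np.argmax(height)    # tallest player
tallest_rank = int(np.sum(ability > ability[tallest])) + 1
print(f"height of the best player: {height[best]:+.2f} SD")
print(f"ability rank of the tallest player: {tallest_rank} out of {n}")

# Sanity check on the '10 at +4SD, around 500 at +3SD' step: the ratio of
# normal tail probabilities P(Z > 3) / P(Z > 4) is roughly 40-45.
print(f"tail ratio: {norm.sf(3) / norm.sf(4):.0f}")
```

With no trade-offs built in anywhere, the best player in a run like this should typically be well above average in height but nowhere near the tallest, and the tallest player should typically sit far down the ability ranking. The tail-probability ratio (about 43) also confirms that 'around 500 at +3SD for every 10 at +4SD' is the right order of magnitude.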

Endnote: EA relevance

I think this is interesting in and of itself, but it has relevance to Effective Altruism, given it generally focuses on the right tail of various things (What are the most effective charities? What is the best career? etc.) It generally vindicates worries about regression to the mean or winner's curse, and suggests that these will be pretty insoluble in all cases where the populations are large: even if you have really good means of assessing the best charities or the best careers, so that your assessments correlate really strongly with which ones actually are the best, the very best ones you identify are unlikely to be actually the very best, as the tails will diverge.

This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)

There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns when its marginal effectiveness dips below charity #2, we should be willing to spread funds sooner.(4) Mainly, though, it should lead us to be less self-confident.


1. One might look at the generally modest achievements of people in high-IQ societies as further evidence, but there are worries about adverse selection.

2. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The Old Faithful case is an example where you actually do get a 'point', although it is likely an outlier.

 

3. If you want to apply it to cases where the factors are positively correlated - which they often are - just use the components of the other factors that are independent of the factor of interest. I think, but I can't demonstrate, the other stipulations could also be relaxed.

4. I'd intuit, but again I can't demonstrate, that the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low-probability channels, like averting a very specific existential risk.

On Terminal Goals and Virtue Ethics

59 Swimmer963 18 June 2014 04:00AM

Introduction

A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.


Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued. 

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.

 

Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work. 

There are some bad reasons why it might feel wrong–i.e. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences on, for example, whether my plans actually worked.

 

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some. 

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

 

Why write this post?

It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.

Willpower Depletion vs Willpower Distraction

58 Academian 15 June 2014 06:29PM

I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (A propos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:

Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum–a region of the brain associated with responses to reward (Kringelbach, 2004)–whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort.

-- Molden, D. C. et al, The Motivational versus Metabolic Effects of Carbohydrates on Self-Control. Psychological Science.

Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:

When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.

-- Dweck and Walton, Willpower: It’s in Your Head? New York Times.

While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated and which could explain many of these observations, but I think gets too little popular attention in these discussions:

Willpower is distractible.

Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking: Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.

So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:

  • Thirst
  • Hunger
  • Sleepiness
  • Physical fatigue (like from running)
  • Physical discomfort (like from sitting)
  • That specific-other-thing you want to do
  • Anxiety about willpower depletion
  • Indignation at being asked for too much by bosses, partners, or experimenters...

... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) than energy (a resource).

If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.

The last two bullets,

  • Anxiety about willpower depletion
  • Indignation at being asked for too much by bosses, partners, or experimenters...

are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.

Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...

All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".

A Dialogue On Doublethink

49 BrienneStrohl 11 May 2014 07:38PM

Followup to: Against Doublethink (sequence), Dark Arts of Rationality, Your Strength as a Rationalist


Doublethink

It is obvious that the same thing will not be willing to do or undergo opposites in the same part of itself, in relation to the same thing, at the same time. --Book IV of Plato's Republic

Can you simultaneously want sex and not want it? Can you believe in God and not believe in Him at the same time? Can you be fearless while frightened?

To be fair to Plato, this was meant not as an assertion that such contradictions are impossible, but as an argument that the soul has multiple parts. It seems we can, in fact, want something while also not wanting it. This is awfully strange, and it led Plato to conclude the soul must have multiple parts, for surely no one part could contain both sides of the contradiction.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong. Like when you try to convince yourself that staying up just a little longer playing 2048 won't have adverse effects on the presentation you're giving tomorrow, when you know full well that's exactly what's going to happen.

But it may be that cognitive dissonance is the exception in the face of contradictions, rather than the rule. How would you know? If it doesn't cause any emotional friction, the two propositions will just sit quietly together in your brain, never mentioning that it's logically impossible for both of them to be true. When we accept a contradiction wholesale without cognitive dissonance, it's what Orwell called "doublethink".

When you're a mere mortal trying to get by in a complex universe, doublethink may be adaptive. If you want to be completely free of contradictory beliefs without spending your whole life alone in a cave, you'll likely waste a lot of your precious time working through conundrums, which will often produce even more conundrums.

Suppose I believe that my husband is faithful, and I also believe that the unfamiliar perfume on his collar indicates he's sleeping with other women without my permission. I could let that pesky little contradiction turn into an extended investigation that may ultimately ruin my marriage. Or I could get on with my day and leave my marriage intact.

It's better to just leave those kinds of thoughts alone, isn't it? It probably makes for a happier life.

Against Doublethink

Suppose you believe that driving is dangerous, and also that, while you are driving, you're completely safe. As established in Doublethink, there may be some benefits to letting that mental configuration be.

There are also some life-shattering downsides. One of the things you believe is false, you see, by the law of the excluded middle. In point of fact, it's the one that goes "I'm completely safe while driving". Believing false things has consequences.

Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear. You won't have to put up with the inconvenience of a seatbelt. You will be happily unconcerned for a day, a week, a year. Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb. Or paralyzed from the neck down. Or dead. It's not inevitable, but it's possible; how probable is it? You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in. --Eliezer Yudkowsky, Doublethink (Choosing to be Biased)

What are beliefs for? Please pause for ten seconds and come up with your own answer.

Ultimately, I think beliefs are inputs for predictions. We're basically very complicated simulators that try to guess which actions will cause desired outcomes, like survival or reproduction or chocolate. We input beliefs about how the world behaves, make inferences from them to which experiences we should anticipate given various changes we might make to the world, and output behaviors that get us what we want, provided our simulations are good enough.

My car is making a mysterious ticking sound. I have many beliefs about cars, and one of them is that if my car makes noises it shouldn't, it will probably stop working eventually, and possibly explode. I can use this input to simulate the future. Since I've observed my car making a noise it shouldn't, I predict that my car will stop working. I also believe that there is something causing the ticking. So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working. My belief has thus led to the action of researching the ticking noise, planning some simple tests, and will probably lead to cleaning the sticky lifters.

If it's true that solving the ticking noise will keep my car running, then my beliefs will cash out in correctly anticipated experiences, and my actions will cause desired outcomes. If it's false, perhaps because the ticking can be solved without addressing a larger underlying problem, then the experiences I anticipate will not occur, and my actions may lead to my car exploding.

Doublethink guarantees that you believe falsehoods. Some of the time you'll call upon the true belief ("driving is dangerous"), anticipate future experiences accurately, and get the results you want from your chosen actions ("don't drive three times the speed limit at night while it's raining"). But some of the time, if you actually believe the false thing as well, you'll call upon the opposite belief, anticipate inaccurately, and choose the last action you'll ever take.

Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you'll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions.

Contradictions may keep you happy as long as you never need to use them. Should you call upon them, though, to guide your actions, the debt on false beliefs will come due. You will drive too fast at night in the rain, you will crash, you will fly out of the car with no seat belt to restrain you, you will die, and it will be your fault.

Against Against Doublethink

What if Plato was pretty much right, and we sometimes believe contradictions because we're sort of not actually one single person?

It is not literally true that Systems 1 and 2 are separate individuals the way you and I are. But the idea of Systems 1 and 2 suggests to me something quite interesting with respect to the relationship between beliefs and their role in decision making, and modeling them as separate people with very different personalities seems to work pretty darn well when I test my suspicions.

I read Atlas Shrugged probably about a decade ago. I was impressed with its defense of capitalism, which really hammers home the reasons it’s good and important on a gut level. But I was equally turned off by its promotion of selfishness as a moral ideal. I thought that was *basically* just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.

Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters). --Scott of Slate Star Codex in All Debates Are Bravery Debates

If you're generous to a fault, "I should be more selfish" is probably a belief that will pay off in positive outcomes should you install it for future use. If you're selfish to a fault, the same belief will be harmful. So what if you were too generous half of the time and too selfish the other half? Well, then you would want to believe "I should be more selfish" with only the generous half, while disbelieving it with the selfish half.

Systems 1 and 2 need to hear different things. System 2 might be able to understand the reality of biases and make appropriate adjustments that would work if System 1 were on board, but System 1 isn't so great at being reasonable. And it's not System 2 that's in charge of most of your actions. If you want your beliefs to positively influence your actions (which is the point of beliefs, after all), you need to tailor your beliefs to System 1's needs.

For example: The planning fallacy is nearly ubiquitous. I know this because for the past three years or so, I've gotten everywhere five to fifteen minutes early. Almost every single person I meet with arrives five to fifteen minutes late. It is very rare for someone to be on time, and only twice in three years have I encountered the (rather awkward) circumstance of meeting with someone who also arrived early.

Before three years ago, I was also usually late, and I far underestimated how long my projects would take. I knew, abstractly and intellectually, about the planning fallacy, but that didn't stop System 1 from thinking things would go implausibly quickly. System 1's just optimistic like that. It responds to, "Dude, that is not going to work, and I have a twelve point argument supporting my position and suggesting alternative plans," with "Naaaaw, it'll be fine! We can totally make that deadline."

At some point (I don't remember when or exactly how), I gained the ability to look at the true due date, shift my System 1 beliefs to make up for the planning fallacy, and then hide my memory that I'd ever seen the original due date. I would see that my flight left at 2:30, and be surprised to discover on travel day that I was not late for my 2:00 flight, but a little early for my 2:30 one. I consistently finished projects on time, and only disasters caused me to be late for meetings. It took me about three months before I noticed the pattern and realized what must be going on.

I got a little worried I might make a mistake, such as leaving a meeting thinking the other person just wasn't going to show when the actual meeting time hadn't arrived. I did have a couple close calls along those lines. But it was easy enough to fix; in important cases, I started receiving Boomeranged notes from past-me around the time present-me expected things to start that said, "Surprise! You've still got ten minutes!"

This unquestionably improved my life. You don't realize just how inconvenient the planning fallacy is until you've left it behind. Clearly, considered in isolation, the action of believing falsely in this domain was instrumentally rational.

Doublethink, and the Dark Arts generally, applied to carefully chosen domains is a powerful tool. It's dumb to believe false things about really dangerous stuff like driving, obviously. But you don't have to doublethink indiscriminately. As long as you're careful, as long as you suspend epistemic rationality only when it's clearly beneficial to do so, employing doublethink at will is a great idea.

Instrumental rationality is what really matters. Epistemic rationality is useful, but what use is holding accurate beliefs in situations where that won't get you what you want?

Against Against Against Doublethink

There are indeed epistemically irrational actions that are instrumentally rational, and instrumental rationality is what really matters. It is pointless to believe true things if it doesn't get you what you want. This has always been very obvious to me, and it remains so.

There is a bigger picture.

Certain epistemic rationality techniques are not compatible with dark side epistemology. Most importantly, the Dark Arts do not play nicely with "notice your confusion", which is essentially your strength as a rationalist. If you use doublethink on purpose, confusion doesn't always indicate that you need to find out what false thing you believe so you can fix it. Sometimes you have to bury your confusion. There's an itsy bitsy pause where you try to predict whether it's useful to bury.

As soon as I finally decided to abandon the Dark Arts, I began to sweep out corners I'd allowed myself to neglect before. They were mainly corners I didn't know I'd neglected.

The first one I noticed was the way I responded to requests from my boyfriend. He'd mentioned before that I often seemed resentful when he made requests of me, and I'd insisted that he was wrong, that I was actually happy all the while. (Notice that in the short term, since I was probably going to do as he asked anyway, attending to the resentment would probably have made things more difficult for me.) This self-deception went on for months.

Shortly after I gave up doublethink, he made a request, and I felt a little stab of dissonance. Something I might have swept away before, because it seemed more immediately useful to bury the confusion than to notice it. But I thought (wordlessly and with my emotions), "No, look at it. This is exactly what I've decided to watch for. I have noticed confusion, and I will attend to it."

It was very upsetting at first to learn that he'd been right. I feared the implications for our relationship. But that fear didn't last, because we both knew the only problems you can solve are the ones you acknowledge, so it is a comfort to know the truth.

I was far more shaken by the realization that I really, truly was ignorant that this had been happening. Not because the consequences of this one bit of ignorance were so important, but because who knows what other epistemic curses have hidden themselves in the shadows? I realized that I had not been in control of my doublethink, that I couldn't have been.

Pinning down that one tiny little stab of dissonance took great preparation and effort, and there's no way I'd been working fast enough before. "How often," I wondered, "does this kind of thing happen?"

Very often, it turns out. I began noticing and acting on confusion several times a day, where before I'd been doing it a couple times a week. I wasn't just noticing things that I'd have ignored on purpose before; I was noticing things that would have slipped by because my reflexes slowed as I weighed the benefit of paying attention. "Ignore it" was not an available action in the face of confusion anymore, and that was a dramatic change. Because there are no disruptions, acting on confusion is becoming automatic.

I can't know for sure which bits of confusion I've noticed since the change would otherwise have slipped by unseen. But here's a plausible instance. Tonight I was having dinner with a friend I've met very recently. I was feeling a little bit tired and nervous, so I wasn't putting as much effort as usual into directing the conversation. At one point I realized we had stopped making any progress toward my goals, since it was clear we were drifting toward small talk. In a tired and slightly nervous state, I imagine that I might have buried that bit of information and abdicated responsibility for the conversation--not by means of considering whether allowing small talk to happen was actually a good idea, but by not pouncing on the dissonance aggressively, and thereby letting it get away. Instead, I directed my attention at the feeling (without effort this time!), inquired of myself what precisely was causing it, identified the prediction that the current course of conversation was leading away from my goals, listed potential interventions, weighed their costs and benefits against my simulation of small talk, and said, "What are your terminal values?"

(I know that sounds like a lot of work, but it took at most three seconds. The hard part was building the pouncing reflex.)

When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion. You might think you can do this selectively, but if you do, I strongly suspect you're wrong in exactly the way I was.

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques--things like willful doublethink--that I double-thought myself into believing accuracy was not so great.

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Dark side epistemology prevents total dedication to continuous improvement in epistemic rationality. Though individual dark side actions may be instrumentally rational, the patterns of thought required to allow them are not. Though instrumental rationality is ultimately the goal, your instrumental rationality will always be limited by your epistemic rationality.

That was important enough to say again: Your instrumental rationality will always be limited by your epistemic rationality.

It only takes a fraction of a second to sweep an observation into the corner. You don't have time to decide whether looking at it might prove problematic. If you take the time to protect your compartments, false beliefs you don't endorse will slide in from everywhere through those split-second cracks in your art. You must attend to your confusion the very moment you notice it. You must be relentless and unmerciful toward your own beliefs.

Excellent epistemology is not the natural state of a human brain. Rationality is hard. Without extreme dedication and advanced training, without reliable automatic reflexes of rational thought, your belief structure will be a mess. You can't have totally automatic anti-rationalization reflexes if you use doublethink as a technique of instrumental rationality.

This has been a difficult lesson for me. I have lost some benefits I'd gained from the Dark Arts. I'm late now, sometimes. And painful truths are painful, though now they are sharp and fast instead of dull and damaging.

And it is so worth it! I have much more work to do before I can move on to the next thing. But whatever the next thing is, I'll tackle it with far more predictive power than I otherwise would have--though I doubt I'd have noticed the difference.

So when I say that I'm against against against doublethink--that dark side epistemology is bad--I mean that there is more potential on the light side, not that the dark side has no redeeming features. Its fruits hang low, and they are delicious.

But the fruits of the light side are worth the climb. You'll never even know they're there if you gorge yourself in the dark forever.

New organization - Future of Life Institute (FLI)

44 Vika 14 June 2014 11:00PM

As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.

Our idea was to create a hub on the US East Coast to bring together people who care about x-risk and the future of life. FLI is currently run entirely by volunteers, and is based on brainstorming meetings where the members come together and discuss active and potential projects. The attendees are a mix of local scientists, researchers and rationalists, which results in a diversity of skills and ideas. We also hold more narrowly focused meetings where smaller groups work on specific projects. We have projects in the pipeline ranging from improving Wikipedia resources related to x-risk, to bringing together AI researchers in order to develop safety guidelines and make the topic of AI safety more mainstream.

Max has assembled an impressive advisory board that includes Stuart Russell, George Church and Stephen Hawking. The advisory board is not just for prestige - the local members attend our meetings, and some others participate in our projects remotely. We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often.

We recently held our launch event, a panel discussion "The Future of Technology: Benefits and Risks" at MIT. The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn. The discussion covered a broad range of topics from the future of bioengineering and personal genetics, to autonomous weapons, AI ethics and the Singularity. A video and transcript are available.

FLI is a grassroots organization that thrives on contributions from awesome people like the LW community - here are some ways you can help:

  • If you have ideas for research or outreach we could be doing, or improvements to what we're already doing, please let us know (in the comments to this post, or by contacting me directly).
  • If you are in the vicinity of the Boston area and are interested in getting involved, you are especially encouraged to get in touch with us!
  • Support in the form of donations is much appreciated. (We are grateful for seed funding provided by Jaan Tallinn and Matt Wage.)
More details on the ideas behind FLI can be found in this article.

Confound it! Correlation is (usually) not causation! But why not?

41 gwern 09 July 2014 03:04AM

It is widely understood that statistical correlation between two variables ≠ causation. But despite this admonition, people are routinely overconfident in claiming correlations to support particular causal interpretations and are surprised by the results of randomized experiments, suggesting that they are biased & systematically underestimating the prevalence of confounds/common-causation. I speculate that in realistic causal networks or DAGs, the number of possible correlations grows faster than the number of possible causal relationships. So confounds really are that common, and since people do not think in DAGs, the imbalance also explains overconfidence.

I’ve noticed I seem to be unusually willing to bite the correlation≠causation bullet, and I think it’s due to an idea I had some time ago about the nature of reality.

1.1 The Problem

One of the constant problems I face in my reading is that I constantly want to know about causal relationships but usually I only have correlational data, and as we all know, correlation≠causation. If the general public naively thinks correlation=causation, then most geeks know better and that correlation≠causation, but then some go meta and point out that correlation and causation do tend to correlate and so correlation weakly implies causation. But how much evidence…? If I suspect that A→B, and I collect data and establish beyond doubt that A&B correlates r=0.7, how much evidence do I have that A→B?

Now, the correlation could be an illusory correlation thrown up by all the standard statistical problems we all know about, such as too-small n, false positive from sampling error (A & B just happened to sync together due to randomness), multiple testing, p-hacking, data snooping, selection bias, publication bias, misconduct, inappropriate statistical tests, etc. I’ve read about those problems at length, and despite knowing about all that, there still seems to be a problem: I don’t think those issues explain away all the correlations which turn out to be confounds - correlation too often ≠ causation.

To measure this directly you need a clear set of correlations which are proposed to be causal, randomized experiments to establish what the true causal relationship is in each case, and both categories need to be sharply delineated in advance to avoid issues of cherrypicking and retroactively confirming a correlation. Then you’d be able to say something like ‘11 out of the 100 proposed A→B causal relationships panned out’, and start with a prior of 11% that in your case, A→B. This sort of dataset is pretty rare, although the few examples I’ve found from medicine tend to indicate that our prior should be under 10%. Not great. Why are our best guesses at causal relationships so bad?

We’d expect that the a priori odds are good: 1/3! After all, you can divvy up the possibilities as:

  1. A causes B
  2. B causes A
  3. both A and B are caused by a C (possibly in a complex way like Berkson’s paradox or conditioning on unmentioned variables, like a phone-based survey inadvertently generating conclusions valid only for the phone-using part of the population, causing amusing pseudo-correlations)

If it’s either #1 or #2, we’re good and we’ve found a causal relationship; it’s only outcome #3 which leaves us baffled & frustrated. Even if we were guessing at random, you’d expect us to be right at least 33% of the time, if not much more often because of all the knowledge we can draw on. (Because we can draw on other knowledge, like temporal order or biological plausibility. For example, in medicine you can generally rule out some of the relationships this way: if you find a correlation between taking superdupertetrohydracyline™ and pancreas cancer remission, it seems unlikely that #2 curing pancreas cancer causes a desire to take superdupertetrohydracyline™ so the causal relationship is probably either #1 superdupertetrohydracyline™ cures cancer or #3 a common cause like ‘doctors prescribe superdupertetrohydracyline™ to patients who are getting better’.)

I think a lot of people tend to put a lot of weight on any observed correlation because of this intuition that a causal relationship is normal & probable because, well, “how else could this correlation happen if there’s no causal connection between A & B‽” And fair enough - there’s no grand cosmic conspiracy arranging matters to fool us by always putting in place a C factor to cause scenario #3, right? If you question people, of course they know correlation doesn’t necessarily mean causation - everyone knows that - since there’s always a chance of a lurking confound, and it would be great if you had a randomized experiment to draw on; but you think with the data you have, not the data you wish you had, and can’t let the perfect be the enemy of the better. So when someone finds a correlation between A and B, it’s no surprise that suddenly their language & attitude change and they seem to place great confidence in their favored causal relationship even if they piously acknowledge “Yes, correlation is not causation, but… [obviously hanging out with fat people can be expected to make you fat] [surely giving babies antibiotics will help them] [apparently female-named hurricanes increase death tolls] etc etc”.

So, correlations tend to not be causation because it’s almost always #3, a shared cause. This commonness is contrary to our expectations, based on a simple & unobjectionable observation that of the 3 possible relationships, 2 are causal; and so we often reason as though correlation were strong evidence for causation. This leaves us with a paradox: experimental results seem to contradict intuition. To resolve the paradox, I need to offer a clear account of why shared causes/confounds are so common, and hopefully motivate a different set of intuitions.

1.2 What a Tangled Net We Weave When First We Practice to Believe

Here’s where Bayes nets & causal networks (seen previously on LW & Michael Nielsen) come up. When networks are inferred on real-world data, they often start to look pretty gnarly: tons of nodes, tons of arrows pointing all over the place. Daphne Koller early on in her Probabilistic Graphical Models course shows an example from a medical setting where the network has like 600 nodes and you can’t understand it at all. When you look at a biological causal network like this:

[Figure: biological causal network from “A Toolkit Supporting Formal Reasoning about Causality in Metabolic Networks”]

You start to appreciate how everything might be correlated with everything, but not cause each other.

This is not too surprising if you step back and think about it: life is complicated, we have limited resources, and everything has a lot of moving parts. (How many discrete parts does an airplane have? Or your car? Or a single cell? Or think about a chess player analyzing a position: ‘if my bishop goes there, then the other pawn can go here, which opens up a move there or here, but of course, they could also do that or try an en passant in which case I’ll be down in material but up on initiative in the center, which causes an overall shift in tempo…’) Fortunately, these networks are still simple compared to what they could be, since most nodes aren’t directly connected to each other, which tamps down on the combinatorial explosion of possible networks. (How many different causal networks are possible if you have 600 nodes to play with? The exact answer is complicated but it’s much larger than 2^600 - so very large!)

One interesting thing I managed to learn from PGM (before concluding it was too hard for me and I should try it later) was that in a Bayes net even if two nodes were not in a simple direct correlation relationship A→B, you could still learn a lot about A from setting B to a value, even if the two nodes were ‘way across the network’ from each other. You could trace the influence flowing up and down the pathways to some surprisingly distant places if there weren’t any blockers.

The bigger the network, the more possible combinations of nodes to look for a pairwise correlation between (e.g. if there are 10 nodes/variables and you are looking at bivariate correlations, then you have 10 choose 2 = 45 possible comparisons; with 20 variables, 190; with 40 variables, 780 - and 40 variables is not that many for many real-world problems). A lot of these combos will yield some sort of correlation. But does the number of causal relationships go up as fast? I don’t think so (although I can’t prove it).
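As a quick arithmetic check (added for illustration, not from the original post): the number of possible pairwise comparisons grows quadratically with the number of nodes, while a sparse network’s arrow count grows only roughly linearly.

```python
from math import comb

for n_nodes in (10, 20, 40, 600):
    print(f"{n_nodes:4d} nodes -> {comb(n_nodes, 2):6d} possible pairwise correlations")
```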

If not, then as causal networks get bigger, the number of genuine correlations will explode but the number of genuine causal relationships will increase slower, and so the fraction of correlations which are also causal will collapse.

(Or more concretely: suppose you generated a randomly connected causal network with x nodes and y arrows perhaps using the algorithm in Kuipers & Moffa 2012, where each arrow has some random noise in it; count how many pairs of nodes are in a causal relationship; now, n times initialize the root nodes to random values and generate a possible state of the network & storing the values for each node; count how many pairwise correlations there are between all the nodes using the n samples (using an appropriate significance test & alpha if one wants); divide # of causal relationships by # of correlations, store; return to the beginning and resume with x+1 nodes and y+1 arrows… As one graphs each value of x against its respective estimated fraction, does the fraction head toward 0 as x increases? My thesis is it does. Or, since there must be at least as many causal relationships in a graph as there are arrows, you could simply use that as an upper bound on the fraction.)
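Here is a rough sketch of that experiment (an illustrative stand-in, not gwern’s code and not the Kuipers & Moffa 2012 algorithm): generate a random sparse DAG, treat it as a linear-Gaussian causal model, sample n states, then compare the number of node pairs in an ancestor relationship with the number of significantly correlated pairs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def causal_fraction(x, n=1000, alpha=0.01):
    """(# ancestor pairs) / (# significantly correlated pairs) for one
    random sparse linear-Gaussian DAG over x nodes."""
    # Random DAG: w[i, j] != 0 means i -> j; edges only run from lower to
    # higher index, and the expected number of arrows stays roughly
    # proportional to x (sparse).
    edge_prob = min(1.0, 2.0 / x)
    w = np.triu(rng.normal(0, 1, (x, x)), k=1)
    w *= np.triu(rng.random((x, x)) < edge_prob, k=1)

    # Sample n states: each node is a weighted sum of its parents plus noise.
    data = np.zeros((n, x))
    for j in range(x):
        data[:, j] = data @ w[:, j] + rng.normal(0, 1, n)

    # Count pairs in a causal (ancestor) relationship via transitive closure.
    adj = (w != 0).astype(int)
    reach = adj.copy()
    for _ in range(x):
        reach = np.minimum(reach + reach @ adj, 1)
    n_causal = int(reach.sum())

    # Count pairs showing a statistically significant correlation.
    n_corr = sum(
        stats.pearsonr(data[:, i], data[:, j])[1] < alpha
        for i in range(x) for j in range(i + 1, x)
    )
    return n_causal / max(n_corr, 1)

for x in (10, 20, 40, 80):
    print(x, round(causal_fraction(x), 3))
```

Averaging over many random networks and plotting the estimated fraction against x would then directly test the thesis that it heads toward zero as the networks grow.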

It turns out, we weren’t supposed to be reasoning ‘there are 3 categories of possible relationships, so we start with 33%’, but rather: ‘there is only one explanation “A causes B”, only one explanation “B causes A”, but there are many explanations of the form “C1 causes A and B”, “C2 causes A and B”, “C3 causes A and B”…’, and the more nodes in a field’s true causal networks (psychology or biology vs physics, say), the bigger this last category will be.

The real world is the largest of causal networks, so it is unsurprising that most correlations are not causal, even after we clamp down our data collection to narrow domains. Hence, our prior for “A causes B” is not 50% (it’s either true or false) nor is it 33% (either A causes B, B causes A, or mutual cause C) but something much smaller: the number of causal relationships divided by the number of pairwise correlations for a graph, which ratio can be roughly estimated on a field-by-field basis by looking at existing work or directly for a particular problem (perhaps one could derive the fraction based on the properties of the smallest inferrable graph that fits large datasets in that field). And since the larger a correlation relative to the usual correlations for a field, the more likely the two nodes are to be close in the causal network and hence more likely to be joined causally, one could even give causality estimates based on the size of a correlation (eg. an r=0.9 leaves less room for confounding than an r of 0.1, but how much will depend on the causal network).

This is exactly what we see. How do you treat cancer? Thousands of treatments get tried before one works. How do you deal with poverty? Most programs are not even wrong. Or how do you fix societal woes in general? Most attempts fail miserably and the higher-quality your studies, the worse attempts look (leading to Rossi’s Metallic Rules). This even explains why ‘everything correlates with everything’ and Andrew Gelman’s dictum about how coefficients are never zero: the reason datasets like those mentioned by Cohen or Meehl find most of their variables to have non-zero correlations (often reaching statistical-significance) is because the data is being drawn from large complicated causal networks in which almost everything really is correlated with everything else.

And thus I was enlightened.

1.3 Comment

Since I know so little about causal modeling, I asked our local causal researcher Ilya Shpitser to maybe leave a comment about whether the above was trivially wrong / already-proven / well-known folklore / etc; for convenience, I’ll excerpt the core of his comment:

But does the number of causal relationships go up just as fast? I don’t think so (although at the moment I can’t prove it).

I am not sure exactly what you mean, but I can think of a formalization where this is not hard to show. We say A “structurally causes” B in a DAG G if and only if there is a directed path from A to B in G. We say A is “structurally dependent” with B in a DAG G if and only if there is a marginal d-connecting path from A to B in G.

A marginal d-connecting path between two nodes is a path with no consecutive edges of the form * -> * <- * (that is, no colliders on the path). In other words, all directed paths are marginal d-connecting, but the converse isn’t true.

The justification for this definition is that if A “structurally causes” B in a DAG G, then if we were to intervene on A, we would observe B change (but not vice versa) in “most” distributions that arise from causal structures consistent with G. Similarly, if A and B are “structurally dependent” in a DAG G, then in “most” distributions consistent with G, A and B would be marginally dependent (e.g. what you probably mean when you say ‘correlations are there’).

I qualify with “most” because we cannot simultaneously represent dependences and independences by a graph, so we have to choose. People have chosen to represent independences. That is, if in a DAG G some arrow is missing, then in any distribution (causal structure) consistent with G, there is some sort of independence (missing effect). But if the arrow is not missing we cannot say anything. Maybe there is dependence, maybe there is independence. An arrow may be present in G, and there may still be independence in a distribution consistent with G. We call such distributions “unfaithful” to G. If we pick distributions consistent with G randomly, we are unlikely to hit on unfaithful ones (the subset of all distributions consistent with G that is unfaithful to G has measure zero), but Nature does not pick randomly, so unfaithful distributions are a worry. They may arise for systematic reasons (maybe the equilibrium of a feedback process in bio?).

If you accept the above definition, then clearly for a DAG with n vertices, the number of pairwise structural dependence relationships is an upper bound on the number of pairwise structural causal relationships. I am not aware of anyone having worked out the exact combinatorics here, but it’s clear there are many, many more paths for structural dependence than paths for structural causality.
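
A quick way to see the combinatorics concretely is to count both kinds of pairs in small random DAGs. The sketch below is my own construction (a sparse random DAG with edges oriented from lower to higher index and roughly n arrows), not anything from Ilya's work; it uses the fact that, with nothing conditioned on, a collider-free path must have the shape a <- ... <- c -> ... -> b, so two nodes are structurally dependent exactly when they share an ancestor (counting each node as its own ancestor):

```python
import itertools
import random


def random_dag(num_nodes, edge_prob, rng):
    # Orienting every edge from a lower to a higher index keeps the graph acyclic.
    return {(i, j) for i, j in itertools.combinations(range(num_nodes), 2)
            if rng.random() < edge_prob}


def ancestors_and_self(edges, node):
    # All nodes with a directed path into `node`, plus `node` itself.
    frontier, seen = [node], {node}
    while frontier:
        u = frontier.pop()
        for i, j in edges:
            if j == u and i not in seen:
                seen.add(i)
                frontier.append(i)
    return seen


def structurally_causal(edges, a, b):
    # A directed path runs from one of the two nodes to the other.
    return a in ancestors_and_self(edges, b) or b in ancestors_and_self(edges, a)


def structurally_dependent(edges, a, b):
    # A marginal d-connecting (collider-free) path exists iff the pair shares
    # an ancestor: the path looks like  a <- ... <- c -> ... -> b.
    return bool(ancestors_and_self(edges, a) & ancestors_and_self(edges, b))


rng = random.Random(0)
for n in (5, 10, 20, 40, 80):
    edges = random_dag(n, edge_prob=2.0 / n, rng=rng)   # ~n arrows on average
    pairs = list(itertools.combinations(range(n), 2))
    causal = sum(structurally_causal(edges, a, b) for a, b in pairs)
    dependent = sum(structurally_dependent(edges, a, b) for a, b in pairs)
    print(n, causal, dependent, round(causal / max(dependent, 1), 2))
```

Because every directed path is itself collider-free, the causal count can never exceed the dependence count, matching the upper-bound remark above.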


But what you actually want is not a DAG with n vertices, but another type of graph with n vertices. The “Universe DAG” has a lot of vertices, but what we actually observe is a very small subset of these vertices, and we marginalize over the rest. The trouble is, if you start with a distribution that is consistent with a DAG, and you marginalize over some things, you may end up with a distribution that isn’t well represented by a DAG. Or “DAG models aren’t closed under marginalization.”

That is, if our DAG is A -> B <- H -> C <- D, and we marginalize over H because we do not observe H, what we get is a distribution where no DAG can properly represent all conditional independences. We need another kind of graph.
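
To make the example concrete, here is a small simulation (unit-weight linear-Gaussian equations of my own choosing, not from the comment): with H marginalized out, B and C are dependent even though neither causes the other, and conditioning on {B, C} makes A and D dependent, which is the kind of pattern the mixed graphs described next are designed to encode.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
A, D, H = rng.normal(size=(3, n))      # root nodes; H is the one we never observe
B = A + H + rng.normal(size=n)         # A -> B <- H
C = D + H + rng.normal(size=n)         # H -> C <- D


def corr(x, y):
    return np.corrcoef(x, y)[0, 1]


def partial_corr(x, y, controls):
    # Correlate the residuals of x and y after regressing out the controls.
    Z = np.column_stack([np.ones(len(x))] + controls)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return corr(rx, ry)


print(round(corr(B, C), 2))                    # ~0.33: dependent, yet no causal link
print(round(corr(A, D), 2))                    # ~0.00: marginally independent
print(round(partial_corr(A, D, [B, C]), 2))    # ~0.20: dependent given {B, C}
```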

In fact, people have come up with a mixed graph (containing -> arrows and <-> arrows) to represent margins of DAGs. Here -> means the same as in a causal DAG, but <-> means “there is some sort of common cause/confounder that we don’t want to explicitly write down.” Note: <-> is not a correlative arrow, it is still encoding something causal (the presence of a hidden common cause or causes). I am being loose here – in fact it is the absence of arrows that means things, not the presence.

I do a lot of work on these kinds of graphs, because these graphs are the sensible representation of data we typically get – drawn from a marginal of a joint distribution consistent with a big unknown DAG.

But the combinatorics work out the same in these graphs – the number of marginal d-connected paths is much bigger than the number of directed paths. This is probably the source of your intuition. Of course what often happens is you do have a (weak) causal link between A and B, but a much stronger non-causal link between A and B through an unobserved common parent. So the causal link is hard to find without “tricks.”

1.4 Heuristics & Biases

Now assuming the foregoing to be right (which I’m not sure about; in particular, I’m dubious that correlations in causal nets really do increase much faster than causal relations do), what’s the psychology of this? I see a few major ways that people might be incorrectly reasoning when they overestimate the evidence given by a correlation:

  • they might be aware of the imbalance between correlations and causation, but underestimate how much more common correlation becomes compared to causation.

    This could be shown by giving causal diagrams and seeing how elicited probability changes with the size of the diagrams: if the probability is constant, then the subjects would seem to be considering the relationship in isolation and ignoring the context.

    It might be remediable by showing a network and jarring people out of a simplistic comparison approach.
  • they might not be reasoning in a causal-net framework at all, but starting from the naive 33% base-rate you get when you treat all 3 kinds of causal relationships equally.

    This could be shown by eliciting estimates and seeing whether the estimates tend to look like base rates of 33% and modifications thereof.

    Sterner measures might be needed: could we draw causal nets with not just arrows showing influence but also another kind of arrow showing correlations? For example, the causal arrows could be drawn in black, inverse correlations in red, and regular correlations in green (see the sketch after this list). The picture would be rather messy, but simply by comparing how few black arrows there are to how many green and red ones, it might visually make the case that correlation is much more common than causation.
  • alternately, they may really be reasoning causally and suffer from a truly deep & persistent cognitive illusion: when people say ‘correlation’ they take it to be a kind of causation, never having understood the technical meaning of ‘correlation’ in the first place (which is not as unlikely as it may sound, given examples like David Hestenes’s demonstration that Aristotelian folk-physics persists in physics students because all they had learned was to guess the teacher’s passwords; on the test used, see e.g. Halloun & Hestenes 1985 & Hestenes et al 1992); in which case it’s not surprising that if they think they’ve been told a relationship is ‘causation’, then they’ll think the relationship is causation. Ilya remarks:

    Pearl has this hypothesis that a lot of probabilistic fallacies/paradoxes/biases are due to the fact that causal and not probabilistic relationships are what our brain natively thinks about. So e.g. Simpson’s paradox is surprising because we intuitively think of a conditional distribution (where conditioning can change anything!) as a kind of “interventional distribution” (no Simpson’s-type reversal under interventions: “Understanding Simpson’s Paradox”, Pearl 2014; see also Pearl’s comments on Nielsen’s blog).

    This hypothesis would claim that people who haven’t looked into the math just interpret statements about conditional probabilities as about “interventional probabilities” (or whatever their intuitive analogue of a causal thing is).

    This might be testable by trying to identify simple examples where the two approaches diverge, similar to Hestenes’s quiz for diagnosing belief in folk-physics.
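
Returning to the ‘two kinds of arrows’ suggestion in the second bullet above, here is a crude sketch of what such a picture could look like: it emits Graphviz DOT for a toy network (a single common cause H driving five measured variables, with edge signs chosen arbitrarily by me), drawing causal arrows in black and the induced correlations as dashed green (positive) or red (inverse) links.

```python
import itertools

# Toy causal net: one common cause H with signed effects on five variables.
causal_edges = {("H", "A"): +1, ("H", "B"): +1, ("H", "C"): -1,
                ("H", "D"): +1, ("H", "E"): -1}

lines = ["digraph G {"]
for (src, dst), _ in causal_edges.items():
    lines.append(f"  {src} -> {dst} [color=black];")          # causal arrows

# Every pair of H's children is correlated through H; the sign of the induced
# correlation is the product of the two edge signs.
children = {dst: sign for (_, dst), sign in causal_edges.items()}
for (x, sx), (y, sy) in itertools.combinations(children.items(), 2):
    color = "green" if sx * sy > 0 else "red"
    lines.append(f"  {x} -> {y} [dir=none, style=dashed, color={color}];")

lines.append("}")
print("\n".join(lines))
```

Even in this five-variable toy there are 5 black arrows against 10 colored links, and for a single common cause with k effects the imbalance grows as k(k−1)/2 colored links against k black arrows.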


This was originally posted to an open thread but due to the favorable response I am posting an expanded version here.

Confused as to usefulness of 'consciousness' as a concept

33 KnaveOfAllTrades 13 July 2014 11:01AM

Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.

Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.

Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, there might be some types of decision processes that do better over that set of environments than other processes, and one can quantify this relative success in any number of ways.

It's almost embarrassing to write that, since, put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if—by some force of supernatural linguistic determinism—it *causes* the condition, rather than the presented symptoms of the condition causing the label to be applied. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises those characteristics.

For the sake of brevity, even when we realise these approximations, we often use them without commenting upon or disclaiming our usage, and in many cases this is sensible. Indeed, in many cases it's not clear what the exact, decompressed form of a concept would be, or it seems obvious that there can in fact be no single, unique rigorous form of the concept, but that the usage of the imprecise term is still reasonably consistent and correlates usefully with some relevant phenomenon (e.g. tendency to successfully solve problems). Hearing that one person has a higher IQ than another might allow one to make more reliable predictions about who will have the higher lifetime income, for example.

However, widespread use of such shorthands has drawbacks. If a term like 'intelligence' is used without concern or without understanding of its core (i.e. tendencies of agents to succeed in varying situations, or 'efficient cross-domain optimization'), then it might be used teleologically; the term is reified (the mental causal graph goes from "optimising algorithm->success->'intelligent'" to "'intelligent'->success").

In this teleological mode, it feels like 'intelligence' is the 'prime mover' in the system, rather than a description applied retroactively to a set of correlations. But knowledge of those correlations makes the term redundant; once we are aware of the correlations, the term 'intelligence' is just a pointer to them, and does not add anything to them. Despite this, it seems to me that some smart people get caught up in obsessing about reified intelligence (or measures like IQ) as if it were a magical key to all else.

Over the past while, I have been leaning more and more towards the conclusion that the term 'consciousness' is used in similarly dubious ways, and today it occurred to me that there is a very strong analogy between the potential failure modes of discussions of 'consciousness' and those of discussions of 'intelligence'. In fact, I suspect that the perils of 'consciousness' might be far greater than those of 'intelligence'.

~

A few weeks ago, Scott Aaronson posted to his blog a criticism of integrated information theory (IIT). IIT attempts to provide a quantitative measure of the consciousness of a system. (Specifically, a nonnegative real number phi). Scott points out what he sees as failures of the measure phi to meet the desiderata of a definition or measure of consciousness, thereby arguing that IIT fails to capture the notion of consciousness.

What I read and understood of Scott's criticism seemed sound and decisive, but I can't shake a feeling that such arguments about measuring consciousness are missing the broader point that all such measures of consciousness are doomed to failure from the start, in the same way that arguments about specific measures of intelligence are missing a broader point about lossy compression.

Let's say I ask you to make predictions about the outcome of a game of half-court basketball between Alpha and Beta. Your prior knowledge is that Alpha always beats Beta at (individual versions of) every sport except half-court basketball, and that Beta always beats Alpha at half-court basketball. From this fact you assign Alpha a Sports Quotient (SQ) of 100 and Beta an SQ of 10. Since Alpha's SQ is greater than Beta's, you confidently predict that Alpha will beat Beta at half-court.

Of course, that would be wrong, wrong, wrong; the SQ's are encoding (or compressing) the comparative strengths and weaknesses of Alpha and Beta across various sports, and in particular the fact that Alpha always loses to Beta at half-court. (In fact, not even that much information is necessarily encoded, since other combinations of results might lead to the same scores.) So to just look at the SQ's as numbers and use that as your prediction criterion is a knowably inferior strategy to looking at the details of the case in question, i.e. the actual past results of half-court games between the two.

Since measures like this fictional SQ or actual IQ or fuzzy (or even quantitative) notions of consciousness are at best shorthands for specific abilities or behaviours, tabooing the shorthand should never leave you with less information, since a true shorthand, by its very nature, does not add any information.

When I look at something like IIT, which (if Scott's criticism is accurate) assigns a superhuman consciousness score to a system that evaluates a polynomial at some points, my reaction is pretty much, "Well, this kind of flaw is pretty much inevitable in such an overambitious definition."

Six months ago, I wrote:

"...it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?))..."

Mark Friedenbach replied recently (so, a few months later):

"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"

I feel like if Mark had made that reply soon after my comment, I might have had a hard time formulating why, but that I would have been inclined towards disputing that my computer is conscious. As it is, at this point I am struggling to see that there is any meaningful disagreement here. Would we disagree over what my computer can do? What information it can process? What tasks it is good for, and for which not so much?

What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?

Even if we did disagree (or at least did not agree) over, say, an average human's ability to detect and avoid ultraviolet light without artificial aids and modern knowledge, this lack of agreement would not feel like a messy, confusing philosophical one. It would feel like one tractable to direct experimentation. You know, like, blindfold some experimental subjects, some control subjects, and the experimenters, then see how the experimental subjects exposed to ultraviolet light react versus how the control subjects exposed to other light react. Just like if we were arguing about whether Alpha or Beta is the better athlete, there would be no mystery left over once we'd agreed about their relative abilities at every athletic activity. At most there would be terminological bickering over which scoring rule over athletic activities we should be using to measure 'athletic ability', but not any disagreement for any fixed measure.

I have been turning it over for a while now, and I am struggling to think of contexts in which consciousness really holds up to attempts to reify it. If asked why it doesn't make sense to politely ask a virus to stop multiplying because it's going to kill its host, a conceivable response might be something like, "Erm, you know it's not conscious, right?" This response might well do the job. But if pressed to cash out this response, what we're really concerned with is the absence of the usual physical-biological processes by which talking at a system might affect its behaviour, so that there is no reason to expect the polite request to increase the chance of the favourable outcome. Sufficient knowledge of physics and biology could make this even more rigorous, and no reference need be made to consciousness.

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is *defined* (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'. But then it's not clear why we should call this moral criterion 'consciousness'; insomuch as consciousness is about information processing or understanding an environment, it's not obvious what connection this has to moral worth. And insomuch as consciousness is the Magic Token of Moral Worth, it's not clear what it has to do with information processing.

If we relabelled zxcv=conscious and rewrote, "We shouldn't eat chickens because they're zxcv," then this makes it clearer that the explanation is not entirely satisfactory; what does zxcv have to do with moral worth? Well, what does consciousness have to do with moral worth? Conservation of argumentative work and the usual prohibitions on equivocation apply: You can't introduce a new sense of the word 'conscious' then plug it into a statement like "We shouldn't eat chickens because they're conscious" and dust your hands off as if your argumentative work is done. That work is done only if one's actual values and the definition of consciousness to do with information processing already exactly coincide, and this coincidence is known. But it seems to me like a claim of any such coincidence must stem from confusion rather than actual understanding of one's values; valuing a system commensurate with its ability to process information is a fake utility function.

When intelligence is reified, it becomes a teleological fake explanation; consistently successful people are consistently successful because they are known to be Intelligent, rather than their consistent success causing them to be called intelligent. Similarly consciousness becomes teleological in moral contexts: We shouldn't eat chickens because they are called Conscious, rather than 'these properties of chickens mean we shouldn't eat them, and chickens also qualify as conscious'.

So it is that I have recently been very skeptical of the term 'consciousness' (though grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?

Truth: It's Not That Great

33 ChrisHallquist 04 May 2014 10:07PM

Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

-Yvain, "Extreme Rationality: It's Not That Great"

The folks most vocal about loving "truth" are usually selling something. For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth...

The people who just want to know things because they need to make important decisions, in contrast, usually say little about their love of truth; they are too busy trying to figure stuff out.

-Robin Hanson, "Who Loves Truth Most?"

A couple weeks ago, Brienne made a post on Facebook that included this remark: "I've also gained a lot of reverence for the truth, in virtue of the centrality of truth-seeking to the fate of the galaxy." But then she edited to add a footnote to this sentence: "That was the justification my brain originally threw at me, but it doesn't actually quite feel true. There's something more directly responsible for the motivation that I haven't yet identified."

I saw this, and commented:

<puts rubber Robin Hanson mask on>

What we have here is a case of subcultural in-group signaling masquerading as something else. In this case, proclaiming how vitally important truth-seeking is is a mark of your subculture. In reality, the truth is sometimes really important, but sometimes it isn't.

</rubber Robin Hanson mask>

In spite of the distancing pseudo-HTML tags, I actually believe this. When I read some of the more extreme proclamations of the value of truth that float around the rationalist community, I suspect people are doing in-group signaling—or perhaps conflating their own idiosyncratic preferences with rationality. As a mild antidote to this, when you hear someone talking about the value of the truth, try seeing if the statement still makes sense if you replace "truth" with "information."

This standard gives many statements about the value of truth its stamp of approval. After all, information is pretty damn valuable. But statements like "truth seeking is central to the fate of the galaxy" look a bit suspicious. Is information-gathering central to the fate of the galaxy? You could argue that statement is kinda true if you squint at it right, but really it's too general. Surely it's not just any information that's central to shaping the fate of the galaxy, but information about specific subjects, and even then there are tradeoffs to make.

This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism." The "rationalism" branding encourages the meme that truth-seeking is great, so we should do lots and lots of it, because truth is so great. The effective altruism movement, on the other hand, recognizes that while gathering information about the effectiveness of various interventions is important, there are tradeoffs to be made between spending time and money on gathering information vs. just doing whatever currently seems likely to have the greatest direct impact. Recognize information is valuable, but avoid analysis paralysis.

Or, consider statements like:

  • Some truths don't matter much.
  • People often have legitimate reasons for not wanting others to have certain truths.
  • The value of truth often has to be weighed against other goals.

Do these statements sound heretical to you? But what about:

  • Information can be perfectly accurate and also worthless. 
  • People often have legitimate reasons for not wanting other people to gain access to their private information. 
  • A desire for more information often has to be weighed against other goals. 

I struggled to write the first set of statements, though I think they're right on reflection. Why do they sound so much worse than the second set? Because the word "truth" carries powerful emotional connotations that go beyond its literal meaning. This isn't just true for rationalists—there's a reason religions have sayings like, "God is Truth" or "I am the way, the truth, and the life." "God is Facts" or "God is Information" don't work so well.

There's something about "truth"—how it readily acts as an applause light, a sacred value which must not be traded off against anything else. As I type that, a little voice in me protests "but truth really is sacred"... but once we're unwilling to say there's any limit to how great truth is, hello affective death spiral.

Consider another quote, from Steven Kaas, that I see frequently referenced on LessWrong: "Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." Interestingly, the original blog post included a caveat—"we may have to count everyday social interactions as a partial exception"—which I never see quoted. That aside, the quote has always bugged me. I've never had my tires slashed, but I imagine it ruins your whole day. On the other hand, having less than maximally accurate beliefs about something could ruin your whole day, but it could very easily not, depending on the topic.

Furthermore, sometimes sharing certain information doesn't just have little benefit, it can have substantial costs, or at least substantial risks. It would seriously trivialize Nazi Germany's crimes to compare it to the current US government, but I don't think that means we have to promote maximally accurate beliefs about ourselves to the folks at the NSA. Or, when negotiating over the price of something, are you required to promote maximally accurate beliefs about the highest price you'd be willing to pay, even if the other party isn't willing to reciprocate and may respond by demanding that price?

Private information is usually considered private precisely because it has limited benefit to most people, but sharing it could significantly harm the person whose private information it is. A sensible ethic around information needs to be able to deal with issues like that. It needs to be able to deal with questions like: is this information that is in the public interest to know? And is there a power imbalance involved? My rule of thumb is: secrets kept by the powerful deserve extra scrutiny, but so conversely do their attempts to gather other people's private information. 

"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others. But parallel arguments suggest you should doubt your own justifications for feeling entitled to information others might have legitimate reasons for keeping private. Arguments like, "well truth is supremely valuable," "it's extremely important for me to have accurate beliefs," or "I'm highly rational so people should trust me" just don't cut it.

Finally, being rational in the sense of being well-calibrated doesn't necessarily require making truth-seeking a major priority. Using the evidence you have well doesn't necessarily mean gathering lots of new evidence. Often, the alternative to knowing the truth is not believing falsehood, but admitting you don't know and living with the uncertainty.

A Visualization of Nick Bostrom’s Superintelligence

31 AmandaEHouse 23 July 2014 12:24AM

Through a series of diagrams, this article will walk through key concepts in Nick Bostrom’s Superintelligence. The book is full of heavy content, and though well written, its scope and depth can make it difficult to grasp the concepts and mentally hold them together. The motivation behind making these diagrams is not to repeat an explanation of the content, but rather to present the content in such a way that the connections become clear. Thus, this article is best read and used as a supplement to Superintelligence.

Note: Superintelligence is now available in the UK. The hardcover is coming out in the US on September 3. The Kindle version is already available in the US as well as the UK.


Roadmap: there are two diagrams, both presented with an accompanying description. The two diagrams are combined into one mega-diagram at the end.

Figure 1: Pathways to Superintelligence

Figure 1 displays the five pathways toward superintelligence that Bostrom describes in chapter 2 and returns to in chapter 14 of the text. According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence. Biological cognition, i.e., the enhancement of human intelligence, may yield a weak form of superintelligence on its own. Additionally, improvements to biological cognition could feed back into driving the progress of artificial intelligence or whole brain emulation. The arrows from networks and organizations likewise indicate technologies feeding back into AI and whole brain emulation development.

Artificial intelligence and whole brain emulation are two pathways that can lead to fully realized superintelligence. Note that neuromorphic is listed under artificial intelligence, but an arrow connects from whole brain emulation to neuromorphic. In chapter 14, Bostrom suggests that neuromorphic is a potential outcome of incomplete or improper whole brain emulation. Synthetic AI includes all the approaches to AI that are not neuromorphic; other terms that have been used are algorithmic or de novo AI.
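
Since the diagram itself is not reproduced in this text version, here is a rough reconstruction of the arrows described above as a small script that emits Graphviz DOT (the edge labels and the 'weak form'/'unlikely' annotations are my own reading of the description, not Bostrom's or the author's wording):

```python
# Reconstructed from the prose above; the actual Figure 1 is not shown here.
edges = [
    ("Biological cognition",       "Artificial intelligence", "feedback"),
    ("Biological cognition",       "Whole brain emulation",   "feedback"),
    ("Networks and organizations", "Artificial intelligence", "feedback"),
    ("Networks and organizations", "Whole brain emulation",   "feedback"),
    ("Artificial intelligence",    "Superintelligence",       "full"),
    ("Whole brain emulation",      "Superintelligence",       "full"),
    ("Whole brain emulation",      "Neuromorphic AI",         "if emulation incomplete"),
    ("Biological cognition",       "Superintelligence",       "weak form"),
    ("Brain-computer interfaces",  "Superintelligence",       "unlikely"),
]

print("digraph pathways {")
for src, dst, note in edges:
    print(f'  "{src}" -> "{dst}" [label="{note}"];')
print("}")
```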

continue reading »

Failures of an embodied AIXI

27 So8res 15 June 2014 06:29PM

Building a safe and powerful artificial general intelligence seems a difficult task. Working on that task today is particularly difficult, as there is no clear path to AGI yet. Is there work that can be done now that makes it more likely that humanity will be able to build a safe, powerful AGI in the future? Benja and I think there is: there are a number of relevant problems that it seems possible to make progress on today using formally specified toy models of intelligence. For example, consider recent program equilibrium results and various problems of self-reference.

AIXI is a powerful toy model used to study intelligence. An appropriately-rewarded AIXI could readily solve a large class of difficult problems. This includes computer vision, natural language recognition, and many other difficult optimization tasks. That these problems are all solvable by the same equation — by a single hypothetical machine running AIXI — indicates that the AIXI formalism captures a very general notion of "intelligence".

However, AIXI is not a good toy model for investigating the construction of a safe and powerful AGI. This is not just because AIXI is uncomputable (and its computable counterpart AIXItl infeasible). Rather, it's because AIXI cannot self-modify. This fact is fairly obvious from the AIXI formalism: AIXI assumes that in the future, it will continue being AIXI. This is a fine assumption for AIXI to make, as it is a very powerful agent and may not need to self-modify. But this inability limits the usefulness of the model. Any agent capable of undergoing an intelligence explosion must be able to acquire new computing resources, dramatically change its own architecture, and keep its goals stable throughout the process. The AIXI formalism lacks tools to study such behavior.
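
For reference (the post itself does not reproduce it), the 'single equation' in question is Hutter's expectimax formula from Universal Artificial Intelligence (2005), paraphrased here: at cycle t the agent picks the action maximizing expected future reward under a Solomonoff-style mixture over all programs q, run on a universal machine U and weighted by their length ℓ(q), that are consistent with the history of actions a and percepts (o, r), up to a horizon m.

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Note how the inner max operators assume that every future action is also chosen by this same rule; there is no term for an agent whose future decision procedure differs from AIXI's, which is the formal face of the point that AIXI assumes it will continue being AIXI.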

This is not a condemnation of AIXI: the formalism was not designed to study self-modification. However, this limitation is neither trivial nor superficial: even though an AIXI may not need to make itself "smarter", real agents may need to self-modify for reasons other than self-improvement. The fact that an embodied AIXI cannot self-modify leads to systematic failures in situations where self-modification is actually necessary. One such scenario, made explicit using Botworld, is explored in detail below.

In this game, one agent will require another agent to precommit to a trade by modifying its code in a way that forces execution of the trade. AIXItl, which is unable to alter its source code, is not able to implement the precommitment, and thus cannot enlist the help of the other agent.

Afterwards, I discuss a slightly more realistic scenario in which two agents have an opportunity to cooperate, but one agent has a computationally expensive "exploit" action available and the other agent can measure the waste heat produced by computation. Again, this is a scenario where an embodied AIXItl fails to achieve a high payoff against cautious opponents.

Though scenarios such as these may seem improbable, they are not strictly impossible. Such scenarios indicate that AIXI — while a powerful toy model — does not perfectly capture the properties desirable in an idealized AGI.

continue reading »
