[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]
Many outcomes of interest have pretty good predictors. It seems that height correlates to performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan.
What is interesting is that the strength of these relationships appears to deteriorate as you advance far along the right tail. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player who are not in the NBA. Although elite tennis players have very fast serves, the players who have hit the fastest serves ever recorded aren't the very best players of their time. The IQ case is harder to examine due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is many more SDs above the mean than that) (1).
The trend seems to be that although we know the predictors are correlated with the outcome, freakishly extreme outcomes do not go together with similarly freakishly extreme predictors. Why?
Too much of a good thing?
One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller is good for basketball up to a point, but being really tall carries costs in things like agility. Maybe a faster serve is better, all else being equal, but focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ carries an increased risk of productivity-reducing mental illness. Or something along those lines.
I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.
The simple graphical explanation
[Inspired by this essay from Grady Towers]
Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off Google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate:
It is unsurprising to see these are correlated (I'd guess the R-square is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience sampled from googling 'scatter plot') of quiz time versus test score:
Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation grows stronger, and more circular as it gets weaker:
The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': as the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie with sub-maximal y and x values respectively:
So this offers an explanation for why divergence at the tails is ubiquitous. Provided the sample size is largeish and the correlation not too tight (the tighter the correlation, the larger the sample size required), one will observe ellipses with the bulging sides of the distribution (2).
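A quick simulation makes the claim concrete. The sketch below is my own illustration, not from the original post; the correlation r = 0.8 and the sample size are assumed values. It draws a correlated bivariate normal sample and checks whether the same point achieves both maxima - with a largeish sample, it typically does not, and the point with the largest x sits noticeably below the largest y:

```python
import random

random.seed(0)
r = 0.8          # assumed correlation (not from the post)
n = 100_000      # "largeish" sample size

xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    # Construct y so that corr(x, y) = r: y = r*x + sqrt(1 - r^2) * independent noise
    y = r * x + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

i_x = max(range(n), key=xs.__getitem__)  # index of the largest x
i_y = max(range(n), key=ys.__getitem__)  # index of the largest y

print("same point maximises both x and y?", i_x == i_y)
print(f"y-value at the maximum x: {ys[i_x]:+.2f} SD (max y is {ys[i_y]:+.2f} SD)")
```

Re-running with r closer to 1, or a smaller n, makes the two maxima coincide more often - matching footnote 2's point that the tighter the correlation, the larger the sample needed to see the bulges.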
Hence the very best basketball players aren't the tallest (and vice versa), the very wealthiest not the smartest, and so on and so forth for any correlated X and Y. If X and Y are "Estimated effect size" and "Actual effect size", or "Performance at T", and "Performance at T+n", then you have a graphical display of winner's curse and regression to the mean.
An intuitive explanation of the graphical explanation
It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:
The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility, and hand-eye coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard-working, being lucky, and so on.
For a toy model, pretend that height, strength, agility, and hand-eye coordination are independent of one another, Gaussian, and additive towards the outcome of basketball ability with equal weight.(3) So, ceteris paribus, being taller will make one better at basketball, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between height and the other attributes, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very tallest shouldn't be the very best.
The intuitive explanation would go like this: start at the extreme tail - +4SD above the mean for height. Although their 'basketball score' gets a massive boost from their height, we'd expect them to be average with respect to the other basketball-relevant abilities (we've stipulated they're independent). Further, as this ultra-tall population is small, it won't show much variance in those other abilities: with 10 people at +4SD, you wouldn't expect any of them to be +2SD in another factor like agility.
Move down the tail to slightly less extreme values - +3SD, say. These people don't get such a boost to their basketball score for their height, but there should be a lot more of them (if 10 at +4SD, around 500 at +3SD), which means there is a lot more expected variance in the other basketball-relevant abilities - it is much less surprising to find someone +3SD in height who is also +2SD in agility, and in a world where these things are equally important, they would 'beat' someone +4SD in height but average in the other attributes. Although a +4SD-height person will likely be better than a given +3SD-height person, the best of the +4SDs will not be as good as the best of the much larger number of +3SDs.
Where exactly the divergence kicks in will vary with the exact weighting of the factors - which ones explain more of the variance - but the point seems to hold in the general case: when looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky' with the other factors:
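The toy model is easy to simulate directly. In this sketch (my own; the factor names are just labels for independent standard-normal draws, and the population size is an assumption), 'ability' is the equally weighted sum of four independent Gaussian factors. Comparing the tallest simulated player with the best one typically shows the best player is tall but not the tallest:

```python
import random

random.seed(0)
n = 100_000  # assumed population size

players = []
for _ in range(n):
    height = random.gauss(0, 1)
    strength = random.gauss(0, 1)
    agility = random.gauss(0, 1)
    coordination = random.gauss(0, 1)
    # Toy model: four independent Gaussian factors, additive with equal weight
    ability = height + strength + agility + coordination
    players.append((height, ability))

tallest = max(players, key=lambda p: p[0])
best = max(players, key=lambda p: p[1])

print(f"tallest player: height {tallest[0]:+.2f} SD, ability {tallest[1]:+.2f}")
print(f"best player:    height {best[0]:+.2f} SD, ability {best[1]:+.2f}")
```

Since there are roughly 40 times as many people above +3SD as above +4SD in a Gaussian, the +3SD crowd usually contains someone lucky enough on the other three factors to beat everyone at +4SD.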
So that's why the tails diverge.
Endnote: EA relevance
I think this is interesting in and of itself, but it has relevance to Effective Altruism, given it generally focuses on the right tail of various things (What are the most effective charities? What is the best career? etc.) It generally vindicates worries about regression to the mean or winner's curse, and suggests that these will be pretty insoluble in all cases where the populations are large: even if you have really good means of assessing the best charities or the best careers, so that your assessments correlate really strongly with which ones actually are the best, the very best ones you identify are unlikely to be actually the very best, as the tails will diverge.
This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)
There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns when its marginal effectiveness dips below charity #2, we should be willing to spread funds sooner.(4) Mainly, though, it should lead us to be less self-confident.
1. One might look at the generally modest achievements of people in high-IQ societies as further evidence, but there are worries about adverse selection.
2. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The Old Faithful case is an example where you actually do get a 'point', although it is likely an outlier.
3. If you want to apply it to cases where the factors are positively correlated - which they often are - just use the components of the other factors that are independent of the factor of interest. I think, but I can't demonstrate, the other stipulations could also be relaxed.
4. I'd intuit, but again I can't demonstrate, that the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low-probability channels, like averting a very specific existential risk.
A friend recently posted a link on his Facebook page to an informational graphic about the alleged link between the MMR vaccine and autism. It said, if I recall correctly, that out of 60 studies on the matter, not one had indicated a link.
Presumably, with 95% confidence.
This bothered me. What are the odds, supposing there is no link between X and Y, of conducting 60 studies of the matter, and of all 60 concluding, with 95% confidence, that there is no link between X and Y?
Answer: 0.95^60 ≈ 0.046. (Use the first term of the binomial distribution.)
So if it were in fact true that 60 out of 60 studies failed to find a link between vaccines and autism at 95% confidence, this would prove, with 95% confidence, that studies in the literature are biased against finding a link between vaccines and autism.
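The arithmetic behind that claim can be checked in one line (a sketch of the calculation above, assuming the 60 studies are independent):

```python
# Under the null (no link) and unbiased studies, each study has an
# independent 5% chance of a false positive at the 95% confidence level,
# so the chance that all 60 report "no link" is 0.95^60.
p_all_negative = 0.95 ** 60
print(round(p_all_negative, 3))  # 0.046 - itself below the 0.05 threshold
```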
Building a safe and powerful artificial general intelligence seems a difficult task. Working on that task today is particularly difficult, as there is no clear path to AGI yet. Is there work that can be done now that makes it more likely that humanity will be able to build a safe, powerful AGI in the future? Benja and I think there is: there are a number of relevant problems that it seems possible to make progress on today using formally specified toy models of intelligence. For example, consider recent program equilibrium results and various problems of self-reference.
AIXI is a powerful toy model used to study intelligence. An appropriately-rewarded AIXI could readily solve a large class of difficult problems. This includes computer vision, natural language recognition, and many other difficult optimization tasks. That these problems are all solvable by the same equation — by a single hypothetical machine running AIXI — indicates that the AIXI formalism captures a very general notion of "intelligence".
However, AIXI is not a good toy model for investigating the construction of a safe and powerful AGI. This is not just because AIXI is uncomputable (and its computable counterpart AIXItl infeasible). Rather, it's because AIXI cannot self-modify. This fact is fairly obvious from the AIXI formalism: AIXI assumes that in the future, it will continue being AIXI. This is a fine assumption for AIXI to make, as it is a very powerful agent and may not need to self-modify. But this inability limits the usefulness of the model. Any agent capable of undergoing an intelligence explosion must be able to acquire new computing resources, dramatically change its own architecture, and keep its goals stable throughout the process. The AIXI formalism lacks tools to study such behavior.
This is not a condemnation of AIXI: the formalism was not designed to study self-modification. However, this limitation is neither trivial nor superficial: even though an AIXI may not need to make itself "smarter", real agents may need to self-modify for reasons other than self-improvement. The fact that an embodied AIXI cannot self-modify leads to systematic failures in situations where self-modification is actually necessary. One such scenario, made explicit using Botworld, is explored in detail below.
In this game, one agent will require another agent to precommit to a trade by modifying its code in a way that forces execution of the trade. AIXItl, which is unable to alter its source code, is not able to implement the precommitment, and thus cannot enlist the help of the other agent.
Afterwards, I discuss a slightly more realistic scenario in which two agents have an opportunity to cooperate, but one agent has a computationally expensive "exploit" action available and the other agent can measure the waste heat produced by computation. Again, this is a scenario where an embodied AIXItl fails to achieve a high payoff against cautious opponents.
Though scenarios such as these may seem improbable, they are not strictly impossible. Such scenarios indicate that AIXI — while a powerful toy model — does not perfectly capture the properties desirable in an idealized AGI.
A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”
That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’
(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)
My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.
You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.
I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)
I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued.
By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.
The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work.
There are some bad reasons why it might feel wrong–i.e. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences on, for example, whether my plans actually worked.
Practicing the art of rationality
Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some.
In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”
I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.
Why write this post?
It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.
I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (A propos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:
Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum–a region of the brain associated with responses to reward (Kringelbach, 2004)–whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort.
-- Molden, D. C. et al, The Motivational versus Metabolic Effects of Carbohydrates on Self-Control. Psychological Science.
Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:
When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.
-- Dweck and Walton, Willpower: It’s in Your Head? New York Times.
While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated, that could explain many of these observations, and that I think gets too little popular attention in these discussions:
Willpower is distractible.
Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking, Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.
So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:
- Physical fatigue (like from running)
- Physical discomfort (like from sitting)
- That specific-other-thing you want to do
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) rather than energy (a resource).
If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.
The last two bullets,
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.
Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...
All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".
Daenerys' Note: This is the last item in the LW Women series. Thanks to all who participated. :)
The following section will be at the top of all posts in the LW Women series.
Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post. There is a LOT of material, so I am breaking them down into more manageable-sized themed posts.
Seven women replied, totaling about 18 pages.
Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)
To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.
Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.
No seriously. I've grown up and lived in the social circles where female privilege way outweighs male privilege. I've never been sexually assaulted, nor been denied anything because of my gender. I study a male-dominated subject, and most of my friends are polite, deferential feminism-controlled men. I have, however, been able to flirt and sympathise and generally girl-game my way into getting what I want. (Charming guys is fun!) Sure, there will eventually come a point where I'll be disadvantaged in the job market because of my ability to bear children; but I've gotta balance that against the fact that I have the ability to bear children.
In fact, most of the gender problems I personally face stem from biology, so there's not much I can do about them. It sucks that I have to be the one responsible for contraception, and that my attractiveness to men depends largely on my looks but the inverse is not true. But there's not much society can do to change biological facts, so I live with them.
I don't think it's a very disputed fact that women, in general, tend to be more emotional than men. I'm an INFJ; most of my (male) friends are INTJ. With the help of Less Wrong's epistemology and a large pinch of Game, I've achieved a fair degree of luminosity over my inner workings. I'm complicated. I don't think my INTJ friends are this complicated, and the complicatedness is part of the reason why I'm an "F": my intuition system is useful. It makes me really quite good at people, especially when I can introspect and then apply my conscious mind to my instincts as well. I don't know how many of the people here are F instead of T, but for anyone who uses intuition a lot, applying proper rationality to introspection (a.k.a. luminosity) is essential. It is so so so easy to rationalise, and it takes effort to just know my instinct without rationalising false reasons for it. I'm not sure the luminosity sequence helps everyone, because everyone works differently, but just being aware of the concept and being on the lookout for ways that work is good.
There's a problem with strong intuition though, and that's that I have less conscious control over my opinions - it's hard enough being aware of them and not rationalising additional reasons for them. I judge ugly women and unsuccessful men. I try to consciously adjust for the effect, but it's hard.
Onto the topic of gender discussions on Less Wrong - it annoys me how quickly things get irrational. The whole objectification debacle of July 2009 proved that even the best can get caught up in it (though maybe things have got better since 2009?). I was confused in the same way Luke was: I didn't see anything wrong with objectification. I objectify people all the time, but I still treat them as agents when I need to. Porn is great, but it doesn't mean I'm going to find it harder to befriend a porn star. I objectify Eliezer Yudkowsky because he's a phenomenon on the internet more than a flesh-and-blood person to me, but that doesn't mean I'd have difficulty interacting with a flesh-and-blood Eliezer. On the whole, Less Wrong doesn't do well at talking about controversial topics, even though we know how to. Maybe we just need to work harder. Maybe we need more luminosity. I would love for Less Wrong to be a place where all things could just be discussed rationally.
There's another reason that I come out on a different side to most women in feminism and gender discussions though, and this is the bit I'm only saying because it's anonymous. I'm not a typical woman. I act, dress and style feminine because I enjoy feeling like a princess. I am most fulfilled when I'm in a M-dom f-sub relationship. My favourite activity is cooking and my honest-to-god favourite place in the house is the kitchen. I take pride in making awesome sandwiches. I just can't alieve it's offensive when I hear "get in the kitchen", because I'd just be like "ok! :D". I love sex, and I value getting better at it. I want to be able to have sex like a porn star. Suppressing my gag reflex is one of the most useful things I learned all year. I love being hit on and seduced by men. When I dress sexy, it is because male attention turns me on. I love getting wolf whistles. Because of luminosity and self-awareness, I'm ever-conscious of the vagina tingle. I'm aware of when I'm turned on, and I don't rationalise it away. And the same testosterone that makes me good at a male-dominated subject, makes sure I'm really easily turned on.
I understand that all these things are different when I'm consenting and I'm viewed as an agent and all that. But it's just hard to understand other girls being offended when I'm not, because it's much harder to empathise with someone you don't agree with. Not generalising from one example is hard.
Understanding other girls is hard.
Many adults maintain their intelligence through a dedication to study or hard work. I suspect this is related to sub-optimal levels of careful introspection among intellectuals.
If someone asks you what you want for yourself in life, do you have the answer ready at hand? How about what you want for others? Human values are complex, which means your talents and technical knowledge should help you think about them. Just as in your work, complexity shouldn't be a curiosity-stopper. It means "think", not "give up now."
But there are so many terrible excuses stopping you...
Too busy studying? Life is the exam you are always taking. Are you studying for that? Did you even write yourself a course outline?
Too busy helping? Decision-making is the skill you are always using, or always lacking, as much when you help others as yourself. Isn't something you use constantly worth improving on purpose?
Too busy thinking to learn about your brain? That's like being too busy flying an airplane to learn where the engines are. Yes, you've got passengers in real life, too: the people whose lives you affect.
Emotions too irrational to think about them? Irrational emotions are things you don't want to think for you, and therefore are something you want to think about. By analogy, children are often irrational, and no one sane concludes that we therefore shouldn't think about their welfare, or that they shouldn't exist.
So set aside a date. Sometime soon. Write yourself some notes. Find that introspective friend of yours, and start solving for happiness. Don't have one? For the first time in history, you've got LessWrong.com!
Reasons to make the effort:
Happiness is a pairing between your situation and your disposition. Truly optimizing your life requires adjusting both variables: what happens, and how it affects you.
You are constantly changing your disposition. The question is whether you'll do it with a purpose. Your experiences change you, and you influence both your experiences and how you think about them, which also changes you. It's going to happen. It's happening now. Do you even know how it works? Put your intelligence to work and figure it out!
The road to harm is paved with ignorance. Using your capability to understand yourself and what you're doing is a matter of responsibility to others, too. It also makes you better able to be a good friend.
You're almost certainly suffering from Ugh Fields: unconscious don't-think-about-it reflexes that form via Pavlovian conditioning. The issues most in need of your attention are often ones you just happen not to think about for reasons undetectable to you.
How not to waste the effort:
Don't wait till you're sad. Only thinking when you're sad gives you a skewed perspective. Don't infer that you can think better when you're sad just because that's the only time you try to be thoughtful. Sadness often makes it harder to think: you're farther from happiness, which can make happiness more difficult to empathize with and understand. Nonetheless, we often have to think when sad, because something bad may have happened that needs addressing.
Introspect carefully, not constantly. Don't interrupt your work every 20 minutes to wonder whether it's your true purpose in life. Respect that question as something that requires concentration, note-taking, and solid blocks of scheduled time. In those times, check over your analysis by trying to confound it, so lingering doubts can be justifiably quieted by remembering how thorough you were.
Re-evaluate on an appropriate time-scale. Try devoting a few days before each semester or work period to look at your life as a whole. At these times you'll have accumulated experience data from the last period, ripe and ready for analysis. You'll have more ideas per hour that way, and feel better about it. Before starting something new is also the most natural and opportune time to affirm or change long term goals. Then, barring large unexpected opportunities, stick to what you decide until the next period when you've gathered enough experience to warrant new reflection.
(The absent-minded driver is a mathematical example of how planning outperforms constant re-evaluation. When not engaged in a deep and careful introspection, we're all absent-minded drivers to a degree.)
Lost about where to start? I think Alicorn's story is an inspiring one. Learn to understand and defeat procrastination/akrasia. Overcome your cached selves so you can grow freely (definitely read their possible strategies at the end). Foster an everyday awareness that you are a brain, and in fact more like two half-brains.
These suggestions are among the top-rated LessWrong posts, so they'll be of interest to lots of intellectually-minded, rationalist-curious individuals. But you have your own task ahead of you, that only you can fulfill.
So don't give up. Don't procrastinate it. If you haven't done it already, schedule a day and time right now when you can realistically assess
- how you want your life to affect you and other people, and
- what you must change to better achieve this.
Eliezer has said, "I want you to live." Let me say:
I want you to be better at your life.
Update: I'm liveblogging the fundraiser here.
Read our strategy below, then give here!
As previously announced, MIRI is participating in a massive 24-hour fundraiser on May 6th, called SV Gives. This is a unique opportunity for all MIRI supporters to increase the impact of their donations. To be successful we'll need to pre-commit to a strategy and see it through. If you plan to give at least $10 to MIRI sometime this year, during this event would be the best time to do it!
We need all hands on deck to help us win the following prize as many times as possible:
$2,000 prize for the nonprofit that has the most individual donors in an hour, every hour for 24 hours.
To paraphrase, every hour, there is a $2,000 prize for the organization that has the most individual donors during that hour. That's a total of $48,000 in prizes, from sources that wouldn't normally give to MIRI. The minimum donation is $10, and an individual donor can give as many times as they want. Therefore we ask our supporters to:
- give $10 an hour, during every hour of the fundraiser that they are awake (I'll be up and donating for all 24 hours!);
- for those whose giving budgets won't cover all those hours, see below for a list of which hours you should privilege; and
- publicize this effort as widely as possible.
International donors, we especially need your help!
MIRI has a strong community of international supporters, and this gives us a distinct advantage! While North America sleeps, you'll be awake, ready to target all of the overnight $2,000 hourly prizes.
I'm pleased to announce the first annual survey of effective altruists. This is a short survey of around 40 questions (generally multiple choice), into which several collaborators and I have put a great deal of work; we would be very grateful if you took it. I'll offer $250 of my own money to one participant.
Take the survey at http://survey.effectivealtruismhub.com/
The survey should yield some interesting results such as EAs' political and religious views, what actions they take, and the causes they favour and donate to. It will also enable useful applications which will be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for followup surveys and other actions people can take. If you'd like to suggest questions, email me or comment.
Anonymised results will be shared publicly and not belong to any individual or organisation. The most robust privacy practices will be followed, with clear opt-ins and opt-outs.
I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.
Other surveys' results, and predictions for this one
Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated on average 10%, and were altruistic before becoming EAs. The time they spent on EA activities was correlated with the percentage they donated (0.4), the time their parents spent volunteering (0.3), and the percentage of their friends who were EAs (0.3).
80,000 Hours also released a questionnaire and, while it mainly focused on their impact, it yielded a list of which careers people plan to pursue: 16% for academia, 9% each for finance and software engineering, and 8% each for medicine and non-profits.
I'd be curious to hear people's predictions as to what the results of this survey will be. You might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.
I first wrote up the following post, then happened to run into Holden Karnofsky in person and asked him a much-shortened form of the question verbally. My attempt to recount Holden's verbal reply is also given further below. I was moderately impressed by Holden's response because I had not thought of it when listing out possible replies, but I don't understand yet why Holden's response should be true. Since GiveWell has recently posted about objections to GiveDirectly and replies, I decided to go ahead and post this now.
A question for GiveWell:
Your current #2 top-rated charity is GiveDirectly, which gives one-time gifts of $1000 over 9 months, directly to poor recipients in Kenya via M-PESA.
GiveWell tries for high standards of evidence of efficacy and cost-effectiveness. As I understand it, you don't just want the charity to be arguably cost effective, you want a very high probability that the charity is cost-effective.
The main evidence I've seen cited for direct giving is that the recipients who received the $1000 are then substantially better off 9 months later compared to people who aren't.
While I can imagine arguments that could repair the obvious objection to this reasoning, I haven't yet seen how the resulting evidence about cost-effectiveness could rise to the epistemic standards one would expect of GiveWell's #2 evidence-based charity.
The obvious objection is as follows: Suppose the Kenyan government simply printed new shillings and handed out $1000 of such shillings to the same recipients targeted by GiveDirectly. Although the recipients would be better off than non-recipients, this might not reflect any improvement in net utility in Kenya because no new resources were created by printing the money.
There are of course obvious replies to this obvious objection:
(1) Because the shillings handed out by GiveDirectly are purchased on the foreign currency exchange market using U.S. dollars, and would otherwise have been spent in Kenya in other ways, we should not expect any inflation of the shilling, and should expect an increase in Kenyan consumption of foreign goods corresponding to the increased price of shillings implied by GiveDirectly adding their marginal demand to the auction and thereby raising the marginal price of all shillings sold. The primary mechanism of action by which GiveDirectly benefits Kenya is by raising the price of shillings in the foreign exchange market and making more hard currency available to sellers of shillings. So far as I can tell, this argument ought to generalize: Any argument that the Kenyan government could not accomplish most of the same good by printing shillings will mean that the primary mechanism of GiveWell's effectiveness must be the U.S. dollars being exchanged for the shillings on the foreign currency market. This in turn means that GiveDirectly could accomplish most of its good by buying the same shillings on the foreign currency market and burning them.
(Or to sharpen the total point of this article: The sum of the good accomplished by GiveDirectly should equal:
- The good accomplished by the Kenyan government printing shillings and distributing them to the same recipients;
- plus the good accomplished by GiveDirectly then purchasing shillings on the foreign exchange market using US dollars, and burning them.
Indeed, since these mechanisms of action seem mostly independent, we ought to be able to state a percentage of good accomplished which is allegedly attributed to each, summing to 1. E.g. maybe 80% of the good would be achieved by printing shillings and distributing them to the same recipients, and 20% would be achieved by purchasing shillings on the foreign exchange market and burning them. But then we have mostly the same questions as before about how to generate wealth by printing shillings.)
(2) Inequality in Kenya is such that redistributing the supply of shillings toward the very poor increases utility in Kenya. Thus the Kenyan government could accomplish as much good as GiveDirectly by printing an equivalent number of shillings and giving them to the same recipients. This would create inflation that is a loss to other Kenyans, some of them also very poor, but so much of the shilling supply is held by the rich that the net results are favorable. Printing shillings can create happiness because it shifts resources from making speedboats for the rich to making corrugated iron roofs for the poor.
(It would be nice if the Kenyan government just printed shillings for GiveDirectly to use, but this the Kenyan government will not realistically do. Effective altruists must live in the real world, and in the real world GiveDirectly will only accomplish its goals with the aid of effective altruists. One cannot live in the should-universe where Kenya's government is taking up the burden. Effective altruists should reason as if the Kenyan government consists of plastic dolls who cannot be the locus of responsibility instead of them - that's heroic epistemology 101. Maybe there will eventually be returns on lobbying for Minimum Guaranteed Income in Kenya if the programs work, but that's for tomorrow, not right now.)
(3) Like the European Union, Kenya is not printing enough shillings under standard economic theory. (I have no idea if this is plausibly true for Kenya in particular.) If the government printed shillings and gave them to the same recipients, this would create real wealth in Kenya because the economy was operating below capacity and velocity of trade would pick up. The shillings purchased by GiveDirectly would otherwise have stayed in bank accounts rather than going to other Kenyans. Note that this contradicts the argument step in (1) where we said that the purchased shillings would otherwise have been spent elsewhere, so you should have questioned one argument step or the other.
(4) Village moneylenders and bosses can successfully extract most surplus generated within their villages by raising rents or demanding bribes. The only way that individuals can escape the grasp of moneylenders and rentiers is with a one-time gift that was not expected and which the moneylenders and bosses could not arrange to capture. The government could accomplish as much good as GiveDirectly by printing the same number of shillings and giving them to the same people in an unpredictable pattern. This would create some inflation but village moneylenders or bosses would ease off on people from whom they couldn't extract as much value, whereas the one-time gift recipients can purchase capital goods that will make them permanently better off in ways that don't allow the new value to be extracted by moneylenders or bosses.
If I recall correctly, GiveDirectly uses the example of a family using some of the gift money to purchase a corrugated iron roof. From my perspective the obvious objection is that they could just be purchasing a corrugated iron roof that would've gone to someone else and raising the prices of roofs. (1) says that Kenya has more foreign exchange on hands and can import, not one more corrugated iron roof, but a variety of other foreign goods; (2) says that the resources used in the corrugated iron roof would otherwise have been used to make a speedboat; (3) says that a new trade takes place in which somebody makes a corrugated iron roof that wouldn't have been manufactured otherwise; and (4) says that the village moneylenders usually adjust their interest rates so as to prevent anyone from saving up enough money to buy a corrugated iron roof.
The trouble is that all of these mechanisms of action seem much harder to measure and be sure of, than the measurable outcomes for gift recipients vs. non-recipients.
To reiterate, the sum of the good accomplished by GiveDirectly should equal the good accomplished by the Kenyan government printing shillings and distributing them to the same recipients, plus the good accomplished by GiveDirectly purchasing shillings on the foreign exchange market using US dollars and then burning them. It seems to me to be difficult to arrive at a state of strong evidence about either of the two terms in this sum, with respect to any mechanism of action I've thought of so far.
With respect to the second term in this sum: GiveDirectly buying shillings on the foreign exchange market and burning them might create wealth, but it's hard to see how you would measure this over the relevant amounts, and no such evidence was cited in the recommendation of GiveDirectly as the #2 charity.
With respect to the first term in this sum: Under the Bayesian definition of evidence, strong evidence is evidence we are unlikely to see when the theory is false. Even in the absence of any mechanism whereby printing nominal shillings creates happiness or wealth, we would still expect to find that the wealth and happiness of gift recipients exceeded the wealth of non-recipients. So measuring that the gift recipients are wealthier and happier is not strong or even medium evidence that printing nominal shillings creates wealth, unless I'm missing something here. Our posterior that printing shillings and giving them to certain people would create net wealth in any given quantity, should roughly equal our prior, after updating on the stated experimental evidence.
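The Bayesian point above can be made concrete with a small sketch. The numbers below are entirely hypothetical, chosen only to illustrate the structure of the argument: if we'd expect to observe "recipients are better off than non-recipients" almost regardless of whether printing shillings creates net wealth, the likelihood ratio is close to 1 and the posterior barely moves from the prior.

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule for a binary hypothesis H given one observation."""
    numerator = p_obs_given_h * prior
    return numerator / (numerator + p_obs_given_not_h * (1 - prior))

# H = "printing shillings and distributing them creates net wealth".
# Observation = "gift recipients are wealthier/happier than non-recipients".
# We'd expect the observation either way, so both likelihoods are high
# and nearly equal (illustrative numbers, not estimates):
p_obs_if_true = 0.95
p_obs_if_false = 0.90

print(posterior(0.5, p_obs_if_true, p_obs_if_false))  # roughly 0.51
```

With these made-up likelihoods, a prior of 0.5 shifts only to about 0.51: the observation is weak evidence, which is the sense in which "our posterior ... should roughly equal our prior."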
When I posed a shortened form of this question to Holden Karnofsky, he replied (roughly, I am trying to rephrase from memory):
It seems to me that this is a perverse decomposition of the benefit accomplished. There's no inflation in the shilling because you're buying them, and since this is true, decomposing the benefit into an operation that does inflationary damage as a side effect, and then another operation that makes up for the inflation, is perverse. It's like criticizing the Against Malaria Foundation based on a hypothetical which involves the mosquito nets being made from the flesh of babies and then adding another effect which saves the lives of other babies. Since this is a perverse sum involving a strange extra side effect, it's okay that we can't get good estimates involving either of the terms in it.
Please keep in mind that this is Holden's off-the-cuff, non-written in-person response as rephrased by Eliezer Yudkowsky from imperfect memory.
With that said, I've thought about (what I think was) Holden's answer and I feel like I'm still missing something. I agree that if U.S. dollars were being sent directly to Kenyan recipients and used only to purchase foreign goods, so that foreign goods were being directly sent from the U.S. to Kenyan recipients, then improvement in measured outcome for recipients compared to non-recipients would be an appropriate metric, and that the decomposition would be perverse. But if the received money, in the form of Kenyan shillings, is being used primarily to purchase Kenyan goods, and causing those goods to be shipped to one villager rather than another while also possibly increasing velocity of trade, remedying inequality, and enabling completely different actors to buy some amount of foreign goods, then I honestly don't understand why this scenario should have the same causal mechanisms as the scenario where foreign goods are being shipped in from outside the country. And then I honestly don't understand why measured improvements for one Kenyan over another should be a good proxy for aggregate welfare change to the country.
I may be missing something that an economist would find obvious or I may have misunderstood Holden's reply. But to me, my sum seems like an obvious causal decomposition of the effects in Kenya, neither of whose terms can be estimated well. I don't understand why I should expect the uncertainty in these two estimates to cancel out when they are added; I don't understand what background causal model yields this conclusion.
To be clear, I personally would guess that the U.S. would be net better off, if the Federal Reserve directly sent everyone in the U.S. with income under $20K/year a one-time $6,000 check with the money phasing out at a 10% rate up to $80K/year. This is because, in order of importance:
- I buy the analogous market monetarist argument (3) that the U.S. is printing too little money.
- I buy the analogous argument (2) about inequality.
- (However, I also somewhat suspect that some analogous form of (4) is going on with poor people somehow systematically having all but a certain amount of value extracted from them, which is in general how a modern country can have only 2% instead of 95% of the population being farmers, and yet there are still people living hand-to-mouth. I would worry that a predictable, universal one-time gift of $6K would not defeat this phenomenon, and that the gift money will just be extracted again somehow. In the case of Minimum Guaranteed Income, I would worry that the labor share of income will drop proportionally to small amounts of MGI as wages are just bid down by people who can live on less. Or something. This would be a much longer discussion and the ideas are much less simple than the above two notions, probably also less important. I'm just mentioning it again because of my long-term puzzlement with the question "Why are there still poor people after agricultural productivity rose by a factor of 100?")
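For concreteness, the phase-out schedule described above works out as follows (a sketch of the numbers given, not a policy proposal): a full $6,000 below $20K of income, reduced by $0.10 for each dollar of income above that, reaching zero at $80K.

```python
def one_time_check(income):
    """Hypothetical one-time transfer: $6,000 for incomes up to $20K/year,
    phased out at a 10% rate, hitting zero at $80K/year."""
    if income <= 20_000:
        return 6_000.0
    # Each dollar of income above $20K reduces the check by $0.10:
    return max(0.0, 6_000.0 - 0.10 * (income - 20_000))

for inc in (15_000, 20_000, 50_000, 80_000, 100_000):
    print(inc, one_time_check(inc))  # 6000, 6000, 3000, 0, 0
```

A 10% phase-out rate means the transfer shrinks gradually rather than vanishing at a cliff, which is why it reaches zero exactly $60K above the full-benefit threshold.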
What I wouldn't say is that my belief in the above is as strong as my belief in, say, the intelligence explosion. I'd guess that the printing operation would do more good than harm, but it's not what I would call a strong evidence-based conclusion. If we're going to be okay with that standard of argument generally, then the top charity under that standard of reasoning, generally and evenhandedly applied, ought to work out to some charity that does science and technology research. (X-risk minimization might seem substantially 'weirder' than that, but the best science-funding charities should be only equally weird.) And I wouldn't measure the excess of happiness of gift-recipients compared to non-recipients in a pilot program, and call this a good estimate of the net good if a Minimum Guaranteed Income were universally adopted.
So to reiterate, my question to GiveWell is not "Why do you think GiveDirectly might maybe end up doing some good anyway?" but "Does GiveDirectly rise to the standards required for your #2 evidence-based charity?"