
In Praise of Maximizing – With Some Caveats

22 wallowinmaya 15 March 2015 07:40PM

Most of you are probably familiar with the two contrasting decision-making strategies "maximizing" and "satisficing", but a short recap won't hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that is good enough, i.e. one that meets or exceeds a certain threshold of acceptability. In contrast, maximizing means continuing to search until the best possible option is found.
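To make the contrast concrete, here is a minimal sketch of the two strategies as search procedures (the function names and the threshold value are mine, purely for illustration):

```python
import random

def satisfice(options, utility, threshold):
    """Return the first option that is good enough, i.e. meets the threshold."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # nothing met the threshold

def maximize(options, utility):
    """Examine every option and return the best one."""
    return max(options, key=utility)

# Toy usage: 1000 options with utilities drawn uniformly from [0, 1].
options = [random.random() for _ in range(1000)]
print(satisfice(options, lambda x: x, threshold=0.9))  # stops at the first option >= 0.9
print(maximize(options, lambda x: x))                  # scans all 1000 options for the best
```

The satisficer's search cost is bounded by how soon an acceptable option turns up; the maximizer always pays for the full search.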

Research indicates (Schwartz et al., 2002) that there are individual differences with regard to these two decision-making strategies. That is, some individuals – so-called 'maximizers' – tend to search extensively for the optimal solution. Other people – 'satisficers' – settle for good enough1. Satisficers, in contrast to maximizers, tend to accept the status quo and see no need to change their circumstances2.

When the subject is raised, maximizing usually gets a bad rap. For example, Schwartz et al. (2002) found "negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret."

So should we all try to become satisficers? At least some scientists and the popular press seem to draw this conclusion:

Maximisers miss out on the psychological benefits of commitment, leaving them less satisfied than their more contented counterparts, the satisficers. ...Current research is trying to understand whether they can change. High-level maximisers certainly cause themselves a lot of grief.

I beg to differ. Satisficers may be more content with their lives, but most of us don't live for the sake of happiness alone. Of course, satisficing makes sense when not much is at stake3. However, maximizing also can prove beneficial, for the maximizers themselves and for the people around them, especially in the realm of knowledge, ethics, relationships and when it comes to more existential issues – as I will argue below4.

Belief systems and Epistemology

Ideal rationalists could be thought of as epistemic maximizers: They try to notice slight inconsistencies in their worldview, take ideas seriously, beware wishful thinking, compartmentalization, rationalizations, motivated reasoning, cognitive biases and other epistemic sins. Driven by curiosity, they don't try to confirm their prior beliefs, but wish to update them until they are maximally consistent and maximally correspondent with reality. To put it poetically, ideal rationalists as well as great scientists don't content themselves to wallow in the mire of ignorance but are imbued with the Faustian yearning to ultimately understand whatever holds the world together in its inmost folds.

In contrast, consider the epistemic habits of the average Joe Christian: He will certainly profess that having true beliefs is important to him. But he doesn't go to great lengths to actually make this happen. For example, he probably believes in an omnipotent and benevolent being that created our universe. Did he impartially weigh all available evidence to reach this conclusion? Probably not. More likely, he merely shares the beliefs of his parents and his peers. But isn't he bothered by the problem of evil or Occam's razor? And what about all those other religions whose adherents believe in different doctrines with the same certainty?

Many people don’t have good answers to these questions. Their model of how the world works is neither very coherent nor accurate but it's comforting and good enough. They see little need to fill the epistemic gaps and inconsistencies in their worldview or to search for a better alternative. Thus, one could view them as epistemic satisficers. Of course, all of us exhibit this sort of epistemic laziness from time to time. In the words of Jonathan Haidt (2013):

We take a position, look for evidence that supports it, and if we find some evidence—enough so that our position “makes sense”—we stop thinking.

Usually, I try to avoid taking cheap shots at religion and therefore I want to note that similar points apply to many non-theistic belief systems.

Ethics

Let's go back to average Joe: he presumably obeys the dictates of the law and his religion and occasionally donates to (ineffective) charities. Joe probably thinks that he is a “good” person and many people would likely agree. This leads us to an interesting question: how do we typically judge the morality of our own actions?

Let's delve into the academic literature and see what it has to offer: In one exemplary study, Sachdeva et al. (2009) asked participants to write a story about themselves using either morally positive words (e.g. fair, nice) or morally negative words (e.g. selfish, mean). Afterwards, the participants were asked if and how much they would like to donate to a charity of their choice. The result: Participants who wrote a story containing the positive words donated only one fifth as much as those who wrote a story with negative words.

This effect is commonly referred to as moral licensing: People with a recently boosted moral self-concept feel like they have done enough and see no need to improve the world even further. Or, as McGonigal (2011) puts it (emphasis mine):

When it comes to right and wrong, most of us are not striving for moral perfection. We just want to feel good enough – which then gives us permission to do whatever we want.

Another well known phenomenon is scope neglect. One explanation for scope neglect is the "purchase of moral satisfaction" proposed by Kahneman and Knetsch (1992): Most people don't try to do as much good as possible with their money, they only spend just enough cash to create a "warm-fuzzy feeling" in themselves.

Phenomena like "moral licensing" and "purchase of moral satisfaction" indicate that it is all too human to act only as altruistically as is necessary to feel, or seem, good enough. This could be described as "ethical satisficing" because people merely follow a course of action that meets or exceeds a certain threshold of moral goodness. They don't try to carry out the morally optimal action or an approximation thereof (as measured by their own axiology).

I think I have cited enough academic papers in the last paragraphs, so let's get more speculative: Many, if not most, people5 tend to be intuitive deontologists6. Deontology basically posits that some actions are morally required and some actions are morally forbidden. As long as you perform the morally required ones and don't engage in the morally forbidden ones, you are off the hook. There is no need to do more, no need to perform supererogatory acts. Not neglecting your duties is good enough. In short, deontology could also be viewed as ethical satisficing (see footnote 7 for further elaboration).

In contrast, consider deontology's arch-enemy: Utilitarianism. Almost all branches of utilitarianism share the same principal idea: That one should maximize something for as many entities as possible. Thus, utilitarianism could be thought of as ethical maximizing8.

Effective altruists are an even better example of ethical maximizing because they actually try to identify and implement (or at least pretend to try) the most effective approaches to improving the world. Some conduct in-depth research and compare the effectiveness of hundreds of different charities to find the ones that save the most lives with as little money as possible. And rumor has it there are people who have even weirder ideas about how to ethically optimize literally everything. But more on this later.

Friendships and conversations

Humans intuitively assume that the desires and needs of other people are similar to their own. Consequently, I thought that everyone secretly yearns to find like-minded companions with whom one can talk about one's biggest hopes as well as one's greatest fears and form deep, lasting friendships.

But experience tells me that I was probably wrong, at least to some degree: I found it quite difficult to have these sorts of conversations with a certain kind of person, especially in groups (luckily, I've also found enough exceptions). It seems that some people are satisfied as long as their conversations meet a certain, not very high threshold of acceptability. Similar observations could be made about their friendships in general. One could call them social or conversational satisficers. By the way, this time research actually suggests that conversational maximizing is probably better for your happiness than small talk (Mehl et al., 2010).

Interestingly, what could be called "pluralistic superficiality" may account for many instances of small talk and superficial friendships since everyone experiences this atmosphere of boring triviality but thinks that the others seem to enjoy the conversations. So everyone is careful not to voice their yearning for a more profound conversation, not realizing that the others are suppressing similar desires.

Crucial Considerations and the Big Picture

On to the last section of this essay. It’s even more speculative and half-baked than the previous ones, but it may be the most interesting, so bear with me.

Research suggests that many people don't even bother searching for answers to the big questions of existence. For example, in a representative sample of 603 Germans, 35% of the participants could be classified as existentially indifferent, that is, they neither think their lives are meaningful nor suffer from this lack of meaning (Schnell, 2010).

The existential thirst of the remaining 65% is presumably harder to satisfy, but how much harder? Many people don't invest much time or cognitive resources in ascertaining their actual terminal values and how to optimally reach them – which is arguably of the utmost importance. Instead they appear to follow a mental checklist of common life goals (one could call them "cached goals") such as a nice job, a romantic partner, a house and probably kids. I'm not saying that such goals are "bad" – I also prefer having a job to sleeping under a bridge and having a partner to being alone. But people usually acquire and pursue their (life) goals unsystematically and without much reflection, which makes it unlikely that such goals exhaustively reflect their idealized preferences. Unfortunately, many humans are so occupied by the pursuit of such goals that they are forced to abandon further contemplation of the big picture.

Furthermore, many of them lack the financial, intellectual or psychological capacities to ponder complex existential questions. I'm not blaming subsistence farmers in Bangladesh for not reading more about philosophy, rationality or the far future. But there are more than enough affluent, highly intelligent and inquisitive people who certainly would be able to reflect on crucial considerations. Instead, they spend most of their waking hours maximizing nothing but the money in their bank accounts or interpreting the poems of some Arabic guy from the 7th century9.

Generally, many people seem to take the current rules of our existence for granted and content themselves with the fundamental evils of the human condition such as aging, needless suffering or death. Whatever the reason may be, they don't try to radically change the rules of life and their everyday behavior seems to indicate that they’ve (gladly?) accepted their current existence and the human condition in general. One could call them existential satisficers.

Contrast this with the mindset of transhumanism. Generally, transhumanists are not willing to accept the horrors of nature; they recognize that human nature itself is deeply flawed. Thus, transhumanists want to fundamentally alter the human condition and aim to eradicate, for example, aging, unnecessary suffering and ultimately death. Through various technologies transhumanists desire to create a utopia for everyone. Thus, transhumanism could be thought of as existential maximizing10.

However, existential maximizing and transhumanism are not very popular. Quite the opposite: existential satisficing – accepting the seemingly unalterable human condition – has a long philosophical tradition. To give some examples: The otherwise admirable Stoics believed that the whole universe is pervaded and animated by divine reason. Consequently, one should cultivate apatheia and calmly accept one's fate. Leibniz even argued that we already live in the best of all possible worlds. The mindset of existential satisficing can also be found in Epicureanism and arguably in Buddhism. Lastly, religions like Christianity and Islam are generally against transhumanism, partly because it amounts to "playing God". This is understandable from their point of view: why bother fundamentally transforming the human condition if everything will be perfect in heaven anyway?

One has to grant ancient philosophers that they couldn't even imagine that one day humanity would acquire the technological means to fundamentally alter the human condition. Thus it is no wonder that Epicurus argued that death is not to be feared or that the Stoics believed that disease or poverty are not really bad: It is all too human to invent rationalizations for the desirability of actually undesirable, but (seemingly) inevitable things – be it death or the human condition itself.

But many contemporary intellectuals can't be given the benefit of the doubt. They argue explicitly against trying to change the human condition. To name a few: Bernard Williams believed that death gives life meaning. Francis Fukuyama called transhumanism the world's most dangerous idea. And even Richard Dawkins thinks that the fear of death is "whining" and that the desire for immortality is "presumptuous"11:

Be thankful that you have a life, and forsake your vain and presumptuous desire for a second one.

With all that said, "run-of-the-mill" transhumanism arguably still doesn't go far enough. There are at least two problems I can see: 1) Without a benevolent superintelligent singleton, "Moloch" (to use Scott Alexander's excellent wording) will never be defeated. 2) We are still uncertain about ontology, decision theory, epistemology and our own terminal values. Consequently, we need some kind of process that can help us understand those things, or we will probably fail to rearrange reality until it conforms with our idealized preferences.

Therefore, it could be argued that the ultimate goal is the creation of a benevolent superintelligence or Friendly AI (FAI) whose values are aligned with ours. There are of course numerous objections to the whole superintelligence strategy in general and to FAI in particular, but I won’t go into detail here because this essay is already too long.

Nevertheless – however unlikely – it seems possible that with the help of a benevolent superintelligence we could abolish all gratuitous suffering and achieve an optimal mode of existence. We could become posthuman beings with god-like intellects, our ecstasy outshining the surrounding stars, and transforming the universe until one happy day all wounds are healed, all despair dispelled and every (idealized) desire fulfilled. To many this seems like sentimental and wishful eschatological speculation but for me it amounts to ultimate existential maximizing12, 13.

Conclusion

The previous paragraphs shouldn't fool one into believing that maximizing has no serious disadvantages. The desire to aim higher, become stronger and always behave in an optimally goal-tracking way can easily result in psychological overload and subsequent surrender. Furthermore, it seems that adopting the mindset of a maximizer increases the tendency to engage in upward social comparisons and counterfactual thinking, which research has shown contribute to depression.

Moreover, there is much to be learnt from Stoicism and satisficing in general: Life isn't always perfect and there are things one cannot change; one should accept one's shortcomings – if they are indeed unalterable – and make the best of one's circumstances. In short, it is better to be a happy satisficer whose moderate productivity is sustainable than a stressed maximizer who burns out after one year. See also these two essays which make similar points.

All that being said, I still favor maximizing over satisficing. If our ancestors had all been satisficers we would still be picking lice off each other’s backs14. And only by means of existential maximizing can we hope to abolish the aforementioned existential evils and all needless suffering – even if the chances seem slim.

[Originally posted a longer, more personal version of this essay on my own blog]

Footnotes

[1] Obviously this is not a categorical classification, but a dimensional one.

[2] To put it more formally: The utility function of the ultimate satisficer would assign the same (positive) number to each possible world, i.e. the ultimate satisficer would be satisfied with every possible world. The fewer possible worlds you are satisfied with (i.e. the higher your threshold of acceptability), the fewer possible worlds there are between which you are indifferent, and the less of a satisficer and the more of a maximizer you are. Also note: Satisficing is not irrational in itself. Furthermore, I'm talking about the somewhat messy psychological characteristics and (revealed) preferences of human satisficers/maximizers. Read these posts if you want to know more about satisficing vs. maximizing with regard to AIs.
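One way to write down the footnote's point (the notation is mine): a satisficer with acceptability threshold $t$ over an underlying value function $v$ has the utility function

$$u_t(w) = \begin{cases} 1 & \text{if } v(w) \ge t, \\ 0 & \text{otherwise,} \end{cases}$$

so every world clearing the threshold is equally good. Raising $t$ shrinks this indifference class, and in the limit only the $v$-maximal worlds remain, recovering the maximizer.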

[3] Rational maximizers take the value of information and opportunity costs into account.

[4] Instead of "maximizer" I could also have used the term "optimizer".

[5] E.g. in the "Fat Man" version of the famous trolley dilemma, something like 90% of subjects don't push the fat man onto the track in order to save 5 other people. Also, utilitarians like Peter Singer don't exactly get rave reviews from most folks. Although there is some conflicting research (Johansson-Stenman, 2012). Furthermore, the deontology vs. utilitarianism distinction itself is limited. See e.g. "The Righteous Mind" by Jonathan Haidt.

[6] Of course, most people are not strict deontologists. They are also intuitive virtue ethicists and care about the consequences of their actions.

[7] Admittedly, one could argue that certain versions of deontology are about maximally not violating certain rules and thus could be viewed as ethical maximizing. However, in the space of all possible moral actions there exist many actions between which a deontologist is indifferent, namely all those actions that exceed the threshold of moral acceptability (i.e. those actions that are not violating any deontological rule). To illustrate this with an example: Visiting a friend and comforting him for 4 hours or using the same time to work and subsequently donating the earned money to a charity are both morally equivalent from the perspective of (many) deontological theories – as long as one doesn’t violate any deontological rule in the process. We can see that this parallels satisficing.

Contrast this with (classical) utilitarianism: In the space of all possible moral actions there is only one optimal moral action for a utilitarian, and all other actions are morally worse. An (ideal) utilitarian searches for and implements the optimal moral action (or tries to approximate it, because in real life one is basically never able to identify, let alone carry out, the optimal moral action). This amounts to maximizing. Interestingly, this inherent demandingness has often been put forward as a critique of utilitarianism (and other sorts of consequentialism), and satisficing consequentialism has been proposed as a solution (Slote, 1984). Further evidence for the claim that maximizing is generally viewed with suspicion.

[8] The obligatory word of caution here: following utilitarianism to the letter can be self-defeating if done in a naive way.

[9] Nick Bostrom (2014) expresses this point somewhat harshly:

A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't.

As a general point: Too many people end up as money-, academia-, career- or status-maximizers although those things often don’t reflect their (idealized) preferences.

[10] Of course there are lots of utopian movements like socialism, communism or the Zeitgeist movement. But all those movements make the fundamental mistake of ignoring, or at least heavily underestimating, the importance of human nature. Creating utopia merely through social means is impossible because most of us are, by our very nature, too selfish, status-obsessed and hypocritical, and cultural indoctrination can hardly change this. To deny this is to misunderstand the process of natural selection and evolutionary psychology. Secondly, even if a socialist utopia were to come true, there would still exist unrequited love, disease, depression and of course death. To abolish those things one has to radically transform the human condition itself.

[11] Here is another quote:

We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. [….] We privileged few, who won the lottery of birth against all odds, how dare we whine at our inevitable return to that prior state from which the vast majority have never stirred?

― Richard Dawkins in "Unweaving the Rainbow"

[12] It’s probably no coincidence that Yudkowsky named his blog "Optimize Literally Everything" which adequately encapsulates the sentiment I tried to express here.

[13] To those interested in or skeptical of the prospect of superintelligent AI, I recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.

[14] I stole this line from Bostrom’s “In Defense of Posthuman Dignity”.

References

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion. Random House LLC.

Johansson-Stenman, O. (2012). Are most people consequentialists? Economics Letters, 115(2), 225-228.

Kahneman, D., & Knetsch, J. L. (1992). Valuing public goods: The purchase of moral satisfaction. Journal of Environmental Economics and Management, 22(1), 57-70.

McGonigal, K. (2011). The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It. Penguin.

Mehl, M. R., Vazire, S., Holleran, S. E., & Clark, C. S. (2010). Eavesdropping on happiness: Well-being is related to having less small talk and more substantive conversations. Psychological Science, 21(4), 539-541.

Sachdeva, S., Iliev, R., & Medin, D. L. (2009). Sinning saints and saintly sinners: The paradox of moral self-regulation. Psychological Science, 20(4), 523-528.

Schnell, T. (2010). Existential indifference: Another quality of meaning in life. Journal of Humanistic Psychology, 50(3), 351-373.

Schwartz, B. (2000). Self-determination: The tyranny of freedom. American Psychologist, 55, 79–88.

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178.

Slote, M. (1984). “Satisficing Consequentialism”. Proceedings of the Aristotelian Society, 58: 139–63.

Why the tails come apart

115 Thrasymachus 01 August 2014 10:41PM

[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]

[Edit 2014/11/14: mainly adjustments and rewording in light of the many helpful comments below (thanks!). I've also added a geometric explanation.]

Many outcomes of interest have pretty good predictors. It seems that height correlates with performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan.

What's interesting is what happens to these relationships 'out on the tail': extreme outliers on a given predictor are seldom similarly extreme outliers on the outcome it predicts, and vice versa. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player who are not in the NBA. Although elite tennis players have very fast serves, if you look at the players serving the fastest serves ever recorded, they aren't the very best players of their time. It is harder to look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is far more extreme than that) (1).

The trend seems to be that even when two factors are correlated, their tails diverge: the fastest servers are good tennis players, but not the very best (and the very best players serve fast, but not the very fastest); the very richest tend to be smart, but not the very smartest (and vice versa). Why?

Too much of a good thing?

One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller at basketball is good up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe, although having a faster serve is better all else being equal, focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ carries an increased risk of productivity-reducing mental illness. Or something along those lines.

I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.

The simple graphical explanation

[Inspired by this essay from Grady Towers]

Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off Google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate:

It is unsurprising to see these are correlated (I'd guess the R-squared is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general: other datasets, again convenience-sampled by googling 'scatter plot', show the same pattern. [Scatter plot images not reproduced here.]

Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation gets stronger, and more circular as it gets weaker: (2)

The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': as the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie with sub-maximal y and x values respectively:

So this offers an explanation for why divergence at the tails is ubiquitous. Provided the sample size is reasonably large, and the correlation not too tight (the tighter the correlation, the larger the sample size required), one will observe ellipses with the bulging sides of the distribution. (3)
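A quick simulation makes the same point without pictures. This is a minimal sketch; the sample size and correlation are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100_000, 0.7  # illustrative sample size and correlation

# Correlated standard normals: y = r*x + sqrt(1 - r^2) * independent noise.
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

top_x = np.argmax(x)  # index of the most extreme predictor value
top_y = np.argmax(y)  # index of the most extreme outcome value
print(top_x == top_y)      # almost always False: the two maxima are different points
print(x[top_x], y[top_x])  # the maximal x comes with a sub-maximal y
print(x[top_y], y[top_y])  # the maximal y comes with a sub-maximal x
```

With a large sample and moderate correlation, the record-holder on one axis is essentially never the record-holder on the other.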

Hence the very best basketball players aren't the very tallest (and vice versa), the very wealthiest not the very smartest, and so on and so forth for any correlated X and Y. If X and Y are "Estimated effect size" and "Actual effect size", or "Performance at T", and "Performance at T+n", then you have a graphical display of winner's curse and regression to the mean.

An intuitive explanation of the graphical explanation

It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:

The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility and hand-eye coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard-working, being lucky, and so on.

For a toy model, pretend that wealth is wholly explained by two factors: intelligence and conscientiousness. Let's also say these are equally important to the outcome, independent of one another, and normally distributed. (4) So, ceteris paribus, being more intelligent will make one richer, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between intelligence and conscientiousness, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very smartest shouldn't be the very richest.

The intuitive explanation would go like this: start at the extreme tail - +4SD above the mean for intelligence, say. Although this gives them a massive boost to their wealth, we'd expect them to be average with respect to conscientiousness (we've stipulated they're independent). Further, as this ultra-smart population is small, we'd expect them to fall close to the average in this other independent factor: with 10 people at +4SD, you wouldn't expect any of them to be +2SD in conscientiousness.

Move down the tail to less extremely smart people - +3SD, say. These people don't get such a boost to their wealth from their intelligence, but there should be a lot more of them (if 10 at +4SD, around 500 at +3SD). This means one should expect more variation in conscientiousness - it is much less surprising to find someone +3SD in intelligence and also +2SD in conscientiousness, and in the world where these things are equally important, they would 'beat' someone +4SD in intelligence but average in conscientiousness. Although a +4SD intelligence person will likely be wealthier than a given +3SD intelligence person (the mean conscientiousness in both populations is 0SD, and so the average wealth of the +4SD intelligence population is 1SD higher than that of the +3SD intelligence people), the wealthiest of the +4SDs will not be as wealthy as the best of the much larger number of +3SDs. The same sort of story emerges when we look at larger numbers of factors, and in cases where the factors contribute unequally to the outcome of interest.
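Here is the toy model run as a simulation, a sketch under exactly the stipulated assumptions (two independent, equally weighted, normally distributed factors); the population size is my choice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # large enough to populate the +3SD and +4SD bands

iq = rng.standard_normal(n)     # intelligence, in SD units
consc = rng.standard_normal(n)  # conscientiousness, independent of intelligence
wealth = iq + consc             # equally important factors, no hidden trade-offs

band3 = (iq >= 3) & (iq < 4)    # the 'merely' very smart
band4 = iq >= 4                 # the extremely smart
print(band3.sum(), band4.sum()) # roughly 1300 vs 30: the +3SD band is far larger
print(wealth[band3].max())      # the best of the many +3SDs...
print(wealth[band4].max())      # ...typically beats the best of the few +4SDs
```

Across reruns with different seeds, the wealthiest person in the whole population is almost never drawn from the +4SD intelligence band.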

When looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky' with the other factors:

So that's why the tails diverge.

 

A parallel geometric explanation

There's also a geometric explanation. The correlation coefficient r between two sets of data is the same as the cosine of the angle between them when they are presented as mean-centered vectors in N-dimensional space (explanations, derivations, and elaborations here, here, and here). (5) So here's another intuitive handle for tail divergence:

Grant a factor correlated with an outcome, which we represent as two vectors separated by an angle theta whose cosine equals the correlation coefficient r. Reading off the expected outcome given a factor score is just moving along the factor vector and multiplying by cos theta to get the distance along the outcome vector. As cos theta is never greater than 1, we see regression to the mean. The geometrical analogue to the tails coming apart is that the absolute difference between the length along the factor and the corresponding length along the outcome scales with the length along the factor: the gap between extreme values of a factor and the less extreme values of the outcome grows linearly as the factor value gets more extreme. For concreteness (and granting normality), r = 0.5 (corresponding to an angle of sixty degrees) means that +4SD (~1/15000) on a factor will be expected to be 'merely' +2SD (~1/40) on the outcome - and r = 0.5 is remarkably strong in the social sciences, even though it accounts for only a quarter of the variance.(6) The reverse - extreme outliers on the outcome are not expected to be such extreme outliers on a given contributing factor - follows by symmetry.
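A short numerical check of the vector picture, with r = 0.5 chosen to match the sixty-degree example (a sketch; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
r = 0.5
x = rng.standard_normal(5000)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(5000)

# Pearson r is the cosine of the angle between the mean-centered data vectors.
xc, yc = x - x.mean(), y - y.mean()
cos_theta = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))
print(cos_theta, np.corrcoef(x, y)[0, 1])  # the two numbers agree
print(np.degrees(np.arccos(cos_theta)))    # close to 60 degrees
print(4 * cos_theta)                       # a +4SD factor score predicts a ~+2SD outcome
```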

 

Endnote: EA relevance

I think this is interesting in and of itself, but it also has relevance to Effective Altruism, given that EA generally focuses on the right tail of various things (What are the most effective charities? What is the best career? etc.). It generally vindicates worries about regression to the mean and the winner's curse, and suggests that these will be pretty insoluble in all cases where the populations are large: even if you have really good means of assessing the best charities or the best careers, so that your assessments correlate really strongly with which ones actually are the best, the very best ones you identify are unlikely to actually be the very best, as the tails will diverge.

This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)

There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns when its marginal effectiveness dips below charity #2, we should be willing to spread funds sooner.(7) Mainly, though, it should lead us to be less self-confident.


1. Given income isn't normally distributed, using SDs might be misleading. But non-parametric ranking gives a similar picture: if Bill Gates is ~+4SD in intelligence, despite being the richest man in America, he is 'merely' in the smartest tens of thousands. Looking the other way, one might point at the generally modest achievements of people in high-IQ societies, but there are worries about adverse selection.

2. As nshepperd notes below, this depends on something like the multivariate CLT. I'm pretty sure this can be weakened: all that is needed, by the lights of my graphical intuition, is that the envelope be concave. It is also worth clarifying that the 'envelope' is only meant to illustrate the shape of the distribution, rather than being some boundary that contains the entire probability density: as suggested by homunq, it is a 'pdf isobar', where probability density is higher inside the line than outside it.

3. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The Old Faithful case is an example where you actually do get a 'point', although it is likely an outlier.

 

4. It's clear that this model is fairly easy to extend to >2 factor cases, but it is worth noting that in cases where the factors are positively correlated, one would need to take whatever components of the factors are independent of one another.

5. My intuition is that in Cartesian coordinates the correlation coefficient between X and Y is actually also the cosine of the angle between the regression lines of X on Y and Y on X. But I can't see an obvious derivation, and I'm too lazy to demonstrate it myself. Sorry!

6. Another intuitive dividend is that this makes it clear why you can multiply by r to move between z-scores of correlated normal variables, which wasn't straightforwardly obvious to me.

7. I'd intuit (but again, can't demonstrate) that the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low-probability channels, like averting a very specific existential risk.

Effective Altruism Summit is One Month Away

6 Nevin 08 July 2014 11:39PM

This is a followup to Ben's post announcing the 2014 EA Summit.

The Effective Altruism Summit is now exactly one month away.

This year, 175 EAs will gather in Berkeley, CA for a two-day conference -- the largest gathering of people in the EA movement to date.

We still have spots left, and we are especially interested in having people who are new to the movement and aren't yet working on something related to EA full-time. The event is a great place to meet everyone who is serious about EA, learn a whole lot about the different projects people are working on, build friendships, and start collaborations.


There will be people there from:
  • The Center for Applied Rationality
  • GiveWell
  • GiveDirectly
  • The Machine Intelligence Research Institute
  • The Future of Humanity Institute
  • The Life You Can Save
  • 80,000 Hours
  • Giving What We Can
  • Leverage Research
-- and others.

The event costs $600, but we can offer discounted tickets to people who can't pay full price. If you are interested in coming but money is a barrier, please don't be shy -- let us know through the form on the website and we will do everything we can to get you a spot. If you can pay full price, you'll be helping to cover costs for other EAs. None of the organizations involved will profit from the event.

You can get more info and register to attend by filling out the form on the summit website.

The Retreat, mentioned in Ben's previous post, is full. If you note interest in the retreat on your Summit registration, we'll let you know if any space opens up last-minute.

Questions? Email effectivealtruismsummit@gmail.com for fastest response, or post in this thread for public response.

Introducing .impact

18 peter_hurford 05 January 2014 03:35PM

.impact is a new network of volunteers coordinating effective altruist projects.

There are many project ideas that could be really useful for the effective altruist community. There are people with the skills and free time to make things happen but who lack guidance or support. .impact aims to provide infrastructure to get people and useful projects together. We hope to help volunteers learn useful skills, meet great people, and create something substantial.  

We're soon launching Skillshare.im, a place to share skills and services for free. We've collaborated on several Trello boards to organize projects, research topics, and useful resources. We've brainstormed and started outlining projects like a vegetarian advocacy study, an EA wiki, and argument mapping software. We've had several weekly group hangouts and discussions with a variety of individuals. Most of our general discussion takes place in our Facebook group, which now has 114 members and seems to be growing organically at a rate of 5 per week.

 

Our Purpose and Values

We're guided first and foremost by a desire to do the most good. This is our purpose. But it's difficult to do this without having some additional values. The following are heuristics we think will best guide future volunteering in order to optimize our purpose. These will be revised as we gain experience.

We value action.  "Help people" is a good rule, and it’s often a more useful one than "understand how to help people optimally." There appears to be a lot of low-hanging fruit—we can achieve a lot by simply motivating people to do something.

We value effectiveness. We encourage and promote projects according to our expectations of their impact and probability of success. We vet and brainstorm ideas before putting them into action. We use lean methodology to get things out quickly and then decide whether to expand, pivot, or end a project.

We value openness and transparency. Our meetings and projects are documented; published work is open source or creative commons. We’ll release information on the success of applications, and we’ll publish lessons we’ve learned on our wiki and blog.

We value decentralization. We believe that volunteers will do best with little outside authority. We will try to limit individual ideologies in favor of collective opinions. Important decisions will be made by voting whenever possible.

 

Get Involved

Interested in working on a project? Already working away on something, but want more support? Interested in learning a particular skill, like computer programming or research? We need you!

If you would like to meet the existing community or would like help finding a project we’d be happy to talk to you. You can also join our Facebook group or look through our Trello board of projects.

-

(Also cross-posted on our website.)

Optimizing Fuzzies And Utilons: The Altruism Chip Jar

95 orthonormal 01 January 2011 06:53PM

Related: Purchase Fuzzies and Utilons Separately

We genuinely want to do good in the world; but also, we want to feel as if we're doing good, via heuristics that have been hammered into our brains over the course of our social evolution. The interaction between these impulses (in areas like scope insensitivity, refusal to quantify sacred values, etc.) can lead to massive diminution of charitable impact, and can also suck the fun out of the whole process. Even if it's much better to write a big check at the end of the year to the charity with the greatest expected impact than it is to take off work every Thursday afternoon and volunteer at the pet pound, it sure doesn't feel as rewarding. And of course, we're very good at finding excuses to stop doing costly things that don't feel rewarding, or at least to put them off.

But if there's one thing I've learned here, it's that lamenting our irrationality should wait until we've properly searched for a good hack. And I think I've found one.

Not just that, but I've tested it out for you already.

This summer, I had just gone through the usual experience of being asked for money for a nice but inefficient cause, turning them down, and feeling a bit bad about it. I made a mental note to donate some money to a more efficient cause, but worried that I'd forget about it; it's too much work to make a bunch of small donations over the year (plus, if done by credit card, the fees take a bigger cut that way) and there's no way I'd remember that day at the end of the year.

Unless, that is, I found some way to keep track of it.

So I made up several jars with the names of charities I found efficient (SIAI and VillageReach) and kept a bunch of poker chips near them. Starting then, whenever I felt like doing a good deed (and especially if I'd passed up an opportunity to do a less efficient one), I'd take a chip of an appropriate value and toss it in the jar of my choice. I have to say, this gave me much more in the way of warm fuzzies than if I'd just waited and made up a number at the end of the year.

And now I've added up and made my contributions: $1,370 to SIAI and $566 to VillageReach.

continue reading »

Circular Altruism vs. Personal Preference

7 Vladimir_Nesov 26 October 2009 01:43AM

Suppose there is a diagnostic procedure that allows one to catch a relatively rare disease with absolute precision. If left untreated, the disease is fatal, but when diagnosed it's easily treatable (I suppose there are some real-world approximations). The diagnostic involves an uncomfortable procedure and an inevitable loss of time. At what a priori probability of having the disease would you not care to take the test, leaving the outcome to chance? Say you decide it's 0.0001%.

Enter timeless decision theory. Your decision to take or not take the test may as well be considered a decision for the whole population (let's also assume you are typical and everyone is similar in this decision). By deciding not to take the test personally, you've decided that most people won't take the test, and thus, for example, with 0.00005% of the population having the condition, about 3000 people will die. While the personal tradeoff is fixed, this number obviously depends on the size of the population.
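For concreteness, the arithmetic behind that figure, using the 6-billion population that appears below:

$$0.00005\% \times 6 \times 10^{9} = 5 \times 10^{-7} \times 6 \times 10^{9} = 3000$$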

It seems like a horrible thing to do, making a decision that results in 3000 deaths. Thus, taking the test seems like a small personal sacrifice for this gift to others. Yet this is circular: everyone would be thinking that, reversing their decision solely to help others, not benefiting personally. Nobody benefits.

Obviously, together with the 3000 lives saved, there is the cost of 6 billion people taking the test, and that harm is also part of the outcome chosen by the decision. If everyone personally prefers not to take the test, then inflicting the opposite on the whole population is only so much worse.

Or is it?

continue reading »

Rationalists lose when others choose

-10 PhilGoetz 16 June 2009 05:50PM

At various times, we've argued over whether rationalists always win.  I posed Augustine's paradox of optimal repentance to argue that, in some situations, rationalists lose.  One criticism of that paradox is that its strongest forms posit a God who penalizes people for being rational.  My response was, So what?  Who ever said that nature, or people, don't penalize rationality?

There are instances where nature penalizes the rational.  For instance, revenge is irrational, but being thought of as someone who would take revenge gives advantages.1

continue reading »

Altruist Coordination -- Central Station

5 MBlume 27 March 2009 10:24PM

Related to: Can Humanism Match Religion's Output?

I thought it would be helpful for us to have a central space to pool information about various organizations to which we might give our money and/or time.  Honestly, a wiki would be ideal, but it seems this should do nicely.

Comment to this post with the name of an organization, and a direct link to where we can donate to them.  Provide a summary of the group's goals, and their plans for reaching them.  If you can link to outside confirmation of the group's efficiency and effectiveness, please do so.

Respond to these comments adding information about the named group, whether to criticize or praise it.

Hopefully, with the voting system, we should be able to collect the most relevant information we have available reasonably quickly.

If you choose to contribute to a group, respond to that group's comment with a dollar amount, so that we can all see how much we have raised for each organization.

Feel free to replace "dollar amount" with "dollar amount/month" in the above, if you wish to make such a commitment.  Please do not do this unless you are (>95%) confident that said commitment will last at least a year.

If possible, mention this page, or this site, while donating.

Soulless morality

20 PhilGoetz 14 March 2009 09:48PM

Follow-up to: So you say you're an altruist

The responses to So you say you're an altruist indicate that people have split their values into two categories:

  1. values they use to decide what they want
  2. values that are admissible for moral reasoning

(where 2 is probably a subset of 1 for atheists, and probably nearly disjoint from 1 for Presbyterians).

You're reading Less Wrong.  You're a rationalist.  You've put a lot of effort into education, and learning the truth about the world.  You value knowledge and rationality and truth a lot.

Someone says you should send all your money to Africa, because this will result in more human lives.

What happened to the value you placed on knowledge and rationality?

There is little chance that any of the people you save in Africa will get a good post-graduate education and then follow that up by rejecting religion, embracing rationality, and writing Less Wrong posts.

Here you are, spending a part of your precious life reading Less Wrong.  If you spend 10% of your life on the Web, you are saying that that activity is worth at least 1/10th of a life, and that lives with no access to the Web are worth less than lives with access.  If you value rationality, then lives lived rationally are more valuable than lives lived irrationally.  If you think something has a value, you have to give it the same value in every equation.  Not doing so is immoral.  You can't use different value scales for everyday and moral reasoning.

Society tells you to work to make yourself more valuable.  Then it tells you that when you reason morally, you must assume that all lives are equally valuable.  You can't have it both ways.  If all lives have equal value, we shouldn't criticize someone who decides to become a drug addict on welfare.  Value is value, regardless of which equation it's in at the moment.

continue reading »

Higher Purpose

26 Eliezer_Yudkowsky 23 January 2009 09:58AM

Followup to: Something to Protect, Superhero Bias

Long-time readers will recall that I've long been uncomfortable with the idea that you can adopt a Cause as a hedonic accessory:

"Unhappy people are told that they need a 'purpose in life', so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes."

But conversely it's also a fact that having a Purpose In Life consistently shows up as something that increases happiness, as measured by reported subjective well-being.

One presumes that this works equally well hedonically no matter how misguided that Purpose In Life may be—no matter if it is actually doing harm—no matter if the means are as cheap as prayer.  Presumably, all that matters for your happiness is that you believe in it.  So you had better not question overmuch whether you're really being effective; that would disturb the warm glow of satisfaction you paid for.

And here we verge on Zen, because you can't deliberately pursue "a purpose that takes you outside yourself", in order to take yourself outside yourself.  That's still all about you.

Which is the whole Western concept of "spirituality" that I despise:  You need a higher purpose so that you can be emotionally healthy.  The external world is just a stream of victims for you to rescue.

continue reading »
