The red paperclip theory of status

41 Morendil 12 July 2010 11:08PM

Followup to: The Many Faces of Status (This post co-authored by Morendil and Kaj Sotala - see note at end of post.)

In brief: status is a measure of general purpose optimization power in complex social domains, mediated by "power conversions" or "status conversions".

What is status?

Kaj previously proposed a definition of status as "the ability to control (or influence) the group", but several people pointed out shortcomings in that. One can influence a group without having status, or have status without having influence. As a glaring counterexample, planting a bomb is definitely a way of influencing a group's behavior, but few would consider it to be a sign of status.

But the argument of status as optimization power can be made to work with a couple of additional assumptions. By "optimization power", recall that we mean "the ability to steer the future in a preferred direction". In general, we recognize optimization power after the fact by looking at outcomes. Improbable outcomes that rank high in an agent's preferences attest to that agent's power. For the purposes of this post, we can in fact use "status" and "power" interchangeably.

In the most general sense, status is the general purpose ability to influence a group. An analogy to intelligence is useful here. A chess computer is very skilled in the domain of chess, but has no skill in any other domain. Intuitively, we feel that a chess computer is not intelligent, because it has no cross-domain intelligence. Likewise, while planting bombs is a very effective way of causing certain kinds of behavior in groups, intuitively it doesn't feel like status because it can only be effectively applied to a very narrow set of goals. In contrast, someone with high status in a social group can push the group towards a variety of different goals. We call one type of general purpose optimization power "intelligence", and another type of general purpose optimization power "status". Still, the ability to make excellent chess moves is a form of intelligence - just a very narrow one.

continue reading »

If You Like This Orange...

-27 [deleted] 01 April 2015 02:42AM

If you like this orange you must like that orange.  Well, maybe.  Tastes change, and maybe I already had an orange a little while ago, and maybe I'm not in the mood while someone else would be glad to have it, so it doesn't follow that because I liked this orange I must like that orange.

Comparing oranges and oranges seems like a set of two objects, but it's really four.  There's you, there's the orange, there's the other orange, and there's the perceived relation between you and the two oranges.  When it's just you and the oranges, things usually find a simple way to work themselves out.

But when someone else comes into the room it's seldom oranges and oranges.   Other people are ever ready to tell you what you like.  If you like this orange you must like that apple, because they're both fruit.  Nah, can't stand apples unless they are baked.  It doesn't matter that they are both fruit, I don't care for apples.  Then the helping helpers will infer the inverse.  If you like this orange you can't like that apple.  Watch me - I'll like an apple just to spite you, or choke it down because there aren't any oranges to be had.

The nonsense comparisons just get more nonsensical.  If you like this orange you must like that color orange, you must!  That's the way it's always gone!  Well, I say if you like this orange you must like that porcupine.  See how silly it sounds?  As long as someone sees that fourth object in the set, a connection between the two things and you, they will hard-sell you that the orange and the very-not-orange are fully fungible.

That fourth object in the set, the perceived relation between the other three, gets its power from being invisible and assumed.  The assumption of relations in the set overpowers all the other objects in the set.  If you like this orange you are an orange-ist, because there's (a) you (b) the orange (c) your liking of the orange and (d) anybody that likes that orange is an orange-ist, that's the relation between you and the orange caused by your liking it.  The invisible fourth object in the set, the assumption of a relation, is now a stand-in for you.  You are no longer a person who in one place, in one time, in one way, liked an orange.  You are an orange-ist.

If you are friends with that guy / read that book, and that guy / book espoused that idea, and that whole other guy with that idea did that thing, then you did that thing!  The four step process of replacing the man with a mannequin is the start of superstition.  Religion is realized in the replacement of the representation for the real.  Hard to believe that belief is so beleaguered, but right here on this very planet in this very year there are nations where if you draw the wrong cartoon, read the wrong poem, or question the wrong answer, you go to prison.  Or worse.

Here's how they make the rotten trolley run.  If you said this one thing this one time then you believe - no, you are - this other thing.  A clergyman is not only a clergyman, they are a Good Person.  Good People do Good Deeds, and if the clergyman doesn't do good deeds, or if he does bad deeds, well, he's still a Good Person.  All four stations of Goodnessity are there: the clergyman, the Good Deeds clergymen are associated with, Good Deeds associated with Good People, and hallelujah! clergymen are Good People.  And oh my but the four stations of Badnessism are there as well.  If you tell that one joke then you're a Bad Person.  That joke has the Bad Word in it, Bad People use that Bad Word, Bad People do Bad Deeds, so you did a Bad Deed!

It's four things. You, that thing you like, another thing and the proposed connection between the things. That connection is presented as more important than you.  The evidence shows that nothing is more to me than myself.  I'd not be here to tell you if this was not the case.  What other people think and do about me has its influences, but I don't confuse that with right or wrong or especially not Rights and Sins.  Egoism is the school of thought closest to my own, and that association draws from my own luster.

The pressure to be packed in a package deal comes in many forms.  Don't like too many kinds of art or music, be part of a scene.  Don't hold political or philosophical views, be a member of a party or a school.  Don't be online, be in a social network.  And most of all don't have a yen for truth, beauty and strength - be spiritual.

When the crowd crowns you with a trait, you're trapped.  To be identified as a whole by one of your parts is cutting.  Oh you're a massage therapist?  I have this pinch in my back.  You're a car mechanic?  You know, my car is just outside.  You do stand-up?  Tell me a joke, funny guy.  I heard you're a porn star, is that right?  Let's see those tits.  So you're a professional wrestler, eh?  I like that other wrestler better, the nice guy.  In every variation we are made out to be not ourselves but the thing other people think we are.  Man, that dude's a racist.  Heil Hitler, you cartoon-drawer!  Her over there, she has a suicidal level of self-hatred and is an active enemy of all women.  She quit her job to be a mom when she was in her 20s.  There's something just creepy about that family down the hall, they're always happy.  Yeah, they're Mormons.  Fake vegan meat supports the aesthetic of carnivore culture.  No one more intolerant than the loud champions of toleration, no one more ready to divide than the unifiers of diversity.

In the United States, a slave knew he had a place: that of a slave.  In India, an Untouchable knew he had a place: that of an Untouchable.  The modern moral minders, from Stalin onward, developed a different delineator.  If you are seen to stray too far from the approved set of beliefs, you have no place.  You are to be stripped of your job, your career, your credentials, your home and your money.  The Good Guys in the White Hats are ever vigilant for any infraction.  Call them the improperatzzi.  What a remarkable coincidence that the virtue they advocate is the same as the group they are a member of.

I can't say I judge all men in all moments anew.  I've also decided not to ask you to do so.  That sounds too much like work.  I don't have the time or energy, much less the inclination, to always cast aside generalities, stereotypes, and biases.  In this very essay I may lump a whole spectrum of people I disagree with into the base categories of liars and fools.  But you and I both know some people are just jerks, and some people are solid citizens.  I'm a member of some groups, a friend of others.  Everyone I don't like has me in common.  If it suits me I'll give you a chance, but maybe I'm busy or angry that day and you're just going to be hidden behind what I think of you based on some other thing at some other time.  You'll live.  My opinion isn't even all that important to me.

The troubles come when people decide that those who are different aren't to live.  Except for liars and fools, everyone on the planet knows that the Religion of Peace currently holds the title belt for murdering those who think or act differently than they do.  I keep hearing that there's a majority of Muslims who aren't like that, but I also keep not hearing about what they are doing to enlighten their brothers and sisters who keep misunderstanding Islam in the same way, century after century.  Maybe the numbers are there for the majority to reform the minority, but let's see some action.  A sound public shaming is a good start, and in this regard I do my part.  But again - I limit myself to that most pathetic and un-magical of all activities, writing, when I disagree.  The beheaders, the child-rapers, the enslavers, the kidnappers, the hijackers, the perpetually aggrieved - the Muslims - not so much.

There's no controversy, only a nontroversy.  A man can like music by ADULT. and Mildred Bailey.  A man can know a great deal about far right politics without being of the far right.  A man can be interested in beliefs about UFOs without believing in UFOs.  The scolds and the bullies secretly know this but don't want you in on their game.  They know what is bad for other people because they've seen the evidence - but somehow, they saw the evidence and didn't suffer from the exposure.  They are good enough to tell you what's good for you, but you aren't.  No thank you, you pinch-faced busybodies, I'll decide for myself what I like and do and think and believe.  I'll even take my lumps for the luxury.

The heart wants what the heart wants.  So does the groin.  I've made up a name for those who think otherwise: quantisexual.  A quantisexual is deeply invested in quantifying sex.  Who can have sex with who, what the arrangement is named, who shares that name and who doesn't.  Who is doing it right, who is doing it right but for the wrong reasons, who is doing it all wrong.  Not satisfied with the real-life cooties you can get from sex, a quantisexual invents forms of ritual contamination and cleanliness.  If you have even one stray thought about your own sex, you're bisexual.  If you're bisexual then you're queer.  If you're queer then you have to support all the other queers in all their queeriosities.  Even if you don't have sex at all there's a whole slew of cooties you can accessorize yourself with like 'cis' and 'demisexual' and 'asexual.'  The name for a thing becomes more important than the thing itself, like sheets being more sexy than what goes on between them.  The alphabet soup of alt-sex has more rules and restrictions than the Roman Catholic Church.  Quantisexuality is a fetish.  Hip hip hooray if you were born that way or if, by pretending it's your thing, you get to join the right in-groups.  Sex will go on without your names for it.

Standing at the rich banquet of life, far too many go with a cuisine they've been gifted by someone not even alive to share the meal.  Only these foods go together, and only in this order, and in this amount.  Not because to do otherwise leads to sickness or death, but because, well, other people might... see...  See what?  Me getting a few of these and a few of those, concerned less than they, enjoying more than they.  You do go on if you must keep kosher, hold halal and avoid fish on Friday.  All the more for me, pal, or maybe I'll just have a bite and be done.  What we do and like isn't limited to one item from column A and two items from column B.  Life is not a family meal or a package deal.  Beliefs and interests are all a big mess and probably not very important, so pull them together in a way that makes sense to you.  Just don't insist I sign on to your supper club.

The thing you like is the thing you like.  You didn't used to like it, and maybe you won't like it later.  You don't have to explain or understand it.  You don't have to get my approval for it.  If it stops working for you, you stop working for it.  Move on, and I'll be doing the same.

- Trevor Blake is the author of Confessions of a Failed Egoist.

Slate Star Codex: alternative comment threads on LessWrong?

28 tog 27 March 2015 09:05PM

Like many Less Wrong readers, I greatly enjoy Slate Star Codex; there's a large overlap in readership. However, the comments there are far worse, not worth reading for me. I think this is in part due to the lack of LW-style up- and downvotes. Have there ever been discussion threads about SSC posts here on LW? What do people think of the idea of occasionally having them? Does Scott himself have any views on this, and would he be OK with it?

Update:

The latest from Scott:

I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"

In this thread some have also argued for not posting the most hot-button political writings.

Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice"

Defeating the Villain

29 Zubon 26 March 2015 09:43PM

We have a recurring theme in the greater Less Wrong community that life should be more like a high fantasy novel. Maybe that is to be expected when a quarter of the community came here via Harry Potter fanfiction. We also have rationalist group houses named after fantasy locations, descriptions of community members in terms of character archetypes and PCs versus NPCs, and semi-serious development of the new atheist gods - feel free to contribute your favorites in the comments.

A failure mode common to high fantasy novels as well as politics is solving all our problems by defeating the villain. Actually, this is a common narrative structure for our entire storytelling species, and it works well as a narrative structure. The story needs conflict, so we pit a sympathetic protagonist against a compelling antagonist, and we reach a satisfying climax when the two come into direct conflict, good conquers evil, and we live happily ever after.

This isn't an article about whether your opponent really is a villain. Let's make the (large) assumption that you have legitimately identified a villain who is doing evil things. They certainly exist in the world. Defeating this villain is a legitimate goal.

And then what?

Defeating the villain is rarely enough. Building is harder than destroying, and it is very unlikely that something good will spontaneously fill the void when something evil is taken away. It is also insufficient to speak in vague generalities about the ideals to which the post-[whatever] society will adhere. How are you going to avoid the problems caused by whatever you are eliminating, and how are you going to successfully transition from evil to good?

In fantasy novels, this is rarely an issue. The story ends shortly after the climax, either with good ascending or time-skipping to a society made perfect off-camera. Sauron has been vanquished, the rightful king has been restored, cue epilogue(s). And then what? Has the Chosen One shown skill in diplomacy and economics, solving problems not involving swords? What was Aragorn's tax policy? Sauron managed to feed his armies from a wasteland; what kind of agricultural techniques do you have? And indeed, if the book/series needs a sequel, we find that a problem at least as bad as the original fills in the void.

Reality often follows that pattern. Marx explicitly had no plan for what happened after you smashed capitalism. Destroy the oppressors and then ... as it turns out, slightly different oppressors come in and generally kill a fair percentage of the population. It works in the other direction as well; the fall of Soviet communism led not to spontaneous capitalism but rather to kleptocracy and Vladimir Putin. For most of my lifetime, a major pillar of American foreign policy has seemed to be the overthrow of hostile dictators (end of plan). For example, Muammar Gaddafi was killed in 2011, and Libya has been in some state of unrest or civil war ever since. Maybe this is one where it would not be best to contribute our favorites in the comments.

This is not to say that you never get improvements that way. Aragorn can hardly be worse than Sauron. Regression to the mean perhaps suggests that you will get something less bad just by luck, as Putin seems clearly less bad than Stalin, although Stalin seems clearly worse than almost any other regime change in history. Some would say that causing civil wars in hostile countries is the goal rather than a failure of American foreign policy, which seems a darker sort of instrumental rationality.

Human flourishing is not the default state of affairs, temporarily suppressed by villainy. Spontaneous order is real, but it still needs institutions and social technology to support it.

Defeating the villain is a (possibly) necessary but (almost certainly) insufficient condition for bringing about good.

One thing I really like about this community is that projects tend to be conceived in the positive rather than the negative. Please keep developing your plans not only in terms of "this is a bad thing to be eliminated" but also "this is a better thing to be created" and "this is how I plan to get there."

Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion

37 emr 26 March 2015 12:14AM

(I hope that is the least click-baity title ever.)

Political topics elicit lower quality participation, holding the set of participants fixed. This is the thesis of "politics is the mind-killer".

Here's a separate effect: Political topics attract mind-killed participants. This can happen even when the initial participants are not mind-killed by the topic. 

Since outreach is important, this could be a good thing. Raise the sanity waterline! But the sea of people eager to enter political discussions is vast, and the epistemic problems can run deep. Of course not everyone needs to come perfectly pre-aligned with community norms, but any community will be limited in how robustly it can handle an influx of participants expecting a different set of norms. If you look at other forums, it seems to take very little overt contemporary political discussion before the whole place is swamped and politics becomes endemic. As appealing as "LW, but with slightly more contemporary politics" sounds, it's probably not even an option. The options are "LW, with politics in every thread" and "LW, with as little politics as we can manage".

That said, most of the problems are avoided by just not saying anything that pattern-matches too easily to current political issues. From what I can tell, LW has always had tons of meta-political content, which doesn't seem to cause problems, as well as standard political points presented in unusual ways, and contrarian political opinions that are too marginal to raise concern. Frankly, if you have a "no politics" norm, people will still talk about politics, but to a limited degree. But if you don't even half-heartedly (or even hypocritically) discourage politics, then an open-entry site that accepts general topics will risk spiraling too far in a political direction.

As an aside, I'm not apolitical. Although some people advance a more sweeping dismissal of the importance or utility of political debate, that isn't required to justify restricting politics in certain contexts. The sort of argument I've sketched (I don't want LW to be swamped by the worse sorts of people who can be attracted to political debate) is enough. There's no hypocrisy in not wanting politics on LW, but accepting political talk (and the warts it entails) elsewhere. Off the top of my head, Yvain is one LW affiliate who now largely writes about more politically charged topics on their own blog (SlateStarCodex), and there are some other progressive blogs in that direction. There are libertarians and right-leaning (reactionary? NRx-lbgt?) connections. I would love a grand unification as much as anyone (provided, of course, we all realize that I've been right all along), but please let's not tell the generals to bring their armies here for the negotiations.

I'm the new moderator

87 NancyLebovitz 13 January 2015 11:21PM

Viliam Bur made the announcement in Main, but not everyone checks main, so I'm repeating it here.

During the following months my time and attention will be heavily occupied by some personal stuff, so I will be unable to function as a LW moderator. The new LW moderator is... NancyLebovitz!

From today, please direct all your complaints and investigation requests to Nancy. Please not everyone during the first week. That can be a bit frightening for a new moderator.

There are a few old requests I haven't completed yet. I will try to close everything during the following days, but if I don't do it till the end of January, then I will forward the unfinished cases to Nancy, too.

Long live the new moderator!

Announcing the 2014 program equilibrium iterated PD tournament

24 tetronian2 31 July 2014 12:24PM

Last year, AlexMennen ran a prisoner's dilemma tournament with bots that could see each other's source code, which was dubbed a "program equilibrium" tournament. This year, I will be running a similar tournament. Here's how it's going to work: Anyone can submit a bot that plays the iterated PD against other bots. Bots can not only remember previous rounds, as in the standard iterated PD, but also run perfect simulations of their opponent before making a move. Please see the github repo for the full list of rules and a brief tutorial.

There are a few key differences this year:

1) The tournament is in Haskell rather than Scheme.

2) The time limit for each round is shorter (5 seconds rather than 10) but the penalty for not outputting Cooperate or Defect within the time limit has been reduced.

3) Bots cannot directly see each other's source code, but they can run their opponent, specifying the initial conditions of the simulation, and then observe the output.

All submissions should be emailed to pdtournament@gmail.com or PM'd to me here on LessWrong by September 15th, 2014. LW users with 50+ karma who want to participate but do not know Haskell can PM me with an algorithm/pseudocode, and I will translate it into a bot for them. (If there is a flood of such requests, I would appreciate some volunteers to help me out.)

Why the tails come apart

114 Thrasymachus 01 August 2014 10:41PM

[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]

[Edit 2014/11/14: mainly adjustments and rewording in light of the many helpful comments below (thanks!). I've also added a geometric explanation.]

Many outcomes of interest have pretty good predictors. It seems that height correlates with performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of outcomes, from income, to chance of being imprisoned, to lifespan.

What's interesting is what happens to these relationships 'out on the tail': extreme outliers on a given predictor are seldom similarly extreme outliers on the outcome it predicts, and vice versa. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player who are not in the NBA. Although elite tennis players have very fast serves, if you look at the players serving the fastest serves ever recorded, they aren't the very best players of their time. It is harder to look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is many more SDs above the mean than that) (1).

The trend seems to be that even when two factors are correlated, their tails diverge: the fastest servers are good tennis players, but not the very best (and the very best players serve fast, but not the very fastest); the very richest tend to be smart, but not the very smartest (and vice versa). Why?

Too much of a good thing?

One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller is good for basketball up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe, although a faster serve is better all else being equal, focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ brings an increased risk of productivity-reducing mental illness. Or something along those lines.

I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.

The simple graphical explanation

[Inspired by this essay from Grady Towers]

Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off Google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate:

It is unsurprising to see these are correlated (I'd guess the R-squared is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience-sampled by googling 'scatter plot'):

Or this:

Or this:

Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation gets stronger, and more circular as it gets weaker: (2)

The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': as the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie with sub-maximal y and x values respectively:

So this offers an explanation for why divergence at the tails is ubiquitous. Provided the sample size is largish, and the correlation not too tight (the tighter the correlation, the larger the sample size required), one will observe ellipses with the bulging sides of the distribution. (3)
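The bulging-ellipse picture is easy to reproduce by simulation. Here is a minimal sketch (the correlation of 0.7 and the sample size are my own illustrative choices, not figures from the post): draw correlated normal pairs and check whether the record-holder on one variable is also the record-holder on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.7          # assumed correlation strength (illustrative)
n = 100_000      # largish sample, as the argument requires

# Construct correlated standard normals: y = r*x + sqrt(1 - r^2) * noise
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

# The sample with the largest x is almost never the one with the largest y:
print("same individual tops both?", np.argmax(x) == np.argmax(y))
print("samples with higher y than the x-champion:",
      int((y > y[np.argmax(x)]).sum()))
```

Across seeds, the x-champion is typically beaten on y by many merely-very-high-x competitors, which is exactly the bulge at the corner of the ellipse.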

Hence the very best basketball players aren't the very tallest (and vice versa), the very wealthiest not the very smartest, and so on and so forth for any correlated X and Y. If X and Y are "Estimated effect size" and "Actual effect size", or "Performance at T", and "Performance at T+n", then you have a graphical display of winner's curse and regression to the mean.

An intuitive explanation of the graphical explanation

It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:

The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility, and hand-eye coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard-working, being lucky, and so on.

For a toy model, pretend that wealth is wholly explained by two factors: intelligence and conscientiousness. Let's also say these are equally important to the outcome, independent of one another and are normally distributed. (4) So, ceteris paribus, being more intelligent will make one richer, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between intelligence and conscientiousness, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very smartest shouldn't be the very richest.

The intuitive explanation would go like this: start at the extreme tail - people +4SD above the mean in intelligence, say. Although this gives them a massive boost to their wealth, we'd expect them to be average with respect to conscientiousness (we've stipulated the two are independent). Further, as this ultra-smart population is small, we'd expect its members to fall close to the average on this other, independent factor: with 10 people at +4SD, you wouldn't expect any of them to be +2SD in conscientiousness.

Move down the tail to less extremely smart people - +3SD, say. These people don't get as large a boost to their wealth from their intelligence, but there should be a lot more of them (if 10 at +4SD, around 500 at +3SD), which means one should expect more variation in conscientiousness - it is much less surprising to find someone +3SD in intelligence and also +2SD in conscientiousness, and in a world where these things are equally important, they would 'beat' someone +4SD in intelligence but average in conscientiousness. Although a +4SD-intelligence person will likely be wealthier than a given +3SD-intelligence person (the mean conscientiousness in both populations is 0SD, so the average wealth of the +4SD population is 1SD higher than that of the +3SD population), the wealthiest of the +4SDs will not be as wealthy as the best of the much larger number of +3SDs. The same sort of story emerges when we look at larger numbers of factors, and in cases where the factors contribute unequally to the outcome of interest.
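The toy model can be checked directly by simulation. A minimal sketch under the post's stipulated assumptions (two independent, equally weighted, normally distributed factors; the population size is my own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Toy model: wealth is wholly explained by two independent, equally
# weighted, normally distributed factors - no hidden trade-offs anywhere.
intelligence = rng.standard_normal(n)
conscientiousness = rng.standard_normal(n)
wealth = intelligence + conscientiousness

richest = np.argmax(wealth)
print("richest person's intelligence (SD):", round(intelligence[richest], 2))
print("highest intelligence in sample (SD):", round(intelligence.max(), 2))
```

Even with no trade-offs in the model, the richest person turns out to be very smart but (almost always) not the very smartest: they are the one who got a strong draw on both factors at once.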

When looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky' with the other factors:

So that's why the tails diverge.

 

A parallel geometric explanation

There's also a geometric explanation. The correlation coefficient r between two (mean-centered) sets of data is the same as the cosine of the angle between them when presented as vectors in N-dimensional space (explanations, derivations, and elaborations here, here, and here). (5) So here's another intuitive handle for tail divergence:

Grant a factor correlated with an outcome, which we represent as two vectors at an angle theta whose cosine equals r. Reading off the expected outcome given a factor score is just moving along the factor vector and multiplying by cos theta to get the distance along the outcome vector. As cos theta is never greater than 1, we see regression to the mean. The geometrical analogue to the tails coming apart is that the absolute gap between the length along the factor and the corresponding expected length along the outcome scales with the length along the factor: the gap between extreme values of a factor and the (less extreme) expected values of the outcome grows linearly as the factor value gets more extreme. For concreteness (and granting normality), an r of 0.5 (corresponding to an angle of sixty degrees) means that +4SD (~1/15,000) on a factor will be expected to be 'merely' +2SD (~1/40) on the outcome - and an r of 0.5 is remarkably strong in the social sciences, even though it accounts for only a quarter of the variance.(6) The reverse - extreme outliers on the outcome are not expected to be so extreme an outlier on a given contributing factor - follows by symmetry.
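This identity is easy to verify numerically. A sketch with random illustrative data: check that Pearson's r equals the cosine between the mean-centered data vectors, and that the expected outcome in SD units is r times the factor score.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = 0.5 * x + rng.standard_normal(500)   # noisy linear relation

# Pearson correlation coefficient
r = np.corrcoef(x, y)[0, 1]

# Cosine of the angle theta between the mean-centered data vectors
xc, yc = x - x.mean(), y - y.mean()
cos_theta = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

print(abs(r - cos_theta) < 1e-12)        # the two quantities coincide

# Regression to the mean in these terms: with r = 0.5, a +4SD factor
# score predicts a 'mere' +2SD expected outcome, so the factor-outcome
# gap (here 4 - 2 = 2SD) grows linearly with the factor score.
print("expected outcome at +4SD on the factor:", 0.5 * 4, "SD")
```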

 

Endnote: EA relevance

I think this is interesting in and of itself, but it also has relevance to Effective Altruism, given that EA generally focuses on the right tail of various things (What are the most effective charities? What is the best career? etc.). It generally vindicates worries about regression to the mean or the winner's curse, and suggests that these will be pretty insoluble whenever the populations are large: even if your means of assessing the best charities or the best careers are good enough that your assessments correlate strongly with which ones actually are the best, the very best ones you identify are unlikely to actually be the very best, as the tails will diverge.

This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)

There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns, until its marginal effectiveness dips below that of charity #2, we should be willing to spread funds sooner.(7) Mainly, though, it should lead us to be less self-confident.


1. Given that income isn't normally distributed, using SDs might be misleading, but non-parametric ranking gives a similar picture: if Bill Gates is ~+4SD in intelligence, then despite being the richest man in America, he is 'merely' in the smartest tens of thousands. Looking the other way, one might point to the generally modest achievements of members of high-IQ societies, though there are worries about adverse selection there.
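As a rough back-of-the-envelope check on the 'smartest tens of thousands' figure - a minimal sketch assuming a normal distribution of intelligence and a US population of roughly 320 million (both assumptions, not figures from the post):

```python
from math import erf, sqrt

def p_above(z):
    """One-tailed upper-tail probability of a standard normal."""
    return 0.5 * (1 - erf(z / sqrt(2)))

us_pop = 320e6  # rough US population (assumption)
for z in (3, 4):
    count = us_pop * p_above(z)
    print("+%dSD: p ~ %.1e, ~%.0f people in the US" % (z, p_above(z), count))
```

At +4SD the upper-tail probability is about 3.2e-5, giving on the order of ten thousand people - consistent with 'merely in the smartest tens of thousands'.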

2. As nshepperd notes below, this depends on something like the multivariate CLT. I'm pretty sure this can be weakened: all that is needed, by the lights of my graphical intuition, is that the envelope be concave. It is also worth clarifying that the 'envelope' is only meant to illustrate the shape of the distribution, rather than being some boundary that contains the entire probability density: as suggested by homunq, it is a 'pdf isobar', where probability density is higher inside the line than outside it.

3. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The Old Faithful case is an example where you actually do get a 'point', although it is likely an outlier.

 

4. It's clear that this model is fairly easy to extend to >2 factor cases, but it is worth noting that in cases where the factors are positively correlated, one would need to take whatever components of the factors are independent of one another.

5. My intuition is that in cartesian coordinates the correlation between X and Y is actually also the cosine of the angle between the regression lines of X on Y and Y on X. But I can't see an obvious derivation, and I'm too lazy to demonstrate it myself. Sorry!

6. Another intuitive dividend is that this makes it clear why you can multiply by r to move between z-scores of correlated normal variables, which wasn't straightforwardly obvious to me.
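For what it's worth, this identity falls out of the standard decomposition of a standardized bivariate normal pair into independent parts:

```latex
Z_Y = r\,Z_X + \sqrt{1 - r^{2}}\,\varepsilon,
\qquad \varepsilon \sim \mathcal{N}(0,1),\ \varepsilon \perp Z_X
\quad\Longrightarrow\quad
\mathbb{E}\left[ Z_Y \mid Z_X = z \right] = r z .
```

Conditioning on Z_X = z zeroes the epsilon term in expectation, which is exactly the 'multiply the z-score by r' rule; squaring the coefficient also shows why only a fraction r^2 of the variance is explained.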

7. I'd intuit, but again can't demonstrate, that the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low-probability channels, like averting a very specific existential risk.

Consider giving an explanation for your deletion this time around. "Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild"

3 Will_Newsome 08 July 2014 02:53AM

My stupid fanfic chapter was banned without explanation so I reposted it; somehow it was at +7 when it was deleted and I think silently deleting upvoted posts is a disservice to LessWrong. I requested that a justification be given in the comments if it were to be deleted again, so LessWrong readers could consider whether or not that justification is aligned with what they want from LessWrong. Also I would like to make clear that this fanfic is primarily a medium for explaining some ideas that people on LessWrong often ask me about; that it is also a lighthearted critique of Yudkowskyanism is secondary, and if need be I will change the premise so that the medium doesn't drown out the message. But really, I wouldn't think a lighthearted parody of a lighthearted parody would cause such offense.

 

The original post has been unbanned and can be found here, so I've edited this post to just be about the banning.

Downvote stalkers: Driving members away from the LessWrong community?

39 Ander 02 July 2014 12:40AM

Last month I saw this post: http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/ addressing whether the discussion on LessWrong was in decline.  As a relatively new user who had only just started to post comments, my reaction was: “I hope that LessWrong isn’t in decline, because the sequences are amazing, and I really like this community.  I should try to write a couple articles myself and post them!  Maybe I could do an analysis/summary of certain sequences posts, and discuss how they had helped me to change my mind”.   I started working on writing an article.

Then I logged into LessWrong and saw that my Karma value was roughly half of what it had been the day before.   Previously I hadn’t really cared much about Karma, aside from whatever micro-utilons of happiness it provided to see that the number slowly grew because people generally liked my comments.   Or at least, I thought I didn’t really care, until my lizard brain reflexes reacted to what it perceived as an assault on my person.

 

Had I posted something terrible and unpopular that had been massively downvoted during the several days since my previous login?  No, in fact my ‘past 30 days’ Karma was still positive.  Rather, it appeared that everything I had ever posted to LessWrong now had a -1 on it instead of a 0. Of course, my loss probably pales in comparison to that of other, more prolific posters who I have seen report this behavior.

So what controversial subject must I have commented on in order to trigger this assault? Well, let's see: in the past week I had asked if anyone had opinions on good software engineer interview questions I could ask a candidate; I posted in http://lesswrong.com/lw/kex/happiness_and_children/ that I was happy not to have children; and finally - in what appears to me to be by far the most promising candidate - I replied to a comment about global warming data in http://lesswrong.com/r/discussion/lw/keu/separating_the_roles_of_theory_and_direct/, stating that I routinely saw headlines about data supporting global warming.

 

Here is our scenario: a new user attempting to participate on a message board that values empiricism and rationality posted that the evidence supports climate change being real. (Wow, really rocking the boat here!) Then, apparently in an effort to 'win' the discussion by silencing opposition, someone went and downvoted every comment this user had ever made on the site. Apparently they would like to see LessWrong be a bastion of empiricism, rationality, and *climate change denial* instead? And the way to achieve this is not to have a fair and rational discussion of the existing empirical data, but rather to simply Karmassassinate anyone who would oppose them?

 

Here is my hypothesis: the continuing problem of karma downvote stalkers is contributing to the decline of discussion on the site. I definitely feel much less motivated to try to contribute anything now, and multiple people at LessWrong meetups have told me things such as "I used to post a lot on LessWrong, but then I posted X, and got mass-downvoted, so now I only comment on Yvain's blog". These anecdotes are, of course, only very weak evidence for my claim; I wish I could provide more, but I will have to defer to any readers who can.

 

Perhaps this post will simply trigger more retribution, or maybe it will trigger an outpouring of support, or perhaps it will just be dismissed by people saying I should have posted it in the weekly discussion thread instead. Whatever the outcome, rather than meekly leaving LessWrong and letting my 'stalker' win, I decided to open a discussion about the issue. Thank you!
