
Conservation of Expected Jury Probability

jkaufman 22 August 2014 03:25PM

The New York Times has a calculator to explain how getting on a jury works. It has a slider at the top indicating how likely each of the two lawyers thinks you are to side with them, and as you answer questions it moves around. For example, if you select that your occupation is "blue collar" it says "more likely to side with plaintiff", while "white collar" gives "more likely to side with defendant". As you give it more information, the pointer labeled "you" slides back and forth, representing the lawyers' ongoing revision of their estimates of you. Let's see what this looks like.

[Three screenshots: the initial pointer; after selecting "Over 30"; after selecting "Under 30". The two answers move the pointer in opposite directions.]

For several other questions, however, the options aren't matched. If your household income is under $50k then it will give you "more likely to side with plaintiff" while if it's over $50k then it will say "no effect on either lawyer". This is not how conservation of expected evidence works: if learning something pushes you in one direction, then learning its opposite has to push you in the other.
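In symbols (a standard statement of the theorem; the notation is mine, not the post's), the prior must equal the expectation of the posterior:

```latex
P(H) = P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E)
```

If P(H|E) > P(H), this weighted average can only come out to P(H) when P(H|not-E) < P(H).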

Let's try this with some numbers. Say people's leanings are:

income   P(sides with plaintiff)   P(sides with defendant)
>$50k    50%                       50%
<$50k    70%                       30%
Before asking your income, the lawyers' best guess is that you're equally likely to be earning over $50k as under $50k, because $50k is roughly the median [1]. This means they'd guess you're 60% likely to side with the plaintiff: half the people in your position earn over $50k and would be about evenly split, while the other half earn under $50k and would favor the plaintiff 70-30. Averaging the two cases gives 60%.

So the lawyers' best guess for you is 60%, and then they ask the question. If you say ">$50k" they update their estimate down to 50%; if you say "<$50k" they update it up to 70%. "No effect on either lawyer" can't be an option here unless the question gives no information.
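A quick numeric check of the example (variable names are mine):

```python
# Conservation of expected evidence, with the numbers from the table above.
p_over_50k = 0.5            # P(income > $50k); assumed, since $50k is ~median
p_plaintiff_if_over = 0.5   # P(sides with plaintiff | income > $50k)
p_plaintiff_if_under = 0.7  # P(sides with plaintiff | income < $50k)

# The prior must be the expectation of the posterior over the possible answers:
prior = p_over_50k * p_plaintiff_if_over + (1 - p_over_50k) * p_plaintiff_if_under
print(prior)  # 0.6

# Answering "<$50k" moves the estimate 0.6 -> 0.7, so answering ">$50k" must
# move it the other way, 0.6 -> 0.5; "no effect" for one answer is only
# possible if the question carries no information at all.
```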


[1] Almost; the median income in the US in 2012 was $51k. (pdf)

Comment author: jkaufman 31 July 2014 02:34:54PM 1 point

How is the tournament going to work in terms of what plays what? Is every bot going to play against every other bot? Is there a big pool, where successful bots get more copies and then we run for a while? Can a person submit multiple bots?

The best strategy depends heavily on the expected opponents. For example, a bot that tries to detect itself in order to cooperate only makes sense if it might play itself.
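To make that last point concrete, here's a toy sketch (entirely hypothetical; the source-code-as-string convention and the names are mine, not the tournament's): a self-recognizing strategy only pays off if the pairing scheme ever matches a bot against its own copy.

```python
# A bot that recognizes its own source code and cooperates only with itself.
def self_matcher(my_source, opponent_source):
    # Cooperate ("C") against an exact copy; defect ("D") against everyone else.
    return "C" if opponent_source == my_source else "D"

me = 'return "C" if opponent_source == my_source else "D"'
print(self_matcher(me, me))            # "C" -- only useful if self-play can occur
print(self_matcher(me, 'return "D"'))  # "D"
```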

Comment author: Wei_Dai 19 July 2014 08:28:14PM 15 points

In other words, the efficient market hypothesis. There is no way to beat the market.

EMH is the reason I didn't bother looking. All my money is in index funds, I told my parents to put all their money in index funds, etc. But after stumbling into assets with returns in the 100x-1000x range (or 100% to 500% annualized), twice, it seems time to update a bit.
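(As a check on how those two ranges relate, my arithmetic rather than the commenter's: a total multiple m held for t years annualizes to m^(1/t) - 1, so 100x at roughly 100%/yr implies about 6.6 years, and 1000x at roughly 500%/yr about 3.9 years.)

```python
# Annualized return implied by a total multiple over t years: (1 + r)^t = m.
def annualized(multiple, years):
    return multiple ** (1 / years) - 1

print(annualized(100, 6.64))   # ~1.00, i.e. about 100% per year
print(annualized(1000, 3.86))  # ~5.00, i.e. about 500% per year
```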

Comment author: jkaufman 23 July 2014 04:16:53PM 1 point

after stumbling into assets with returns in the 100x-1000x range, twice, it seems time to update a bit.

How many assets have you purchased that didn't turn out to be valuable? A friend just looked up the price on a Lego set that had been sitting in his room unopened since he was a kid (right after he opened it and built it...) and found it was worth ~500x what it originally sold for. But most toys people buy and leave unopened are going to be nearly worthless a decade later.

Comment author: army1987 22 July 2014 07:10:26PM 0 points

As I said: do you know how serious the offers are? Do you know why exactly weidai.com may be worth $100k?

I'm not sure what someone who wants to buy a domain name named after its current owner is thinking of doing with it, but I think there's a non-negligible chance it'd turn out to be something the namesake of the domain name wouldn't like at all.

Comment author: jkaufman 23 July 2014 04:11:08PM 2 points

I'd be somewhat worried about this if I were selling jefftk.com or something, but "wei" and "dai" without tones could mean many things. I don't remember much of my Chinese, but looking at a dictionary I see:

wei: place, seat, not, because, become, tiny, tail, yes, taste
dai: doctor, belt, dynasty, stay, wait, going to, bag, wear, dangerous, lazy

Now, not all of these combinations will mean what they look like they might mean, but there are a lot of reasonable things "wei dai" could mean aside from a person's name.

(It also looks like "wei dai" can mean "grave danger".)

Comment author: jkaufman 27 June 2014 08:31:12PM 16 points

I looked into this a couple years ago and wrote up what I found:

Summary: the research isn't that good, is all correlational, and how parenting affects your happiness varies widely by demographics (age, gender, income). Neither a simple "parenting makes people happy" nor a "parenting makes people miserable" are justified.

http://lesswrong.com/lw/erj/parenting_and_happiness/

Comment author: benkuhn 19 June 2014 02:55:30AM 4 points

I think you're being a little uncharitable to people who promote interventions that seem positional (e.g. greater educational attainment). It may be true that college degrees are purely for signalling and hence positional goods, but:

(a) it improves aggregate welfare for people to be able to send costly signals, so we shouldn't just get rid of college degrees;

(b) if an intervention improves college graduation rate, it (hopefully) is not doing this by handing out free diplomas, but rather by effecting some change in the subjects that makes them more capable of sending the costly signal of graduating from college, which is an absolute improvement.

Similarly, while height increase has no plausible mechanism for improving absolute wellbeing, some mechanisms for improving absolute wellbeing are measured using height as a proxy (most prominently nutritional status in developing countries).

It should definitely be a warning sign if an intervention seems only to promote a positional good, but it's more complex than it seems to determine what's actually positional.

Comment author: jkaufman 19 June 2014 03:21:26PM 1 point

"effecting some change in the subjects that makes them more capable of sending the costly signal of graduating from college, which is an absolute improvement"

It depends. Consider a government subsidy for college tuition. This increases the number of people who go to and then graduate college, but it also makes the signal less costly.
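One way to see this is with a toy Spence-style separating condition (my construction with illustrative numbers, not something from the thread): the degree separates high-ability from low-ability workers only while completing it costs the low-ability group more than the wage premium is worth, and a subsidy can erase that gap.

```python
# Toy separating-equilibrium check; all numbers are illustrative assumptions.
wage_premium = 20        # extra pay employers offer degree-holders
cost_high_ability = 10   # cost of finishing college for high-ability workers
cost_low_ability = 30    # higher cost for low-ability workers

def signal_separates(subsidy):
    # The degree separates iff high-ability workers find it worth getting
    # while low-ability workers still don't.
    return cost_high_ability - subsidy < wage_premium < cost_low_ability - subsidy

print(signal_separates(0))   # True: only high-ability workers enroll
print(signal_separates(15))  # False: the subsidized signal no longer separates
```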

But I basically agree with "it's more complex than it seems to determine what's actually positional". The difficulty of determining how much of an observed benefit is absolute vs positional is a lot of what I'm talking about here.

Comment author: somervta 19 June 2014 03:27:25AM 0 points

Because each additional dollar is less valuable, however, we would expect this transfer to make the group as a whole worse off.

grumble grumble only if the people the money went from were drawn from the same or similar distribution as the person it goes to.

Comment author: jkaufman 19 June 2014 03:15:10PM 1 point

only if the people the money went from were drawn from the same or similar distribution as the person it goes to

I wrote "take $1 from 10k randomly selected people and give that $10k to one randomly selected person". Reading it back, this implies you use the same distribution for both selections, but it sounds like that's not how you read it? How would you phrase this idea differently?

Comment author: sixes_and_sevens 18 June 2014 03:17:31PM 1 point

By curious coincidence I've been reading about positional goods elsewhere this week, and thinking along similar lines.

Are there any positional goods that aren't reasonably well-captured as signalling? There are various conditions that need to be in place in order for a signal to be of value, so if positional goods are principally a case of signalling, such conditions could offer some indication as to whether an intervention provides positional or intrinsic value.

ETA: I've just had a flip through the Wikipedia article for positional goods, and the "see also" section includes a link to Narcissistic Personality Disorder. There is no explanation on the talk page.

Comment author: jkaufman 18 June 2014 04:44:31PM 2 points

Consider the "take $1 from each of 10k people at random and give it all to another person chosen at random" example. The benefit there seems to be relative/positional but it's not a case of signaling.

Comment author: Lumifer 18 June 2014 04:22:33PM 3 points

Are there any positional goods that aren't reasonably well-captured as signalling?

Depends on whether you count signaling to yourself as signaling.

There are cases of rich people buying stolen art (and other collectables) that they would never be able to publicly admit owning. But presumably the ownership of that rare and hidden art piece warms the cockles of their hearts...

Comment author: jkaufman 18 June 2014 04:42:50PM 2 points

Can't they show off their stolen goods to particular other people, in confidence, indicating something like "I am so rich and ruthless that I have this amazing piece of stolen artwork, and I trust you enough to let you in on this secret even though you could destroy me with it"?

Relative and Absolute Benefit

12 jkaufman 18 June 2014 01:56PM

Someone comes to you claiming to have an intervention that dramatically improves life outcomes. They tell you that all people have some level of X, determined by a mixture of genetics and environment, and they show you evidence that their intervention is cheap and effective at increasing X, and separately that higher levels of X are correlated with greater life success. You're skeptical, so they show you there's a strong dose-response effect, but you're still not happy about the correlational nature of their evidence. So they go off and do a randomized controlled trial, applying their intervention to randomly chosen individuals and comparing their outcomes with those of people who don't receive the intervention. The improvement still shows up, and with a large effect size!

What's missing is evidence that the intervention helps people in an absolute sense, instead of simply by improving their relative social position. For example, say X is height, we're just looking at men, and we're getting them to wear lifts in their shoes. While taller men do earn more, and are generally more successful along various metrics, we don't think this is because being taller makes you smarter, healthier, or more conscientious. If all people became 1" taller it would be very inconvenient but we wouldn't expect this to affect people's life outcomes very much.

Attributes like X are also weird because they put parents in a strange position. If you're mostly but not completely altruistic you might want more X for your own child but think that campaigns to give X to other people's children are not useful: if X is just about relative position then for every person you "bring up" that way other people are slightly brought down in a way that balances the overall outcome to "basically no effect".

College degrees, especially in fields that don't directly teach skills in demand by employers, may belong in this category. Employers hire college graduates over high school graduates, and this hiring advantage does remain as you increase college enrollment, but if another 10% of people get English degrees, is everyone better off in aggregate?

Some interventions are pretty clearly not in this category. If an operation saves someone's life or cures them of something painful they're pretty clearly better off. The difference here is we have an absolute measurement of well-being, in this case "how healthy are you?", and we can see this remaining constant in the control group. Unfortunately, this isn't always enough: if our intervention was "take $1 from 10k randomly selected people and give that $10k to one randomly selected person" we would see that the person gaining $10k was better off, but not be able to see any harm to the other people, because the change in their situation was too small to measure with our tests. Because each additional dollar is less valuable, however, we would expect this transfer to make the group as a whole worse off. So "absolute measures of wellbeing apparently remaining constant in the control group" isn't enough.
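A minimal sketch of that transfer, assuming log utility (the post doesn't commit to a utility function, and the uniform $30k starting wealth is my assumption):

```python
import math

def total_utility(wealths):
    # Log utility: each additional dollar matters less than the one before.
    return sum(math.log(w) for w in wealths)

n = 10_000
before = [30_000.0] * (n + 1)                       # everyone starts equal
after = [30_000.0 - 1] * n + [30_000.0 + 10_000.0]  # the $1-from-each transfer

# Negative (~ -0.05): the group is worse off in aggregate, even though no
# individual $1 loss would show up on any absolute measure of wellbeing.
print(total_utility(after) - total_utility(before))
```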

How do we get around this? While we can't run an experiment with half the world's people as "treatment" and the other half as "control", one thing we can do is look at isolated groups where we really can apply the intervention to a large fraction of the people. Take the height example. If instead we were to randomly make half the people in a treatment population 1/2" taller, and this treatment population was embedded in a much larger society, the positional losses in the non-treatment group would be too diffuse to measure. But if we limit to one small community with limited churn and apply the treatment to half the people, then if (as I expect) it's entirely a relative benefit we should see the control group do worse on absolute measurements of wellbeing.
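Here's a sketch of that small-community design under toy assumptions (wellbeing is treated as purely positional, proportional to within-community height rank; the numbers and names are mine):

```python
import random

random.seed(0)
heights = [random.gauss(178.0, 8.0) for _ in range(100)]  # one small community, cm
treated = set(random.sample(range(100), 50))

def rank(hs, i):
    # Purely positional "wellbeing": fraction of the community you're taller than.
    return sum(h < hs[i] for h in hs) / len(hs)

before = [rank(heights, i) for i in range(100)]
after_heights = [h + (1.27 if i in treated else 0.0)  # +1/2" for the treated
                 for i, h in enumerate(heights)]
after = [rank(after_heights, i) for i in range(100)]

def mean_change(group):
    return sum(after[i] - before[i] for i in group) / len(group)

control = [i for i in range(100) if i not in treated]
# With a purely positional benefit the control group's measured loss exactly
# mirrors the treated group's gain; in a large society it would be too diffuse.
print(mean_change(treated), mean_change(control))
```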

Another way to avoid interventions that mostly give positional benefit is to keep mechanisms in mind. Height increase has no plausible mechanism for improving absolute wellbeing, while focused skills training does. This isn't ideal, because you can have non-intuitive mechanisms or miss the main way an intervention leads to your measured outcome, but it can still catch some of these.

What else can we do?

I also posted this on my blog.
