Comment author: Thrasymachus 03 August 2014 09:34:44PM 3 points [-]

Thanks for this important spot - I don't think it is a nitpick at all. I'm switching jobs at the moment, but I'll revise the post (and diagrams) in light of this. It might be a week though, sorry!

Comment author: homunq 22 August 2014 04:04:38PM *  2 points [-]

Bump.

(I realize you're busy, this is just a friendly reminder.)

Also, I added one clause to my comment above: the bit about "imperfectly measured", which is of course usually the case in the real world.

Comment author: homunq 02 August 2014 05:58:39PM *  10 points [-]

Great article overall. Regression to the mean is a key fact of statistics, and far too few people incorporate it into their intuition.

But there's a key misunderstanding in the second-to-last graph (the one with the drawn-in blue and red "outcome" and "factor"). The black line, indicating a correlation of 1, corresponds to nothing in reality. The true correlation is the line from the vertical tangent point at the right (marked) to the vertical tangent point at the left (unmarked). If causality indeed runs from "factor" (height) to "outcome" (skill), that's how much extra skill an extra helping of height will give you. Thus, the diagonal red line should follow this direction, not be parallel to the 45 degree black line. If you draw this line, you'll notice that each point on it has equal vertical distance to the top and bottom of the elliptical "envelope" (which is, of course, not a true envelope for all the probability mass, just an indication that probability density is higher for any point inside than any point outside).

Things are a little more complex if the correlation is due to a mutual cause, "reverse" causation (from "outcome" to "factor"), or if "factor" is imperfectly measured. In that case, the line connecting the vertical tangents may not correspond to anything in reality, though it's still what you should follow to get the "right" (minimum expected squared error) answer.

This may seem to be a nitpick, but to me, this kind of precision is key to getting your intuition right.

Comment author: Davidmanheim 13 January 2014 05:32:12PM 0 points [-]

Thank you for the clarification; despite this, cardinal utility is difficult because it assumes that we care about different preferences the same amount, or definably different amounts.

Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.

Comment author: homunq 16 March 2014 12:43:09PM 1 point [-]

No argument here. It's hard to build a good social welfare function in theory (i.e., even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was that it is a mistake to think that Arrow showed it was impossible.

(Also: I appreciate the "thank you", but it would feel more sincere if it came with an upvote.)

Comment author: Davidmanheim 03 December 2013 05:31:46PM *  4 points [-]

(Edit, Later. This is related to the top level replies by CarlShulman and V-V, but I think it's a more general issue, or at least a more general way of putting the same issues.)

I'm wondering about a different effect: over-quantification and false precision leading to bad choices in optimization as more effort goes into the most efficient utility maximization charities.

If we have metrics and optimize for them, anything our metrics distort or exclude will be disproportionately excluded from our conversation. For instance, if we agree that maximizing human health is important, and use evidence showing that something like fighting disease or hunger has a huge positive effect on human health, we can easily optimize our way into significant population growth, followed by a crash due to later resource constraints or food-production volatility, killing billions. (It is immaterial whether this describes reality; the phenomenon of myopic optimization still stands.)

Given that we advocate optimizing, are we, as rationalists, likely to fall prey to this sort of behavior when we pick metrics? If we don't understand the system more fully, the answer is probably yes; there will always be unanticipated side-effects in incompletely understood systems, by definition, and the more optimized a system becomes, the less stable it is to shocks.

More diverse investment in lower-priority goals and alternative ideals, meaning less optimization, much as currently occurs, seems likely to mitigate these problems.

Comment author: homunq 21 December 2013 07:29:23PM *  2 points [-]

I think you've done better than CarlShulman and V_V at expressing what I see as the most fundamental problem with EA: the fact that it is biased towards the easily- and short-term- measurable, while (it seems to me) the most effective interventions are often neither.

In other words: how do you avoid the pathologies of No Child Left Behind, where "reform" becomes synonymous with optimizing to a flawed (and ultimately, costly) metric?

This issue is touched by the original post, but not at all deeply.

Comment author: Davidmanheim 03 December 2013 05:44:07PM 3 points [-]

Many of these issues seem related to Arrow's impossibility theorem: if groups have genuinely different values and we optimize for one set rather than another, ants get tiny apartments and people starve, or we destroy the world economy because we discount too much, etc.

To clarify, I think LessWrong thinks most issues are simple because we know little about them; we want to just fix it. As an example, poverty isn't solved for good reasons: it's hard to balance incentives and growth and to deal with heterogeneity; there are absolute limits on current wealth and the ability to move it around; and nations and individuals have competing priorities. It's not unsolved because people are too stupid to give money to feed-the-poor charities. We underestimate the rest of the world because we're really good at one thing and think everyone is stupid for not being good at it - and even if we're right, we're not good at (understanding) many other things, and some of those things matter for fixing these problems.

Comment author: homunq 21 December 2013 07:23:09PM 4 points [-]

Note: Arrow's Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow's "impossible" criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow's theorem is based on a restriction to ordinal cases.)
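To make the cardinal point concrete, here is a minimal sketch (my own illustration, not anything from Arrow or this thread) of a cardinal social welfare function, essentially range/score voting: each voter reports a numeric utility for each alternative, and society ranks alternatives by the totals. Because the inputs are cardinal, the ordinal restriction behind Arrow's theorem no longer applies.

```python
# Sketch of a cardinal social welfare function (range/score voting).
# Each voter is a dict mapping alternative -> cardinal utility score.
def social_ranking(utilities):
    """Rank alternatives by summed cardinal utility, highest first."""
    totals = {}
    for voter in utilities:
        for alt, score in voter.items():
            totals[alt] = totals.get(alt, 0.0) + score
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical ballots for three voters over alternatives A, B, C.
voters = [{"A": 0.9, "B": 0.5, "C": 0.1},
          {"A": 0.2, "B": 0.8, "C": 0.3},
          {"A": 0.4, "B": 0.6, "C": 0.9}]
print(social_ranking(voters))  # ['B', 'A', 'C']
```

Whether such reported scores are honest or interpersonally comparable is a separate problem, which is where Gibbard-Satterthwaite-style strategic issues come back in.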

Comment author: DylanEvans 12 December 2013 06:31:34PM 5 points [-]

To my mind, the worst thing about the EA movement are its delusions of grandeur. Both individually and collectively, the EA people I have met display a staggering and quite sickening sense of their own self-importance. They think they are going to change the world, and yet they have almost nothing to show for their efforts except self-congratulatory rhetoric. It would be funny if it wasn't so revolting.

Comment author: homunq 21 December 2013 07:15:16PM 1 point [-]

Upvoted because I think this is a real issue, though I'm far from sure whether I'd put it at "worst".

Comment author: SaidAchmiz 07 December 2013 08:05:26AM 1 point [-]

Playing the devil's advocate is when Alice is arguing for some position, and Bob is arguing against it, even though he does not actually disagree with Alice (perhaps because he wants to help Alice strengthen her arguments, clarify her views, etc.).

Hypothetical apostasy is when Alice plays her own devil's advocate, in essence, with no Bob involved.

Comment author: homunq 21 December 2013 01:01:43PM *  2 points [-]

... And that is not a new idea either. "Allow me to play the devil's advocate for a moment" is a thing people say even when they are expressing support before and after that moment.

Comment author: homunq 30 October 2013 05:26:59PM -1 points [-]

Your probability theory here is flawed. The question is not about P(A&B), the probability that both are true, but about P(A|B), the probability that A is true given that B is true. If A is "has cancer" and B is "cancer test is positive", then we calculate P(A|B) as P(B|A)P(A)/P(B); that is, if there's a 1/1000 chance of cancer and the test is right 99/100, then P(A|B) is (.99 × .001) / (.001 × .99 + .999 × .01), which is about 1 in 10.
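Spelled out in code (same assumed numbers: a 1/1000 base rate and a test that is right 99/100 in both directions), the calculation is a direct application of Bayes' rule:

```python
# Bayes' rule for the cancer-test example above.
p_cancer = 0.001                # P(A): base rate
p_pos_given_cancer = 0.99       # P(B|A): true positive rate
p_pos_given_healthy = 0.01      # false positive rate

# P(B) by the law of total probability.
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # roughly 0.09, i.e. about 1 in 11
```

So roughly 9% of positive tests indicate cancer, close to the 1-in-10 figure above.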

Comment author: homunq 03 November 2013 11:08:30AM 0 points [-]

Can anyone explain why the parent was downvoted? I don't get it. I hope there's a better reason than the formatting fail.

Comment author: Luke_A_Somers 31 October 2013 03:33:25PM 1 point [-]

Arrow's theorem considers your options holding others fixed - and does its analysis knowing them. But when you're actually filling out your ballot, you don't have access to that kind of information. So it doesn't prove that there aren't systems where the risk/reward from going strategic is poor under more realistic conditions.

Is there such a theorem?

Comment author: homunq 31 October 2013 07:58:39PM *  0 points [-]

This is a key question. The general answer is:

  1. For realistic cases, there is no such theorem, so the task of choosing a good system is largely about choosing one which doesn't reward strategy in realistic cases.

  2. Roughly speaking, my educated intuition is that strategic payoffs grow insofar as you know that the distinctions you care about are orthogonal to what the average/modal/median voter cares about. So insofar as you are average/modal/median, your strategic incentive should be low; which is a way of saying that a good voting system can have low strategy for most voters in most elections.

  2a. It may be possible to make this intuition rigorous, and prove that no system can make strategy non-viable for the orthogonal-preferenced voter. However, that would involve a lot of statistics and random variables... I guess that's what I'm learning in my PhD, so eventually I may be up to taking on this proof.

  3. The exception, the realistic case where a number of voters have an interest orthogonal to the average voter's, is the chicken dilemma, which I'll talk about a lot more in section 6. Chicken strategy is by far the trickiest realistic strategy to design away.

Comment author: MrMind 31 October 2013 10:31:36AM 0 points [-]

This is a terrific post, worth chopping into several pieces and making into a sequence of its own.
I just have one quibble: shouldn't it be "Arrovian" instead of "Arrowian"?

Comment author: homunq 31 October 2013 07:52:54PM 0 points [-]

Yup. That's what people say. I don't know what the general rule is, but it's definitely right for this case.
