Comment author: 13 January 2014 05:32:12PM 0 points [-]

Thank you for the clarification. Even so, cardinal utility is difficult because it assumes interpersonal comparability: that we care about different people's preferences the same amount, or by definably different amounts.

Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.

Comment author: 16 March 2014 12:43:09PM 0 points [-]

No argument here. It's hard to build a good social welfare function in theory (i.e., even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was that it is a mistake to think that Arrow showed it was impossible.

(Also: I appreciate the "thank you", but it would feel more sincere if it came with an upvote.)

Comment author: 03 December 2013 05:31:46PM *  3 points [-]

(Edit, Later. This is related to the top level replies by CarlShulman and V-V, but I think it's a more general issue, or at least a more general way of putting the same issues.)

I'm wondering about a different effect: over-quantification and false precision leading to bad choices in optimization as more effort goes into the most efficient utility maximization charities.

If we have metrics and we optimize for them, anything the metrics distort or exclude will be exaggeratedly excluded from our conversation. For instance, if we agree that maximizing human health is important, and the evidence shows that something like fighting disease or hunger has a huge positive effect on human health, we could easily optimize our way into significant population growth, followed by a crash due to later resource constraints or food-production volatility, killing billions. (Whether this describes reality is immaterial; the phenomenon of myopic optimization still stands.)

Given that we advocate optimizing, are we, as rationalists, likely to fall prey to this sort of behavior when we pick metrics? If we don't understand the system more fully, the answer is probably yes; there will always be unanticipated side-effects in incompletely understood systems, by definition, and the more optimized a system becomes, the less stable it is to shocks.

More diversity of investment into lower-priority goals and alternative ideals (meaning less optimization), as currently occurs, seems likely to mitigate these problems.

Comment author: 21 December 2013 07:29:23PM *  1 point [-]

I think you've done better than CarlShulman and V_V at expressing what I see as the most fundamental problem with EA: the fact that it is biased towards the easily- and short-term- measurable, while (it seems to me) the most effective interventions are often neither.

In other words: how do you avoid the pathologies of No Child Left Behind, where "reform" becomes synonymous with optimizing to a flawed (and ultimately, costly) metric?

This issue is touched by the original post, but not at all deeply.

Comment author: 03 December 2013 05:44:07PM 4 points [-]

Many of these issues seem related to Arrow's impossibility theorem: if groups have genuinely different values, and we optimize for one set rather than another, then ants get tiny apartments and people starve, or we destroy the world economy because we discount too much, etc.

To clarify: I think LessWrong assumes most issues are simple because we know little about them; we want to just fix them. As an example, poverty remains unsolved for good reasons: it's hard to balance incentives and growth and to deal with heterogeneity; there are absolute limits on current wealth and our ability to move it around; and nations and individuals have competing priorities. It's not unsolved because people are too stupid to give money to feed-the-poor charities. We underestimate the rest of the world because we're really good at one thing, and think everyone is stupid for not being good at it; and even if we're right, we're not good at (or at understanding) many other things, and some of those things matter for fixing these problems.

Comment author: 21 December 2013 07:23:09PM 2 points [-]

Note: Arrow's Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow's "impossible" criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow's theorem is based on a restriction to ordinal cases.)
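To make the cardinal point concrete, here is a minimal sketch (my own illustration, not from the thread; the function name and profile are hypothetical) of such a social welfare function: simply sum each voter's cardinal utilities.

```python
def cardinal_swf(utility_profiles):
    """Rank candidates by total cardinal utility across voters."""
    totals = {}
    for utilities in utility_profiles.values():
        for candidate, u in utilities.items():
            totals[candidate] = totals.get(candidate, 0.0) + u
    # Best first; ties broken alphabetically for determinism.
    return sorted(totals, key=lambda c: (-totals[c], c))

# Hypothetical cardinal ballots on a 0-1 scale.
profile = {
    "voter1": {"A": 0.9, "B": 0.5, "C": 0.1},
    "voter2": {"A": 0.2, "B": 0.8, "C": 0.6},
    "voter3": {"A": 0.7, "B": 0.3, "C": 0.4},
}
print(cardinal_swf(profile))  # ['A', 'B', 'C']
```

Unanimity holds (if everyone scores A above B, A totals more), the A-versus-B comparison depends only on A's and B's scores (the cardinal analog of IIA), and no single voter's scores fix the outcome. The hard part, per the exchange above, is where the cardinal numbers come from.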

Comment author: 12 December 2013 06:31:34PM 4 points [-]

To my mind, the worst thing about the EA movement are its delusions of grandeur. Both individually and collectively, the EA people I have met display a staggering and quite sickening sense of their own self-importance. They think they are going to change the world, and yet they have almost nothing to show for their efforts except self-congratulatory rhetoric. It would be funny if it wasn't so revolting.

Comment author: 21 December 2013 07:15:16PM 1 point [-]

Upvoted because I think this is a real issue, though I'm far from sure whether I'd put it at "worst".

Comment author: 07 December 2013 08:05:26AM 1 point [-]

Playing the devil's advocate is when Alice is arguing for some position, and Bob is arguing against it, even though he does not actually disagree with Alice (perhaps because he wants to help Alice strengthen her arguments, clarify her views, etc.).

Hypothetical apostasy is when Alice plays her own devil's advocate, in essence, with no Bob involved.

Comment author: 21 December 2013 01:01:43PM *  1 point [-]

... And that is not a new idea either. "Allow me to play the devil's advocate for a moment" is a thing people say even when they are expressing support before and after that moment.

Comment author: 30 October 2013 05:26:59PM -1 points [-]

Your probability theory here is flawed. The question is not about P(A&B), the probability that both are true, but about P(A|B), the probability that A is true given that B is true. If A is "has cancer" and B is "cancer test is positive", then we calculate P(A|B) as P(B|A)P(A)/P(B); that is, if there's a 1/1000 chance of cancer and the test is right 99/100, then P(A|B) is (0.99 × 0.001)/(0.001 × 0.99 + 0.999 × 0.01), which is about 1 in 10.
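The same computation as a sketch in Python (numbers from the comment above; the function name is mine):

```python
def posterior(prior, true_positive_rate, false_positive_rate):
    """P(A|B) by Bayes' rule: P(B|A)P(A) / P(B)."""
    p_b = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / p_b

# 1/1000 base rate; the test is right 99 times in 100 either way.
p = posterior(prior=0.001, true_positive_rate=0.99, false_positive_rate=0.01)
print(round(p, 3))  # 0.09, i.e. roughly 1 in 10
```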

Comment author: 03 November 2013 11:08:30AM 0 points [-]

Can anyone explain why the parent was downvoted? I don't get it. I hope there's a better reason than the formatting fail.

Comment author: 31 October 2013 03:33:25PM 1 point [-]

Arrow's theorem considers your options holding others fixed - and does its analysis knowing them. But when you're actually filling out your ballot, you don't have access to that kind of information. So it doesn't prove that there aren't systems where the risk/reward from going strategic is poor under more realistic conditions.

Is there such a theorem?

Comment author: 31 October 2013 07:58:39PM *  -1 points [-]

This is a key question. The general answer is:

1. For realistic cases, there is no such theorem, so much of the task of choosing a good system is choosing one that doesn't reward strategy in the cases that actually arise.

2. Roughly speaking, my educated intuition is that strategic payoffs grow insofar as you know that the distinctions you care about are orthogonal to what the average/modal/median voter cares about. So insofar as you are average/modal/median, your strategic incentive should be low; which is a way of saying that a good voting system can have low strategy for most voters in most elections.

2a. It may be possible to make this intuition rigorous, and prove that no system can make strategy non-viable for the orthogonal-preferenced voter. However, that would involve a lot of statistics and random variables.... I guess that's what I'm learning in my PhD so eventually I may be up to taking on this proof.

3. The exception, the realistic case where there are a number of voters who have an interest that's orthogonal to the average voter's, is a case called the chicken dilemma, which I'll talk about a lot more in section 6. Chicken strategy is by far the trickiest realistic strategy to design away.

Comment author: 31 October 2013 10:31:36AM 0 points [-]

This is a terrific post, worth chopping into several pieces and making into a sequence of its own.
I just have one quibble: Arrovian instead of Arrowian?

Comment author: 31 October 2013 07:52:54PM 0 points [-]

Yup. That's what people say. I don't know what the general rule is, but it's definitely right for this case.

Comment author: 30 October 2013 10:05:13PM *  0 points [-]

> You can't make up just one scenario and its result and say that you have a voting rule; a rule must give results for all possible scenarios.

I think I see how the grandparent was confusing. I was assuming that the voting rule was something like plurality voting, with enough sophistication to make it a well-defined rule.

What I meant to do was define two dictatorship criteria which differ from Arrow's, which apply to individuals under voting rules, rather than applying to rules. Plurality voting (with a bit more sophistication) is a voting rule. Bob choosing for everyone is a voting rule. But the rule where Bob chooses for everyone has an a priori dictator: Bob. (He's also an a posteriori dictator, which isn't surprising.)

Plurality voting as a voting rule does not empower an a priori dictator as I defined that in the grandparent. But it is possible to find a situation under plurality voting where an a posteriori dictator exists; that is, we cannot say that plurality voting is free from a posteriori dictators. That is what the nondictatorship criterion (which is applied to voting rules!) means: for a rule to satisfy nondictatorship, it must be impossible to construct a situation where that voting rule empowers an a posteriori dictator.

Because unanimity and IIA together rule out nondictatorship, for any election which satisfies unanimity and IIA, you can carefully select a ballot and report just that ballot as the group preferences. But that's because it's impossible for the group to prefer A>B>C with no individual member preferring A>B>C, and so there is guaranteed to be an individual who mirrors the group, not an individual who determines the group. Since individuals determining group preferences is what is politically dangerous, I'm not worried about the 'nondictatorship' criterion, because I'm not worried about mirroring.
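The mirroring-versus-determining distinction can be illustrated with a quick sketch (mine, using Borda count purely for concreteness; Borda violates IIA, so this is not within Arrow's premises). Here the group ranking coincides with voter 0's ballot, yet no ballot voter 0 could submit would change the group ranking:

```python
from itertools import permutations

def borda_ranking(ballots):
    """Group ranking from strict ranked ballots via Borda scores."""
    n = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for rank, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - rank)
    return tuple(sorted(scores, key=lambda c: (-scores[c], c)))

ballots = [("A", "B", "C"),  # voter 0: mirrors the group outcome
           ("A", "B", "C"),
           ("B", "A", "C"),
           ("A", "C", "B"),
           ("A", "B", "C")]
print(borda_ranking(ballots))  # ('A', 'B', 'C'), same as voter 0's ballot

# ...but voter 0 does not *determine* it: every ballot they could
# submit leaves the group ranking unchanged.
for alt in permutations("ABC"):
    assert borda_ranking([alt] + ballots[1:]) == ("A", "B", "C")
```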

> I'm not going to rewrite Arrow's whole paper here but that's really what he proved.

I've read it; I've read Yu's proof; I've read Barbera's proof, I've read Geanakoplos's proof, I've read Hansen's proof. (Hansen's proof does follow a different strategy from the one I discussed, but I came across it after writing the grandparent.) I'm moderately confident I know what the theorem means. I'm almost certain that our disagreement stems from different uses of the phrase "a priori dictator," and so hope that disagreement will disappear quickly.

Comment author: 31 October 2013 02:23:08AM *  0 points [-]

I, too, hope that our disagreement will soon disappear. But as far as I can see, it's clearly not a semantic disagreement; one of us is just wrong. I'd say it's you.

So. Say there are 3 voters, and without loss of generality, voter 1 prefers A>B>C. Now, for every one of the 21 distinct combinations for the other two, you have to write down who wins, and I will find either an (a priori, determinative; not mirror) dictator or a non-IIA scenario.

ABC ABC: A

ABC ACB: A

ABC BAC: ?... you fill in these here

ABC BCA: ?

ABC CAB: .

ABC CBA: .

ACB ACB: .

ACB BAC:

ACB BCA:

ACB CAB:

ACB CBA:

BAC BAC:

BAC BCA:

BAC CAB:

BAC CBA:

BCA BCA:

BCA CAB: .... this one's really the key, but please fill in the rest too.

BCA CBA:

CAB CAB:

CAB CBA:

CBA CBA:

Once you've copied these to your comment I will delete my copies.
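(As an aside, the count of 21 checks out: with 3 candidates there are 3! = 6 strict orderings, and treating the other two voters' ballots as an unordered pair, repeats allowed, gives 6·7/2 = 21 combinations. A quick sketch:)

```python
from itertools import permutations, combinations_with_replacement

orderings = list(permutations("ABC"))  # the 6 strict rankings of 3 candidates
pairs = list(combinations_with_replacement(orderings, 2))  # unordered, repeats allowed
print(len(orderings), len(pairs))  # 6 21
```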

Comment author: 30 October 2013 08:02:15PM *  -1 points [-]

By an a priori dictatorship, I mean there is some individual 1 such that $F(R_1,R_2,\ldots,R_N)=R_1\ \forall\ (R_2,\ldots,R_N)\in L(A)^{N-1}$.

By an a posteriori dictatorship, I mean there is some individual 1 such that $\exists (R_2,\ldots,R_N)\in L(A)^{N-1}\ s.t.\ F(R_1,\ldots,R_N)=R_1\ \forall\ R_1$.

There is obviously not an a priori dictatorship for all voting environments under all aggregation rules that satisfy unanimity and IIA. For example, if 9 people prefer A>B>C, and 1 person prefers B>C>A, then society prefers A, regardless of how any specific individual changes their vote (so long as only one vote is changed).
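That 9-versus-1 example can be checked directly with a small sketch (mine; plurality stands in for a concrete rule here, though plurality does not itself satisfy Arrow's premises):

```python
from collections import Counter
from itertools import permutations

def plurality_winner(ballots):
    """Winner by first-place counts; ties broken alphabetically."""
    counts = Counter(ballot[0] for ballot in ballots)
    top = max(counts.values())
    return min(c for c in counts if counts[c] == top)

ballots = [("A", "B", "C")] * 9 + [("B", "C", "A")]
assert plurality_winner(ballots) == "A"

# No single voter, changing only their own ballot, can alter the outcome.
for i in range(len(ballots)):
    for alt in permutations("ABC"):
        changed = ballots[:i] + [alt] + ballots[i + 1:]
        assert plurality_winner(changed) == "A"
print("no single voter is pivotal")
```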

(Note the counterfactual component of my statement- there needs to be an individual who can change the social preference function, not just identify the social preference function.)

> But it's not that Mary just happens to turn out to be the pivotal voter between a sea of red on one side and blue on the other.

Every proof of the theorem that I can see operates exactly this way; I'm still not seeing what specific step you think I misunderstand.

Comment author: 30 October 2013 08:25:58PM 1 point [-]

I'm sorry, you really are wrong here. You can't make up just one scenario and its result and say that you have a voting rule; a rule must give results for all possible scenarios. And once you do, you'll realize that the only ones which pass both unanimity and IIA are the ones with an a priori dictatorship. I'm not going to rewrite Arrow's whole paper here but that's really what he proved.
