
Comment author: owencb 28 September 2014 05:20:33PM 1 point

Giving them different exponents in the Nash product has some appeal, except that the NBS without modification does seem correct in the two-delegate case, where the weight assigned to the different theories is already captured by the fact that the disagreement point is more closely aligned with the view of the theory with more weight. If we don't think that's right in the two-delegate case, we should have some account of why not.

Comment author: badger 28 September 2014 06:35:56PM 1 point

The issue is when we should tilt outcomes in favor of higher credence theories. Starting from a credence-weighted mixture, I agree theories should have equal bargaining power. Starting from a more neutral disagreement point, like the status quo actions of a typical person, higher credence should entail more power / votes / delegates.

On a quick example, the two approaches give similar, though not identical, outcomes. If the total feasible set of utilities is {(x,y) | x^2 + y^2 ≤ 1; x,y ≥ 0}, then the NBS starting from (0.9, 0.1) is about (0.96, 0.29), while the NBS starting from (0,0) with theory 1 having nine delegates (i.e. an exponent of nine in the Nash product) and theory 2 having one delegate is (3,1)/√10 ≈ (0.95, 0.32). So here, weighted bargaining from the equal status quo actually leaves the lower credence theory slightly better off than equal bargaining from the credence-weighted mixture.

If the credence-weighted mixture were on the Pareto frontier, the two approaches would be equivalent.
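
These solutions are easy to check numerically. The sketch below is my own (not from the thread): it grid-searches the quarter-circle frontier for the point maximizing the weighted Nash product, with `nash_point` a hypothetical name.

```python
import math

def nash_point(d1, d2, w1=1.0, w2=1.0, steps=200_000):
    """Maximize (x - d1)^w1 * (y - d2)^w2 over the frontier x^2 + y^2 = 1,
    x, y >= 0, by grid search over the angle."""
    best, best_pt = None, None
    for i in range(1, steps):
        t = (math.pi / 2) * i / steps
        x, y = math.cos(t), math.sin(t)
        if x <= d1 or y <= d2:
            continue  # no gain over the disagreement point here
        val = w1 * math.log(x - d1) + w2 * math.log(y - d2)
        if best is None or val > best:
            best, best_pt = val, (x, y)
    return best_pt

# equal bargaining from the credence-weighted mixture (0.9, 0.1)
print(nash_point(0.9, 0.1))          # ≈ (0.957, 0.289)
# nine delegates vs. one (exponents 9 and 1) from (0, 0)
print(nash_point(0.0, 0.0, 9.0, 1.0))  # ≈ (0.949, 0.316) = (3, 1)/sqrt(10)
```

The weighted case has a closed form: maximizing x^9·y on the circle gives x = 3y, i.e. (3,1)/√10.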

Comment author: owencb 28 September 2014 03:33:51PM 1 point

I think the Nash bargaining solution should be pretty good if there are only two members of the parliament, but it's not clear how to scale up to a larger parliament.

Comment author: badger 28 September 2014 04:35:16PM 2 points

For the NBS with more than two agents, you just maximize the product of everyone's gain in utility over the disagreement point. For Kalai-Smorodinsky, you continue to equate the ratios of gains, i.e. you pick the point on the Pareto frontier that lies on the line between the disagreement point and the vector of ideal utilities.

Agents could be given more bargaining power by giving them different exponents in the Nash product.
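
As a concrete sketch (the simplex feasible set and function names are my own illustration, not from the comment): with feasible utilities {u ≥ 0 : Σu_i ≤ 1} and disagreement point at the origin, maximizing the weighted Nash product Πu_i^{w_i} gives u_i = w_i/Σw by the Lagrange conditions, while Kalai-Smorodinsky picks the frontier point on the ray toward the ideal vector (1, ..., 1), i.e. u_i = 1/n.

```python
import math
import random

def weighted_nash_simplex(weights):
    """Weighted NBS from d = 0 on {u >= 0 : sum(u) <= 1}: the Lagrange
    conditions for maximizing prod(u_i ** w_i) give u_i = w_i / sum(weights)."""
    total = sum(weights)
    return [w / total for w in weights]

def ks_simplex(n):
    """Kalai-Smorodinsky from d = 0 on the same set: each agent's ideal is 1,
    so the solution is where the ray t * (1, ..., 1) hits the frontier."""
    return [1 / n] * n

weights = [3, 1, 1]                     # e.g. delegate counts for three theories
star = weighted_nash_simplex(weights)   # [0.6, 0.2, 0.2]

# spot-check optimality against random points on the frontier
random.seed(0)
for _ in range(10_000):
    a, b = sorted(random.random() for _ in range(2))
    u = [a, b - a, 1 - b]               # random utilities summing to 1
    lhs = math.prod(x ** w for x, w in zip(u, weights))
    rhs = math.prod(x ** w for x, w in zip(star, weights))
    assert lhs <= rhs + 1e-12
```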

Comment author: owencb 28 September 2014 03:36:13PM 3 points

I think there's a fairly natural disagreement point here: the outcome with no trade, which is just a randomisation over the top options of the different theories, with probabilities given by the credences in those theories.

One possibility to progress is to analyse what happens here in the two-theory case, perhaps starting with some worked examples.

Comment author: badger 28 September 2014 04:22:47PM *  1 point

Alright, a credence-weighted randomization between ideals and then bargaining on equal footing from there makes sense. I was imagining the parliament starting from scratch.

Another alternative would be to use a hypothetical disagreement point corresponding to the worst utility for each theory, while giving higher credence theories more bargaining power. Or bargaining could start from a typical person's life (the outcome can't be worse for any theory than a policy of being kind to your family, giving to socially-motivated causes, cheating on your taxes a little, telling white lies, and not murdering).

Comment author: gjm 28 September 2014 09:44:06AM 9 points

Consider the following degenerate case: there is only one decision to be made, and your competing theories assess it as follows.

  • Theory 1: option A is vastly worse than option B.
  • Theory 2: option A is just a tiny bit better than option B.

And suppose you find theory 2 just slightly more probable than theory 1.

Then it seems like any parliamentary model is going to say that theory 2 wins, and you choose option A. That seems like a bad outcome.

Accordingly, I suggest that to arrive at a workable parliamentary model we need to do at least one of the following:

  • Disallow degenerate cases of this kind. (Seems wrong; e.g., suppose you have an important decision to make on your deathbed.)
  • Bite the bullet and say that in the situation above you really are going to choose A over B. (Seems pretty terrible.)
  • Take into account how strongly the delegates feel about the decision, in such a way that you'd choose B in this situation. (Handwavily it feels as if any way of doing this is going to constrain how much "tactical" voting the delegates can engage in.)

As you might gather, I find the last option the most promising.

Comment author: badger 28 September 2014 01:30:35PM 1 point

I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we're doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.

Two outcomes are too degenerate a case if agents get their own scales, so suppose A, B, and C are the options, theory 1 has ordinal preferences B > C > A, and theory 2 has preferences A > C > B. Depending on how much of a compromise C is for each agent, the outcome could vary between

  • choosing C (say if C is 99% as good as the ideal for each agent),
  • a 50/50 lottery over A and B (if C is only 1% better than the worst for each), or
  • some other lottery (for instance, if theory 1 thinks C achieves 90% of B and theory 2 thinks C achieves 40% of A, then a lottery with weight 2/3 on C and 1/3 on A gives each of them 60% of the gain between their best and worst).

Comment author: badger 27 September 2014 11:12:40PM 7 points

My reading of the problem is that a satisfactory Parliamentary Model should:

  • Represent moral theories as delegates with preferences over adopted policies.
  • Allow delegates to stand up for their theories and bargain over the final outcome, extracting concessions on vital points while letting less important policies slide.
  • Restrict delegates' use of dirty tricks or deceit.

Since bargaining in good faith appears to be the core feature, my mind immediately goes to models of bargaining under complete information rather than voting. What are the pros and cons of starting with the Nash bargaining solution as implemented by an alternating offer game?

The two obvious issues are how to translate delegates' preferences into utilities and what the disagreement point is. Assuming a utility function is fairly mild if the delegate has preferences over lotteries. Plus, there's no utility comparison problem even though you need cardinal utilities. The lack of a natural disagreement point is trickier. What intuitions might be lost going this route?

Comment author: Manfred 05 June 2014 11:00:43PM 2 points

Poor Cato.

Cato swapping with Brutus produces the same absolute gains as Antonius swapping with Brutus - is there a strategyproof mechanism that goes that way instead?

How about "a soldier can signal that they don't have the job they want. Then, the people who want to change jobs are ordered into a random loop, and jobs are rotated one place."

Hm, but if Antonius doesn't want his job either, we could end up with a bad outcome. Is Cato really hosed?

Comment author: badger 11 June 2014 02:10:32PM 2 points

It turns out the only Pareto efficient, individually rational (i.e. no one ever gets something worse than their initial job), and strategyproof mechanism is Top Trading Cycles. In order to make Cato better off, we'd have to violate one of those properties in some way.
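
For concreteness, here is a short sketch of Top Trading Cycles (the three-soldier preferences below are hypothetical, chosen to match the scenario in the parent comments): each agent points at the current owner of their favorite remaining object; every cycle of pointers trades and leaves, and the process repeats.

```python
def top_trading_cycles(owners, prefs):
    """owners: agent -> object currently held; prefs: agent -> objects, best
    first. Repeatedly let each agent point at the owner of their favorite
    remaining object; trade along each cycle and remove it."""
    owners = dict(owners)
    assignment = {}
    while owners:
        points_to = {}
        for agent in owners:
            top = next(o for o in prefs[agent] if o in owners.values())
            points_to[agent] = next(a for a, o in owners.items() if o == top)
        # a cycle always exists: follow pointers until an agent repeats
        agent = next(iter(owners))
        seen = []
        while agent not in seen:
            seen.append(agent)
            agent = points_to[agent]
        cycle = seen[seen.index(agent):]
        for a in cycle:
            assignment[a] = owners[points_to[a]]
        for a in cycle:
            del owners[a]
    return assignment

owners = {"Antonius": "a", "Brutus": "b", "Cato": "c"}
prefs = {"Antonius": ["b", "a", "c"],
         "Brutus":   ["a", "b", "c"],
         "Cato":     ["b", "c", "a"]}   # Cato also wants b, but never gets it
print(top_trading_cycles(owners, prefs))
# {'Antonius': 'b', 'Brutus': 'a', 'Cato': 'c'}
```

Antonius and Brutus form a cycle in the first round and trade; Cato, pointing into that cycle, is left holding his initial job, which is exactly the "poor Cato" outcome above.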

Strategyproof Mechanisms: Possibilities

23 badger 02 June 2014 02:26AM

Despite dictatorships being the only strategyproof mechanisms in general, more interesting strategyproof mechanisms exist for specialized settings. I introduce single-peaked preferences and discrete exchange as two fruitful domains.

Strategyproofness is a very appealing property. When interacting with a strategyproof mechanism, a person is never worse off for being honest (at least in a causal decision-theoretic sense), so there is no need to make conjectures about the actions of others. However, the Gibbard-Satterthwaite theorem showed that dictatorships are the only universal strategyproof mechanisms for choosing from three or more outcomes. If we want to avoid dictatorships while keeping strategyproofness, we’ll have to narrow our attention to specific applications with more structure. In this post, I’ll introduce two restricted domains with more interesting strategyproof mechanisms.
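
One classic mechanism for the single-peaked domain, sketched below (my own illustration, not from the post): with preferences single-peaked over a line, choosing the median reported peak is strategyproof, because a misreport either leaves the median unchanged or pushes it away from the liar's true peak.

```python
import statistics

def median_mechanism(peaks):
    """Choose the median of the reported peaks (single-peaked domain)."""
    return statistics.median_low(peaks)

peaks = [0.2, 0.5, 0.9]          # true peaks of three voters
truth = median_mechanism(peaks)  # 0.5

# no unilateral misreport moves the outcome closer to a voter's true peak
for i, peak in enumerate(peaks):
    for lie in [0.0, 0.3, 0.7, 1.0]:
        outcome = median_mechanism(peaks[:i] + [lie] + peaks[i + 1:])
        assert abs(outcome - peak) >= abs(truth - peak)
```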

continue reading »
Comment author: edanm 26 May 2014 02:23:09PM 13 points

Reading "The Selfish Gene" teaches enough evolutionary biology to understand what the field is about, grasp its basics, and converse on it intelligently.

What book can I read that will do the same for me in:

  • Medicine/biology/physiology (e.g. able to understand the very basic concepts of what a doctor does)

  • Law (e.g. able to understand the very basic concepts of working as a lawyer).

Bonus points if the book on Law explains the practical difference between common law and civil law.


Comment author: badger 27 May 2014 09:43:10PM 9 points

Metafilter has a classic thread on "What book is the best introduction to your field?". There are multiple recommendations there for both law and biology.

Comment author: Vaniver 17 May 2014 04:24:20PM 5 points

Um, not quite:

In addition to red and red

Very Henry Ford-ish.

Comment author: badger 17 May 2014 04:36:25PM 0 points


Comment author: Sniffnoy 16 May 2014 04:33:55AM *  3 points

However, since Arrow deals with social welfare functions which take a profile of preferences as input and outputs a full preference ranking, it really says something about aggregating a set of preferences into a single group preference.

I'm going to nitpick here -- it's possible to write down forms of Arrow's theorem where you do get a single output. Of course, in that case, unlike in the usual formulation, you have to make assumptions about what happens when candidates drop out -- considering what you have as a voting system that yields results for an election among any subset of the candidates, rather than just that particular set of candidates. So it's a less convenient formulation for proving things. Formulated this way, though, the IIA condition actually becomes the thing it's usually paraphrased as -- "If someone other than the winner drops out, the winner stays the same."

Edit: Spelling

Comment author: badger 16 May 2014 12:46:36PM 3 points

Since Arrow and GS are equivalent, it's not surprising to see intermediate versions. Thanks for pointing that one out. I still stand by the statement for the common formulation of the theorem. We're hitting the fuzzy lines between what counts as an alternate formulation of the same theorem, a corollary, or a distinct theorem.
