Giving them different exponents in the Nash product has some appeal, except that it does seem like NBS without modification is correct in the two-delegate case (where the weight assigned to the different theories is captured properly by the fact that the defection point is more closely aligned with the view of the theory with more weight). If we don't think that's right in the two-delegate case we should have some account of why not.
The issue is when we should tilt outcomes in favor of higher credence theories. Starting from a credence-weighted mixture, I agree theories should have equal bargaining power. Starting from a more neutral disagreement point, like the status quo actions of a typical person, higher credence should entail more power / votes / delegates.
On a quick example, the two approaches come out close, with weighted bargaining from an equal status quo slightly favoring the lower credence theory. If the total feasible set of utilities is {(x,y) | x^2 + y^2 ≤ 1; x,y ≥ 0}, then the symmetric NBS starting from (0.9, 0.1) is about (0.96, 0.29), while the NBS starting from (0,0) with theory 1 having nine delegates (i.e. an exponent of nine in the Nash product) and theory 2 having one delegate is (√(9/10), √(1/10)) ≈ (0.95, 0.32).
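For anyone who wants to check the geometry, here is a quick numerical sanity check by grid search over the frontier (plain Python; the 200,000-step resolution is an arbitrary choice of mine):

```python
import math

def argmax_on_frontier(objective, steps=200_000):
    """Grid search over the Pareto frontier x^2 + y^2 = 1, x, y >= 0."""
    best_t, best_v = 0.0, -float("inf")
    for i in range(1, steps):
        t = (math.pi / 2) * i / steps
        v = objective(math.cos(t), math.sin(t))
        if v > best_v:
            best_t, best_v = t, v
    return math.cos(best_t), math.sin(best_t)

# Symmetric NBS from the credence-weighted mixture (0.9, 0.1):
sym = argmax_on_frontier(lambda x, y: (x - 0.9) * (y - 0.1))

# Asymmetric NBS from (0, 0) with exponents 9 and 1; analytically this
# is (sqrt(9/10), sqrt(1/10)):
asym = argmax_on_frontier(lambda x, y: x**9 * y)
```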
If the credence-weighted mixture were on the Pareto frontier, the two approaches would be equivalent.
I think the Nash bargaining solution should be pretty good if there are only two members of the parliament, but it's not clear how to scale up to a larger parliament.
For the NBS with more than two agents, you just maximize the product of everyone's gains in utility over the disagreement point. For Kalai–Smorodinsky, you continue to equate the ratios of gains, i.e. you pick the point on the Pareto frontier on the line between the disagreement point and the vector of ideal utilities.
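As a sketch of the Kalai–Smorodinsky construction on the quarter-circle feasible set from the example above (the bisection method and the disagreement points here are illustrative choices of mine, not anything canonical):

```python
import math

def kalai_smorodinsky(d):
    """Kalai-Smorodinsky point on the feasible set
    {(x, y) : x^2 + y^2 <= 1, x, y >= 0} from disagreement point d."""
    dx, dy = d
    # Each agent's ideal utility: the best they can get while the other
    # stays at their disagreement level.
    ideal = (math.sqrt(1 - dy * dy), math.sqrt(1 - dx * dx))
    # Bisect along the segment from d toward the ideal point to find where
    # it crosses the Pareto frontier (this equates the ratios of gains).
    lo, hi = 0.0, 1.0
    for _ in range(60):
        t = (lo + hi) / 2
        x = dx + t * (ideal[0] - dx)
        y = dy + t * (ideal[1] - dy)
        if x * x + y * y < 1:
            lo = t
        else:
            hi = t
    return dx + lo * (ideal[0] - dx), dy + lo * (ideal[1] - dy)
```

From an equal disagreement point (0, 0) this gives the symmetric split (1/√2, 1/√2), as you'd expect.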
Agents could be given more bargaining power by giving them different exponents in the Nash product.
I think there's a fairly natural disagreement point here: the outcome with no trade, which is just a randomisation of the top options of the different theories, with probability according to the credence in that theory.
One way to make progress would be to analyse what happens here in the two-theory case, perhaps starting with some worked examples.
Alright, a credence-weighted randomization between ideals and then bargaining on equal footing from there makes sense. I was imagining the parliament starting from scratch.
Another alternative would be to use a hypothetical disagreement point corresponding to the worst utility for each theory, giving higher credence theories more bargaining power. Or to bargain with unequal power from the disagreement point of a typical person's life (the outcome then can't be worse for any theory than a policy of being kind to your family, giving to socially-motivated causes, cheating on your taxes a little, telling white lies, and not murdering).
Consider the following degenerate case: there is only one decision to be made, and your competing theories assess it as follows.
- Theory 1: option A is vastly worse than option B.
- Theory 2: option A is just a tiny bit better than option B.
And suppose you find theory 2 just slightly more probable than theory 1.
Then it seems like any parliamentary model is going to say that theory 2 wins, and you choose option A. That seems like a bad outcome.
Accordingly, I suggest that to arrive at a workable parliamentary model we need to do at least one of the following:
- Disallow degenerate cases of this kind. (Seems wrong; e.g., suppose you have an important decision to make on your deathbed.)
- Bite the bullet and say that in the situation above you really are going to choose A over B. (Seems pretty terrible.)
- Take into account how strongly the delegates feel about the decision, in such a way that you'd choose B in this situation. (Handwavily it feels as if any way of doing this is going to constrain how much "tactical" voting the delegates can engage in.)
As you might gather, I find the last option the most promising.
I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we're doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.
Two outcomes is too degenerate a case if agents get their own scales, so suppose A, B, and C are the options, theory 1 has ordinal preferences B > C > A, and theory 2 has preferences A > C > B. Depending on how much of a compromise C is for each agent, the outcome could vary between
- choosing C (say if C is 99% as good as the ideal for each agent),
- a 50/50 lottery over A and B (if C is only 1% better than the worst for each), or
- some other lottery (for instance, theory 1 thinks C achieves 90% of B and theory 2 thinks C achieves 40% of A; then a lottery with weight 2/3 on C and 1/3 on A gives them each 60% of the gain between their best and worst).
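The last case checks out numerically, normalising each theory's utilities to its own 0-to-1 scale (exactly the "each agent gets their own scale" idea):

```python
# Normalised utilities: each theory's worst option = 0, best = 1.
# Assumption from the example: 1 rates C at 90% of B, 2 rates C at 40% of A.
u1 = {"A": 0.0, "B": 1.0, "C": 0.9}
u2 = {"A": 1.0, "B": 0.0, "C": 0.4}

lottery = {"C": 2 / 3, "A": 1 / 3}  # weight 2/3 on C, 1/3 on A

eu1 = sum(p * u1[o] for o, p in lottery.items())  # 0.6 for theory 1
eu2 = sum(p * u2[o] for o, p in lottery.items())  # 0.6 for theory 2
```

Each theory gets 60% of the distance between its worst and best outcome, as claimed.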
My reading of the problem is that a satisfactory Parliamentary Model should:
- Represent moral theories as delegates with preferences over adopted policies.
- Allow delegates to stand up for their theories and bargain over the final outcome, extracting concessions on vital points while letting others' policies slide.
- Restrict delegates' use of dirty tricks or deceit.
Since bargaining in good faith appears to be the core feature, my mind immediately goes to models of bargaining under complete information rather than voting. What are the pros and cons of starting with the Nash bargaining solution as implemented by an alternating offer game?
The two obvious issues are how to translate delegates' preferences into utilities and what the disagreement point is. Assuming a utility function is fairly mild if the delegate has preferences over lotteries. Plus, there's no utility comparison problem even though you need cardinal utilities. The lack of a natural disagreement point is trickier. What intuitions might be lost going this route?
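One point in favour of the alternating-offers route: Rubinstein's equilibrium division converges to the symmetric NBS split as players become patient. A minimal sketch (the discount factors below are illustrative, not part of anyone's proposal):

```python
def rubinstein_share(d1, d2):
    """First proposer's equilibrium share of a unit pie in Rubinstein's
    alternating-offers game, with per-round discount factors d1, d2."""
    return (1 - d2) / (1 - d1 * d2)

# The first mover has an advantage with impatient players, but with
# equally patient players the split approaches the 50/50 symmetric NBS:
# rubinstein_share(0.5, 0.5) = 2/3, rubinstein_share(0.99, 0.99) ~ 0.5025.
```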
Poor Cato.
Cato swapping with Brutus produces the same absolute gains as Antonius swapping with Brutus - is there a strategyproof mechanism that goes that way instead?
How about "a soldier can signal that they don't have the job they want. Then, the people who want to change jobs are ordered into a random loop, and jobs are rotated one place."
Hm, but if Antonius doesn't want his job either, we could end up with a bad outcome. Is Cato really hosed?
It turns out the only Pareto efficient, individually rational (i.e. no one ever gets something worse than their initial job), and strategyproof mechanism is Top Trading Cycles. In order to make Cato better off, we'd have to violate one of those properties in some way.
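For concreteness, a minimal Top Trading Cycles sketch on the three-soldier example; the preference lists are my guesses at the scenario, not from the original post:

```python
def top_trading_cycles(endowment, prefs):
    """endowment: agent -> currently held job; prefs: agent -> jobs, best first."""
    assignment = {}
    remaining = dict(endowment)
    while remaining:
        owner = {job: agent for agent, job in remaining.items()}
        # Each remaining agent points at the holder of their favourite
        # still-available job (possibly themselves).
        points_to = {
            a: owner[next(j for j in prefs[a] if j in owner)]
            for a in remaining
        }
        # Walk the pointers from an arbitrary agent until a cycle repeats.
        seen = []
        a = next(iter(remaining))
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]
        # Everyone in the cycle receives the job they pointed at and leaves.
        for agent in cycle:
            assignment[agent] = remaining[points_to[agent]]
        for agent in cycle:
            del remaining[agent]
    return assignment

# Assumed preferences: Antonius and Cato both covet Brutus's job B,
# while Brutus covets Antonius's job A.
prefs = {
    "Antonius": ["B", "A", "C"],
    "Brutus": ["A", "B", "C"],
    "Cato": ["B", "C", "A"],
}
result = top_trading_cycles({"Antonius": "A", "Brutus": "B", "Cato": "C"}, prefs)
# Antonius and Brutus form a cycle and trade; Cato keeps C. Cato is hosed.
```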
Reading "The Selfish Gene" teaches enough evolutionary biology to understand what the field is about, grasp its basics, and converse on it intelligently.
What book can I read that will do the same for me in:
Medicine/biology/physiology (e.g. able to understand the very basic concepts of what a doctor does)
Law (e.g. able to understand the very basic concepts of working as a lawyer)
Bonus points if the book on Law also explains the practical difference between common-law and civil-law systems.
Thanks!
Metafilter has a classic thread on "What book is the best introduction to your field?". There are multiple recommendations there for both law and biology.
Um, not quite:
In addition to red and red
Very Henry Ford-ish.
However, since Arrow deals with social welfare functions which take a profile of preferences as input and outputs a full preference ranking, it really says something about aggregating a set of preferences into a single group preference.
I'm going to nitpick here -- it's possible to write down forms of Arrow's theorem where you do get a single output. Of course, in that case, unlike in the usual formulation, you have to make assumptions about what happens when candidates drop out -- considering what you have as a voting system that yields results for an election among any subset of the candidates, rather than just that particular set of candidates. So it's a less convenient formulation for proving things. Formulated this way, though, the IIA condition actually becomes the thing it's usually paraphrased as -- "If someone other than the winner drops out, the winner stays the same."
Edit: Spelling
Since Arrow and GS are equivalent, it's not surprising to see intermediate versions. Thanks for pointing that one out. I still stand by the statement for the common formulation of the theorem. We're hitting the fuzzy lines between what counts as an alternate formulation of the same theorem, a corollary, or a distinct theorem.
Would supporting open immigration count as part of effective altruism?
For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main uncertainty is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. (Based on Table 2 of Clemens (2011).)