
Comment author: Toggle 10 December 2014 08:23:06PM 1 point [-]

This looks very useful. Thanks!

Another one of those interesting questions is whether the pricing system must be equivalent to currency exchange. To what extent are the traditional modes of transaction a legacy of the limitations behind physical coinage, and what degrees of freedom are offered by ubiquitous computation and connectivity? Etc. (I have a lot of questions.)

Comment author: badger 10 December 2014 09:09:08PM 1 point [-]

Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.

Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from "equal incomes".

What benefits do you think a different system might provide, or what problems does monetary exchange have that you're trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.

Comment author: Toggle 09 December 2014 10:37:17PM *  2 points [-]

Maneki Neko is a short story about an AI that manages a kind of gift economy. It's an enjoyable read.

I've been curious about this 'class' of systems for a while now, but I don't think I know enough about economics to ask the questions well. For example, the story supplies a superintelligence to function as a competent central manager, but could such a gift network theoretically exist without being centrally managed (and without trivially reducing to modern forms of currency exchange)? Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis? And so on.

In particular, I'm looking for the intellectual tools that would be used to ask these questions in a more rigorous way; it would be great if I had better ways of figuring out which of these questions are obviously stupid and which are not. Specific disciplines in economics or game theory, perhaps. Things along the lines of LW's Mechanism Design sequence would be fantastic. Can anyone give me a few pointers?

Comment author: badger 10 December 2014 07:35:24PM 5 points [-]

My intuition is that every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (e.g. the down-on-his-luck guy unexpectedly gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.

Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they'll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it'd feel like a great gift even if there are auctions and payments behind the scenes.
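
To make the matching step concrete, here's a rough Python sketch of the auction behind the scenes (the names, prices, and lowest-offer rule are all made up for illustration, not anything from the story):

    # Toy reverse auction: my agent broadcasts a request, nearby agents
    # respond with offers (the price they'd need to fulfill it), and the
    # lowest offer wins. Everything here is illustrative.
    def run_request(request, offers):
        """offers: dict mapping agent name -> asking price for fulfilling `request`."""
        if not offers:
            return None  # nobody nearby is willing to help
        winner = min(offers, key=offers.get)
        return winner, offers[winner]

    offers = {
        "neighbor_in_soup_shop": 4.50,  # already at the shop, barely out of their way
        "friend_across_town": 12.00,    # would have to make a special trip
    }
    print(run_request("hot soup", offers))  # -> ('neighbor_in_soup_shop', 4.5)
    # My agent then sends the agreed-upon payment once the soup arrives.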

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.

Comment author: bentarm 10 May 2014 12:00:06AM 0 points [-]

The Revelation Principle feels like one of those results that flip-flops between trivially obvious and absurdly powerful... I'm currently in an "absurdly powerful" frame of mind.

I guess the principle is mostly useful for impossibility results? Given an arbitrary mechanism, will you usually be able to decompose it to find the associated incentive compatible mechanism?

Comment author: badger 23 November 2014 02:10:01PM 1 point [-]

I'm on board with "absurdly powerful". It underlies the bulk of mechanism design, to the point my advisor complains we've confused it with the entirety of mechanism design.

The principle gives us the entire set of possible outcomes for some solution concept like dominant-strategy equilibrium or Bayes-Nash equilibrium. It works for any search over the set of outcomes, whether that leads to an impossibility result or a constructive result like identifying the revenue-optimal auction.

Given an arbitrary mechanism, it's easy (in principle) to find the associated IC direct mechanism(s). The mechanism defines a game, so we solve the game and find the equilibrium outcomes for each type profile. Once we've found that, the IC direct mechanism just assigns the equilibrium outcome directly. For instance, if everyone's equilibrium strategy in a pay-your-bid/first-price auction was to bid 90% of their value, the direct mechanism assigns the item to the person with the highest value and charges them 90% of their value. Since a game can have multiple equilibria, we have one IC mechanism per outcome. The revelation principle can't answer questions like "Is there a mechanism where every equilibrium (as opposed to some equilibrium) gives a particular outcome?"
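
To make the construction concrete, here's a rough sketch in Python, taking the comment's hypothetical equilibrium (everyone bids 90% of their value) as given rather than deriving it:

    # The original (indirect) mechanism: a first-price auction.
    def first_price_outcome(bids):
        """Highest bid wins; the winner pays their own bid."""
        winner = max(range(len(bids)), key=lambda i: bids[i])
        return winner, bids[winner]

    # Assumed equilibrium strategy from the example: bid 90% of your value.
    def equilibrium_bid(value):
        return 0.9 * value

    # The associated IC direct mechanism: ask for values, then hand out the
    # equilibrium outcome of the original game at those reported values.
    def direct_mechanism(values):
        bids = [equilibrium_bid(v) for v in values]
        return first_price_outcome(bids)

    # Reporting values truthfully reproduces the first-price equilibrium outcome:
    print(direct_mechanism([3.0, 7.0, 5.0]))  # -> (1, 6.3): bidder 1 wins, pays 90% of 7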

Comment author: Lumifer 17 November 2014 07:42:32PM 9 points [-]

an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars)

I am having strong doubts about this number. The paper cited is long on handwaving and seems to be entirely too fond of expressions like "should make economists’ jaws hit their desks" and "there appear to be trillion-dollar bills on the sidewalk". In particular, there is the pervasive assumption that people are fungible so transferring a person from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy immediately nets you $45,000 in additional GDP. I don't think this is true.

Comment author: badger 17 November 2014 09:29:29PM 1 point [-]

The paper cited is handwavy and conversational because it isn't making original claims. It's providing a survey for non-specialists. The table I mentioned is a summary of six other papers.

Some of the studies assume workers from poorer countries are permanently 1/3rd or 1/5th as productive as native workers, so the estimate is based on something more like: a person transferred from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy is able to produce $10-15K in value.

Comment author: NancyLebovitz 17 November 2014 03:51:49PM 7 points [-]

Would supporting open immigration count as part of effective altruism?

Comment author: badger 17 November 2014 07:19:41PM 1 point [-]

For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main question is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. These figures are based on Table 2 of Clemens (2011).

Comment author: owencb 28 September 2014 05:20:33PM 1 point [-]

Giving them different exponents in the Nash product has some appeal, except that it does seem like NBS without modification is correct in the two-delegate case (where the weight assigned to the different theories is captured properly by the fact that the defection point is more closely aligned with the view of the theory with more weight). If we don't think that's right in the two-delegate case we should have some account of why not.

Comment author: badger 28 September 2014 06:35:56PM 1 point [-]

The issue is when we should tilt outcomes in favor of higher credence theories. Starting from a credence-weighted mixture, I agree theories should have equal bargaining power. Starting from a more neutral disagreement point, like the status quo actions of a typical person, higher credence should entail more power / votes / delegates.

On a quick example, equal bargaining from a credence-weighted mixture tends to favor the lower credence theory compared to weighted bargaining from an equal status quo. If the total feasible set of utilities is {(x,y) | x^2 + y^2 ≤ 1; x,y ≥ 0}, then the NBS starting from (0.9, 0.1) is about (0.95, 0.28) and the NBS starting from (0,0) with theory 1 having nine delegates (i.e. an exponent of nine in the Nash product) and theory 2 having one delegate is (0.98, 0.16).

If the credence-weighted mixture were on the Pareto frontier, the two approaches would be equivalent.

Comment author: owencb 28 September 2014 03:33:51PM 1 point [-]

I think the Nash bargaining solution should be pretty good if there are only two members of the parliament, but it's not clear how to scale up to a larger parliament.

Comment author: badger 28 September 2014 04:35:16PM 2 points [-]

For the NBS with more than two agents, you just maximize the product of everyone's gain in utility over the disagreement point. For Kalai-Smorodinsky, you continue to equate the ratios of gains, i.e. pick the point on the Pareto frontier that lies on the line between the disagreement point and the vector of ideal utilities.

Agents could be given more bargaining power by giving them different exponents in the Nash product.
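
To see how delegate counts act as exponents, here's a minimal sketch on the simplest feasible set (splitting one unit of transferable utility, so the frontier is linear; the closed form below only holds in that case):

    # Weighted Nash bargaining when agents split one unit of surplus
    # (feasible set: sum of utilities <= 1). Maximizing the weighted Nash
    # product prod_i (x_i - d_i)**w_i gives each agent their disagreement
    # payoff plus a weight-proportional share of the remaining surplus.
    def weighted_nbs(disagreement, weights):
        surplus = 1.0 - sum(disagreement)
        total_w = sum(weights)
        return [d + (w / total_w) * surplus for d, w in zip(disagreement, weights)]

    # Three theories, equal power, bargaining from scratch:
    print(weighted_nbs([0.0, 0.0, 0.0], [1, 1, 1]))  # -> [0.33.., 0.33.., 0.33..]
    # Same theories, but the first gets three delegates (exponent 3 in the product):
    print(weighted_nbs([0.0, 0.0, 0.0], [3, 1, 1]))  # -> [0.6, 0.2, 0.2]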

Comment author: owencb 28 September 2014 03:36:13PM 3 points [-]

I think there's a fairly natural disagreement point here: the outcome with no trade, which is just a randomisation of the top options of the different theories, with probability according to the credence in that theory.

One possibility to progress is to analyse what happens here in the two-theory case, perhaps starting with some worked examples.

Comment author: badger 28 September 2014 04:22:47PM *  1 point [-]

Alright, a credence-weighted randomization between ideals and then bargaining on equal footing from there makes sense. I was imagining the parliament starting from scratch.

Another alternative would be to use a hypothetical disagreement point corresponding to the worst utility for each theory, while giving higher-credence theories more bargaining power. Or keep the extra bargaining power but start from a typical person's life as the disagreement point (the outcome can't be worse for any theory than a policy of being kind to your family, giving to socially-motivated causes, cheating on your taxes a little, telling white lies, and not murdering).

Comment author: gjm 28 September 2014 09:44:06AM 9 points [-]

Consider the following degenerate case: there is only one decision to be made, and your competing theories assess it as follows.

  • Theory 1: option A is vastly worse than option B.
  • Theory 2: option A is just a tiny bit better than option B.

And suppose you find theory 2 just slightly more probable than theory 1.

Then it seems like any parliamentary model is going to say that theory 2 wins, and you choose option A. That seems like a bad outcome.

Accordingly, I suggest that to arrive at a workable parliamentary model we need to do at least one of the following:

  • Disallow degenerate cases of this kind. (Seems wrong; e.g., suppose you have an important decision to make on your deathbed.)
  • Bite the bullet and say that in the situation above you really are going to choose A over B. (Seems pretty terrible.)
  • Take into account how strongly the delegates feel about the decision, in such a way that you'd choose B in this situation. (Handwavily it feels as if any way of doing this is going to constrain how much "tactical" voting the delegates can engage in.)

As you might gather, I find the last option the most promising.

Comment author: badger 28 September 2014 01:30:35PM 1 point [-]

I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we're doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.

Two outcomes is too degenerate if agents get their own scales, so suppose A, B, and C are the options, theory 1 has ordinal preferences B > C > A, and theory 2 has preferences A > C > B. Depending on how much of a compromise C is for each agent, the outcome could vary between

  • choosing C (say if C is 99% as good as the ideal for each agent),
  • a 50/50 lottery over A and B (if C is only 1% better than the worst for each), or
  • some other lottery (for instance, if theory 1 thinks C achieves 90% of B and theory 2 thinks C achieves 40% of A, then a lottery with weight 2/3 on C and 1/3 on A gives each of them 60% of the gain between their best and worst; see the quick check below).
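
A quick check of that last case, putting each theory's utilities on its own 0-1 scale:

    # Theory 1: A = 0, C = 0.9, B = 1.   Theory 2: B = 0, C = 0.4, A = 1.
    u1 = {"A": 0.0, "B": 1.0, "C": 0.9}
    u2 = {"A": 1.0, "B": 0.0, "C": 0.4}
    lottery = {"C": 2 / 3, "A": 1 / 3}

    def expected(u, lottery):
        return sum(p * u[option] for option, p in lottery.items())

    print(expected(u1, lottery), expected(u2, lottery))  # -> 0.6 0.6
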
Comment author: badger 27 September 2014 11:12:40PM 7 points [-]

My reading of the problem is that a satisfactory Parliamentary Model should:

  • Represent moral theories as delegates with preferences over adopted policies.
  • Allow delegates to stand up for their theories and bargain over the final outcome, extracting concessions on vital points while letting other policies slide.
  • Restrict delegates' use of dirty tricks or deceit.

Since bargaining in good faith appears to be the core feature, my mind immediately goes to models of bargaining under complete information rather than voting. What are the pros and cons of starting with the Nash bargaining solution as implemented by an alternating offer game?

The two obvious issues are how to translate delegates' preferences into utilities and what the disagreement point is. Assuming a utility function is fairly mild if the delegate has preferences over lotteries. Plus, there's no utility comparison problem even though you need cardinal utilities. The lack of a natural disagreement point is trickier. What intuitions might be lost going this route?
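
For a feel of the "alternating offer game" part, here's a minimal sketch of the textbook Rubinstein split of a unit pie with a common discount factor (the disagreement point is implicitly both sides getting nothing; the exact formula is specific to this setup):

    # Rubinstein alternating offers: in the unique subgame-perfect equilibrium,
    # the proposer keeps (1 - delta) / (1 - delta**2) = 1 / (1 + delta).
    def proposer_share(delta):
        return (1 - delta) / (1 - delta ** 2)

    for delta in (0.5, 0.9, 0.99, 0.999):
        print(delta, round(proposer_share(delta), 4))
    # As delta -> 1 (offers can be exchanged very quickly), the split tends to
    # 1/2, which is the symmetric Nash bargaining solution for this problem.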
