This is a simple picture proof that if there is any decision process that will find a Pareto outcome for two people, then liars must prosper: there are some circumstances where you would come out ahead if you were to lie about your utility function.

Apart from Pareto, the only other assumption it needs is that if the setup is perfectly symmetric, then the outcome will be symmetric as well. We won't even need assumptions about affine transformations or other rescalings of the utility functions.

Now, given Pareto-optimality, symmetry allows us to solve symmetric problems by taking the unique symmetric Pareto option. Two such symmetric problems are presented here, and in at least one of them, one of the two players must be able to prosper by lying.

So first assume Pareto-optimality, symmetry, and (by contradiction) that liars don't prosper. The players are x and y, and we will plot their utilities in the (x,y) plane. The first setup is presented in this figure:

[Figure 1: the five choices of the first setup in the (x,y) utility plane; the blue segment joins (0,1) to (0.95,0.95) and the green segment joins (0.95,0.95) to (1,0).]

There are five pure choices here, with utilities (0,1), (0.95,0.95) and (1,0), plus the non-Pareto optimal ones at (0.6,0.6) and (0.55,0.55). By symmetry and Pareto-optimality, we know that the outcome has to be (0.95,0.95).
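
As a sanity check on that claim, here is a minimal Python sketch (purely illustrative; the helper name dominated_by_mixture and the brute-force grid are my own choices, not part of the argument). It tests whether each choice is beaten by some probabilistic mixture of choices; in two dimensions it is enough to test mixtures of pairs, since any dominating point can be pushed onto the upper-right boundary of the convex hull.

    def dominated_by_mixture(p, choices, steps=2001):
        """Brute-force grid check: is p weakly beaten on both axes, and strictly
        on at least one, by some probabilistic mixture of two of the choices?"""
        for a in choices:
            for b in choices:
                for k in range(steps):
                    t = k / (steps - 1)
                    q = (t * a[0] + (1 - t) * b[0],
                         t * a[1] + (1 - t) * b[1])
                    if q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1]):
                        return True
        return False

    setup1 = [(0, 1), (0.95, 0.95), (1, 0), (0.6, 0.6), (0.55, 0.55)]
    print([p for p in setup1 if not dominated_by_mixture(p, setup1)])
    # -> [(0, 1), (0.95, 0.95), (1, 0)]; the only symmetric one is (0.95, 0.95).

The grid check is crude, but the margins in these examples are far larger than the grid resolution, so it is good enough for illustration.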

Now player y is going to lie. If player y can force the outcome off the green line and onto the blue line, then he will profit by lying. He is going to claim that the choice (0.95,0.95) actually gives him a utility of only 0.4, and so sits at (0.95, 0.4). This results in this diagram:

[Figure 2: the same five choices with y's claimed utilities: (0.95,0.95) replaced by (0.95,0.4).]

The Pareto optimal boundary of this is:

[Figure 3: the Pareto-optimal boundary of the claimed setup: a blue segment from (0,1) to (0.95,0.4) and a green segment from (0.95,0.4) to (1,0).]

Now, the new outcome must be on the green segment somewhere (including the end points). Or else, as we have seen, player y will have profited by lying. Got that? If liars don't prosper, then the outcome for the above diagram must be on the green segment.
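
To see the "or else" step numerically (an illustrative sketch, not part of the proof): every point on the blue segment is a mixture of (0, 1) and the claimed (0.95, 0.4), and that claimed choice is really worth 0.95 to y, so y's genuine utility anywhere on the blue segment is at least 0.95.

    # y's genuine utility along the blue segment of the lied-about diagram.
    for t in [0.0, 0.25, 0.5, 0.75, 1.0]:   # weight on the claimed (0.95, 0.4)
        fake_y = t * 0.40 + (1 - t) * 1.0
        true_y = t * 0.95 + (1 - t) * 1.0   # = 1 - 0.05*t, always >= 0.95
        print(t, round(fake_y, 3), round(true_y, 3))
    # Strictly inside the blue segment y beats the honest 0.95, so an outcome
    # there would mean y had profited by lying.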

Now let's consider a new setup, namely:

[Figure 4: the second setup: choices (1,0), (0.55,0.55), (0,1), (0.4,0.6) and (0.6,0.4) in the (x,y) utility plane.]

This has choices with utilities (1,0), (0.55,0.55) and (0,1), plus the non-Pareto optimal choices (0.4,0.6) and (0.6,0.4) (these two are not beaten by any single pure choice, but they are beaten by probabilistic mixtures of the others). The setup is symmetric, so by symmetry and Pareto-optimality the outcome must be (0.55, 0.55).
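
Reusing the illustrative dominated_by_mixture helper sketched above, the same check applied to this setup confirms which points survive:

    setup2 = [(1, 0), (0.55, 0.55), (0, 1), (0.4, 0.6), (0.6, 0.4)]
    print([p for p in setup2 if not dominated_by_mixture(p, setup2)])
    # -> [(1, 0), (0.55, 0.55), (0, 1)]; the only symmetric one is (0.55, 0.55).
    # e.g. 0.8*(0.55, 0.55) + 0.2*(0, 1) = (0.44, 0.64), which beats (0.4, 0.6).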

Now it's player x's chance to lie. She will lie on two of her choices, claiming that (0.4,0.6) is actually at (0.6,0.6) and that (0.6,0.4) is actually at (0.95, 0.4):

[Figure 5: the second setup with x's claimed utilities: (0.4,0.6) shown at (0.6,0.6) and (0.6,0.4) shown at (0.95,0.4); the same point set as in Figures 2 and 3.]

You will no doubt be astounded and amazed to realise that this setup is precisely the same as in the third figure! Now, we know that the outcome for that must lie along the green line between (1,0) and (0.95,0.4). Translating that green line back into the real utility for x, you get:

[Figure 6: the green segment translated back into x's real utilities, running from (1,0) to (0.6,0.4).]

Any point on that line is better, from x's perspective, than the standard outcome (0.55,0.55) (she will get at least 0.6 in utility on the green line). So, if we accept Pareto-optimality and symmetry, then one of the players has to be able to profit by lying in certain situations.
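
To spell out the arithmetic (an illustrative check, not part of the original argument): every point on the fake green segment is a mixture of the claimed (0.95, 0.4) and (1, 0), and for x the claimed (0.95, 0.4) is really (0.6, 0.4).

    # x's real utility along the fake green segment.
    for t in [0.0, 0.25, 0.5, 0.75, 1.0]:   # weight on the claimed (0.95, 0.4)
        fake_x = t * 0.95 + (1 - t) * 1.0
        real_x = t * 0.60 + (1 - t) * 1.0   # = 1 - 0.4*t, always >= 0.6 > 0.55
        print(t, round(fake_x, 3), round(real_x, 3))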

Pareto-optimality is required: if you waive that condition, then some non-Pareto solutions such as "flip a coin, and the winner gets to decide the outcome" do not allow liars to prosper.

33 comments

The other assumption is that the liar can make himself believed. Which means there is another assumption of imperfect information across the game.

I wish there was a better science of the economics of imperfect information, or if there is that I would know about it. It seems likely that a very important part of the trading on securities, investment, and derivative markets is completely driven by imperfect information, by information asymmetries or at minimum belief asymmetries.

And then what of lying in multiplayer games? I think of politics, where I get so frustrated that politicians can say the most blatantly untrue things, and I can see them as such, but those things gain them votes.

The other assumption is that the liar can make himself believed. Which means there is another assumption of imperfect information across the game.

I'd prefer to see it as "there is an incentive to lie, if they can get away with it".

I wish there was a better science of the economics of imperfect information, or if there is that I would know about it.

Mechanism design? BTW, this is a special case of a mechanism design result known as the Myerson-Satterthwaite theorem. It's not a novel result by any means. The Gibbard-Satterthwaite theorem has a similar flavor, though it is in fact a separate result--for one thing, it applies to voting systems with three or more agents.

It seems likely that a very important part of the trading on securities, investment, and derivative markets is completely driven by imperfect information, by information asymmetries or at minimum belief asymmetries.

Sure. Most people involved in the financial markets reasonably closely approximate utility-maximizing agents with utility defined in money. So with the exception of when someone has to raise cash to pay for some external expense, every trade in the financial markets is one where the seller thinks what he has is worth less than what he's selling it for, and the buyer thinks it's worth more than what he's paying. (A trade itself has various transaction costs, so if you think it's really an even swap, it's automatically a loss to trade.) So the vast majority of trading activity is directly a matter of belief asymmetries.

(One way Warren Buffett makes his money, by the way? Berkshire Hathaway has a pattern and practice of watching for profitable small businesses that have to be sold by the heirs to cover the external expense of estate taxes. This exploits the fact that small business heirs, unlike heirs of diversified stock portfolios, do not generally have access to efficient, highly-competitive markets for their inherited assets. It's one of the few ways to make money on the financial markets without having to have consistently more-accurate beliefs about future prices than the other participants in the market; you instead buy assets for a price lower than both parties think they're worth, but which the selling party cannot practically refuse to sell.)

I think of politics, where I get so frustrated that politicians can say the most blatantly untrue things, and I can see them as such, but those things gain them votes.

That's only confusing if you are conflating professing and cheering. It's not about convincing anyone of policy issues, it's about convincing people that you're on their team. I agree that it can certainly be frustrating that this is how people work...

Looks correct and very nice! I tried to think of a simpler proof, but couldn't find any.

Does the liar need to know the other player's utility function in order to lie correctly? I have seen this idea before in a talk about cake-cutting algorithms -- but there, the liar risked ending up worse off if she mis-estimated the utility functions of the other players.

Does the liar need to know the other player's utility function in order to lie correctly?

Yes. He also needs to know the option set.

I get red x's instead of the figures.

Hopefully solved now.

Looks good! Nice use of the x and y labels.

I can't seem to see your images. I just see the filenames. Are they working for you, or is this a problem on my end?

They were working for me, but I tried reloading them in a different format now. Hope it works for you now!

Confirmed, the new format does make the images visible for me.

Introducing liars breaks the symmetry, so that they could just as well be bargaining about which mixed (and so non-symmetric) solution on the original Pareto frontier to play.

Introducing liars breaks the symmetry

I don't understand this comment. The decision procedure is specified in terms of the players' stated utility values, which can already contain lies. It seems reasonable to demand that the procedure should yield a symmetric outcome when given symmetric input.

If lies are seen as strategic considerations, they should be part of the decision problem. I agree that technically we can limit the scope of the official decision to something symmetric, but allowing non-symmetric things to affect this setup seems sufficiently similar to allowing non-symmetric things to happen within the setup, which makes the motivation for Stuart's construction unclear to me.

I still don't understand. The idea that the (possibly symmetric) outcome must not make unilateral deviations profitable is just the idea of Nash equilibrium. Do you think it shouldn't be used?

Interesting!

"Wave" should be "waive" in the last line.

Retracted:

More substantively, I don't think I believe this claim:

Now, the new outcome must be on the green segment somewhere (including the end points). Or else, as we have seen, player y will have profited by lying.

Player y would gain .95 utility by being honest. Most of the blue segment is below y=.95.

Edit: I could easily be missing something, but I think this invalidates the proof. Your statement about the blue line in diagram 1 does not hold for diagram 2, but your conclusion depends on it. The outcome (.5,.6) doesn't break any of your rules, but doesn't reward liars.

Remember that player y is lying: the blue segment lies below y=0.95, but only for the fake values that y is claiming. In actual fact, that blue line is always above 0.95 (you can see this on the first diagram).

Possibly my confusion lies in the way values are being re-normalized after player y lies.

In diagram 2, consider the outcome (.5,.6). Even if we re-normalize that outcome by multiplying by the sum of y's real utilities and dividing by the sum of y's fake utilities, .6 * (3.1 / 2.55) =~ .73, well below the default outcome of .95. Am I doing that wrong?

There's no need to renormalise: any outcome on the blue line is a probabilistic mixture between the (0,1) and (0.95,0.95) choices (to use the genuine utilities of these outcomes). This is better for y than the pure (0.95,0.95) option.

Oh, I see. That's why the straight lines are significant: they show that no mixture involving the (.6,.6) point is optimal. Thanks for explaining.

Why not just state that the (0,1) point actually lies on (2,2), and therefore is the best choice?

He can only lie about how much he values the point - not about how much the other player values it.

I may be missing something: for Figure 5, what motivation does Y have to go along with perceived choice (0.95, 0.4), given that in this situation Y does not possess the information possessed (and true) in the previous situation that '(0.95, 0.4)' is actually (0.95, 0.95)?

In Figure 2, (0.6, 0.6) appears symmetrical and Pareto optimal to X. In Figure 5, (0.6, 0.6) appears symmetrical and Pareto optimal to Y. In Figure 2, X has something to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6) and Y has something to gain by choosing/{allowing the choice of} (0.95, 0.95) over (0.6, 0.6), but in Figure 5, while X has something to gain by choosing/{allowing the choice of} (0.6, 0.4) over (0.5, 0.5), Y has nothing to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6).

Is there a rule(/process) that I have overlooked?

Going through the setup again, it seems as though in the first situation (0.95, 0.95) would be chosen while looking to X as though Y was charitably going with (0.95, 0.4) instead of insisting on the symmetrical (0.6, 0.6), and that in the second situation Y would insist on the seemingly-symmetrical-and-(0.6, 0.6) (0.4, 0.6) instead of going along with X's desired (0.6, 0.4) or even the actually-symmetrical (0.5, 0.5) (since that would appear {non-Pareto optimal}/{Pareto suboptimal} to Y).

As Stuart_Armstrong explains to me on a different thread, the decision process isn't necessarily picking one of the discrete outcomes, but can pick a probabilistic mixture of outcomes. (.6,.6) doesn't appear Pareto-optimal because it's dominated by, e.g., selecting (.95, .4) with probability p=.6/.95 and (0,1) with probability 1-p.
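
An illustrative numerical check of that mixture (the variable names are mine):

    p = 0.6 / 0.95                      # probability on (0.95, 0.4)
    mix = (p * 0.95 + (1 - p) * 0.0,    # x-coordinate: 0.6 (up to rounding)
           p * 0.40 + (1 - p) * 1.0)    # y-coordinate: about 0.621
    print(mix)   # roughly (0.6, 0.621), which dominates (0.6, 0.6)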

The point of the proof is that if there is an established procedure that takes as input people's stated utilities about certain choices, and outputs a Pareto outcome, then it must be possible to game it by lying. The motivations of the players aren't taken into account once their preferences are stated.

Rather than X or Y succeeding at gaming it by lying, however, it seems that a disinterested objective procedure that selects by Pareto optimalness and symmetry would then output a (0.6, 0.6) outcome in both cases, causing a -0.35 utility loss for the liar in the first case and a -0.1 utility loss for the liar in the second.

Is there a direct reason that such an established procedure would be influenced by a perceived (0.95, 0.4) option to not choose an X=Y Pareto outcome? (If this is confirmed, then indeed my current position is mistaken. )

(0.6, 0.6) is not Pareto. The "equal Pareto outcome" is the point (19/31, 19/31), which is about (0.61, 0.61). This is a mixed outcome: the weighted sum of (0,1) and (0.95,0.4) with weights 11/31 and 20/31. In reality, using y's genuine utility, this would be (11/31)(0,1) + (20/31)(0.95,0.95) = (19/31, 30/31), giving y a utility of about 0.97, greater than the 0.95 he would have got otherwise.
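
An illustrative check of those numbers (variable names are mine):

    w = 20 / 31                                  # weight on the claimed (0.95, 0.4)
    fake = (w * 0.95, (1 - w) * 1 + w * 0.40)    # = (19/31, 19/31), about (0.613, 0.613)
    true = (w * 0.95, (1 - w) * 1 + w * 0.95)    # = (19/31, 30/31); y gets about 0.968 > 0.95
    print(fake, true)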

(Assuming that it stays on the line of 'what is possible', in any case a higher Y than otherwise, but finding it then according to the constant X: 1 - ((19/31) * (1/19)) = 30/31, yes...)

I confess I do not understand the significance of the terms mixed outcome and weighted sum in this context, I do not see how the numbers 11/31 and 20/31 have been obtained, and I do not presently see how the same effect can apply in the second situation in which the relative positions of the symmetric point and its (Pareto?) lines have not been shifted, but I now see how in the first situation the point selected can be favourable for Y! (This convinced me of the underlying concept that I was doubtful of.) Thank you very much for the time taken to explain this to me!