
Emile comments on Stupid Questions Open Thread Round 3 - Less Wrong Discussion

8 Post author: OpenThreadGuy 07 July 2012 05:16PM


Comment author: Emile 08 July 2012 08:21:05AM *  2 points [-]

If we avoid the overloaded term "blackmail" and talk of threats vs. trade, Angela is threatening you whereas Julia is offering a trade. I agree that this example shows that "makes you suffer" is not the distinguishing element. It's also interesting that you may not know whether the situation is a threat or a trade (you may not know whether the mistress wants to tell your wife anyway).

Comment author: private_messaging 08 July 2012 11:26:05AM *  5 points [-]

I'm not sure threats and trade form a real dichotomy rather than two fuzzy categories. Suppose I buy food. That's basic trade. But a monopoly could raise the price of food a lot, and I would still have to buy it, and now it is trade under threat of starvation.

I can go fancy(N) and say: I won't pay more than X for food; I would rather starve to death, and then they get no more of my money. If I can make that credible, and the monopoly reasons in the fancy(N-1) manner, they won't raise the price above X, because I won't pay. But if the monopoly reasons in the fancy(N) manner, it does the exact same reasoning and concludes that it should ignore my threat to starve myself to death rather than pay.

Most human agents seem to play tit for tat and mirror whatever you are doing, so if you reason "I'll just starve myself to death rather than pay", they reason "I'll just raise the price regardless, and to hell with whether he pays". The mirroring agent is not just blackmail-resistant but blackmail-resistance-resistant.
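The fancy(N) regress above can be sketched in a few lines of code. This is a toy model I'm adding for illustration, not something from the thread: all names (`buyer_pays`, `seller_price`) and the numbers (X = 10, HIGH = 50) are made up, and "level" stands in for how many layers of "I know that you know" an agent models.

```python
# Toy sketch of the fancy(N) regress in the monopoly example.
# A level-0 buyer always pays; a fancy buyer refuses any price above
# its cap X, hoping the seller's model of it makes the threat credible.

X = 10      # the buyer's declared price cap
HIGH = 50   # the monopoly price above the cap

def buyer_pays(level, price):
    """Does a buyer, as modelled at this reasoning level, pay `price`?"""
    if level == 0:
        return True        # naive buyer: always pays
    # fancy buyer: refuses anything above X, counting on the seller
    # to anticipate the refusal and keep the price at X
    return price <= X

def seller_price(buyer_model_level):
    """A seller charges HIGH only if its model of the buyer still pays."""
    return HIGH if buyer_pays(buyer_model_level, HIGH) else X

# A seller that models the fancy buyer correctly keeps the price at X:
print(seller_price(1))   # 10
# A seller that models only a naive buyer raises the price anyway --
# and the fancy buyer then starves rather than pay, so both lose:
print(seller_price(0))   # 50
```

The point of the thread survives in the model: whether the threat "works" depends entirely on which model of you the other side runs, and nothing inside the model picks a level as the right one.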

Comment author: Vladimir_Nesov 08 July 2012 02:15:08PM 2 points [-]

I'm not sure threats and trade form a real dichotomy rather than two fuzzy categories.

This is my position as well: blackmail probably doesn't need to be considered as a separate case; reasonable behavior in such situations will probably just fall out of a sufficiently savvy bargaining algorithm.

Comment author: TheOtherDave 08 July 2012 02:42:10PM 0 points [-]

I agree with this, incidentally.

Comment author: Emile 08 July 2012 01:24:25PM 2 points [-]

Good point; haggling is a good example of a fuzzy boundary between threats and trade.

If A is willing to sell a widget for any price above $10, and B is willing to buy one for any price below $20, and there are no other buyers or sellers, then for any price X strictly between $10 and $20, A saying "I won't sell for less than X" and B saying "I won't pay more than X" are both threats under my model.

Which means that agents that "naively" precommit to never respond to any threats (the way I understand them) will not reach an agreement when haggling. They'll also fail at the Ultimatum game.

So there needs to be a better model for threats, possibly one that takes Schelling points into account; or maybe there should be a special category for "the kind of threats it's beneficial to precommit to ignore".
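The failure mode in the haggling example can be made concrete with a small sketch. This is my own illustrative code, not anything from the thread; the function name and round count are arbitrary, and "never respond to threats" is modelled crudely as "never move your stated price".

```python
# Toy model of Emile's example: a seller who won't take less than $10
# and a buyer who won't pay more than $20, where each side treats the
# other's stated price as a threat and refuses to concede toward it.

SELLER_MIN, BUYER_MAX = 10, 20   # reservation prices from the example

def haggle(seller_demand, buyer_offer, rounds=100):
    """Returns the agreed price, or None if no deal is reached."""
    for _ in range(rounds):
        if seller_demand <= buyer_offer:
            return seller_demand          # the prices overlap: deal
        # "Naive" threat-ignorers: neither side ever moves its price.
        # (A flexible bargainer would concede a little here.)
    return None

# Both stated prices lie in the mutually beneficial range (10, 20),
# yet the agents never close the gap:
print(haggle(seller_demand=18, buyer_offer=12))   # None
```

Any rule that lets at least one side concede would find a deal here, which is the comment's point: a blanket precommitment against responding to "threats" also rules out ordinary haggling.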

Comment author: private_messaging 08 July 2012 01:59:54PM *  1 point [-]

Hmm, the precommitment to ignore would depend on other agents and their pre-precommitment to ignore precommitments. It just goes recursive, like Sherlock Holmes vs. Moriarty, and when you go meta and try to look for a 'limit' of the recursion, it goes recursive again... I have a feeling that it is inherently a rock-paper-scissors situation where you can't cheat like this robot. (I.e. I would suggest, at that point, trying to make a bunch of impossibility proofs to narrow expectations down somewhat.)

Comment author: Vladimir_Nesov 08 July 2012 02:13:40PM *  2 points [-]

It's not possible to coordinate in general against arbitrary opponents, just as it's impossible to predict what an arbitrary program does, but it's advantageous for players to eventually coordinate their decisions (on some meta-level of precommitment). On one hand, players want to set prices their way; on the other, they want to close the trade eventually, and this tradeoff keeps the outcome from both extremes ("unfair" prices and impossibility of trade). Players have an incentive to set up some kind of Löbian cooperation (as in these posts), which stops the go-meta regress, although each will try to set the point where cooperation happens in their favor.

Comment author: private_messaging 08 July 2012 02:46:37PM -2 points [-]

I was thinking rather of a Halting-Problem-like impossibility, along with a rock-paper-scissors situation that prevents declaring any one strategy, even the cooperative one, as the 'best'.

Comment author: Vladimir_Nesov 08 July 2012 02:54:58PM *  2 points [-]

If difficulty of selecting and implementing a strategy is part of the tradeoff (so that more complicated strategies count as "worse" because of their difficulty, even if they promise an otherwise superior outcome), maybe there are "best" strategies in some sense, like there is a biggest natural number that you can actually write down in 30 seconds. (Such things would of course have the character of particular decisions, not of decision theory.)

Comment author: novalis 08 July 2012 05:24:30PM *  -2 points [-]

There is not a biggest natural number that you can actually write down in thirty seconds -- that's equivalent to Berry's paradox.

Comment author: Vladimir_Nesov 08 July 2012 09:08:21PM *  1 point [-]

Huh? Just start writing. The rule wasn't "the number you can define in 30 seconds", but simply "the number you can write down in 30 seconds". Like the number of strawberries you can eat in 30 seconds, no paradox there!

Comment author: novalis 08 July 2012 09:15:28PM -1 points [-]

I was reading "write down" more generally than "write down each digit of in base ten," but I guess that's not how you meant it.

Comment author: private_messaging 08 July 2012 03:52:41PM -2 points [-]

Hmm, if it were a programming contest, I would expect non-transitive 'betterness'.
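The non-transitivity being gestured at is easy to exhibit directly. The following is an illustrative sketch of my own (the `BEATS` table is just rock-paper-scissors itself), showing a set of strategies in which every strategy beats some other and loses to some other, so "better than" fails to order them.

```python
# Non-transitive "betterness": each strategy beats exactly one other
# and loses to exactly one other, so no strategy dominates.

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def result(a, b):
    """+1 if a beats b, -1 if b beats a, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

strategies = list(BEATS)
for s in strategies:
    # every strategy wins against someone and loses against someone
    assert any(result(s, t) == 1 for t in strategies)
    assert any(result(s, t) == -1 for t in strategies)
print("no strategy dominates")
```

In a programming contest over such a game, a leaderboard based on pairwise "beats" relations can cycle indefinitely, which is the point of the comment.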

Comment author: Vladimir_Nesov 08 July 2012 04:30:01PM 2 points [-]

Given a fixed state of knowledge about possible opponents and a finite number of feasible options for your decision, there will be maximal decisions, even if in an iterated contest the players could cycle their decisions against updated opponents indefinitely.
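The distinction in this comment can be sketched with a toy sketch of my own (again using rock-paper-scissors as the finite option set; `best_response` and the belief dictionary are illustrative names): against a *fixed* probabilistic belief about the opponent, expected payoff totally orders the options, so a maximal decision exists, even though updating opponents make the best choice cycle across iterations.

```python
# Against a fixed belief over opponent moves, a finite option set
# always has a maximal (best-response) decision.

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def payoff(a, b):
    return 0 if a == b else (1 if BEATS[a] == b else -1)

def best_response(belief):
    """belief: dict mapping opponent move -> probability."""
    return max(BEATS, key=lambda a: sum(p * payoff(a, b)
                                        for b, p in belief.items()))

# Fixed knowledge of the opponent yields a definite maximal decision:
print(best_response({"rock": 0.5, "paper": 0.3, "scissors": 0.2}))  # paper
# But an opponent who updates on that choice shifts the best response,
# and iterating this updating produces the indefinite cycling above.
```

The existence of a maximum here is a fact about one fixed decision problem; it says nothing about a single strategy being best across all opponents, which is the rock-paper-scissors objection.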