From the last thread:
From Costanza's original thread (entire text):
"This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant."
Meta:
- How often should these be made? I think one every three months is the correct frequency.
- Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.
Meta:
- I still haven't figured out a satisfactory answer to the previous meta question of how often these threads should be made. It was requested that I make a new one, so I did.
- I promise I won't quote the entire previous threads from now on. Blockquoting in articles only goes one level deep, anyway.
Good point; haggling is a clear example of the fuzzy boundary between threats and trade.
If A is willing to sell a widget for any price above $10, and B is willing to buy a widget for any price below $20, and there are no other buyers or sellers, then for any price X strictly between $10 and $20, A saying "I won't sell for less than X" and B saying "I won't sell for more than X" are both threats under my model.
Which means that agents that "naively" precommit to never respond to any threats (the way I understand them) will not reach an agreement when haggling. They'll also fail at the Ultimatum game.
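To make the deadlock concrete, here's a minimal sketch (my own illustration; only the $10/$20 reservation prices come from the example above, everything else is assumed) showing that any price X strictly between the two reservation values counts as a threat from both sides under this model, so agents precommitted to ignore all threats can never settle on one:

```python
# Toy model of the haggling example: A sells above $10, B buys below $20.
A_RESERVATION = 10.0   # A will sell for any price above $10
B_RESERVATION = 20.0   # B will buy for any price below $20

def a_statement_is_threat(x):
    # "I won't sell for less than X": a conditional refusal to trade at
    # mutually beneficial prices whenever X is above A's true reservation.
    return x > A_RESERVATION

def b_statement_is_threat(x):
    # "I won't pay more than X": likewise a threat whenever X is below
    # B's true reservation.
    return x < B_RESERVATION

for x in [10.0, 12.5, 15.0, 17.5, 20.0]:
    deadlock = a_statement_is_threat(x) and b_statement_is_threat(x)
    print(f"X = {x:5.1f}: A threatens={a_statement_is_threat(x)}, "
          f"B threatens={b_statement_is_threat(x)}, "
          f"naive no-threat agents deadlock={deadlock}")

# Every X strictly between $10 and $20 (i.e. every price both parties would
# actually accept) is a threat from both sides, so the precommitment rules
# out every possible deal even though $10 of surplus is on the table.
```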
So there needs to be a better model for threats, possibly one that takes Schelling points into account; or maybe there should be a special category for "the kind of threats it's beneficial to precommit to ignore".
Hmm, the pre-commitment to ignore threats would depend on other agents and their pre-pre-commitment to ignore pre-commitments. It just goes recursive, like Sherlock Holmes vs. Moriarty, and when you go meta and try to look for a 'limit' of the recursion, it goes recursive again. I have a feeling that it is inherently a rock-paper-scissors situation where you can't cheat like this robot. (I.e., I would suggest, at that point, trying to produce a bunch of impossibility proofs to narrow expectations down somewhat.)
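For intuition on why chasing the "limit" of that recursion might never settle, here's a toy sketch (my own analogy, not from the comment above) of pure best-response dynamics in rock-paper-scissors, which simply cycle and never reach a fixed point:

```python
# Each move is a "precommitment"; responding to it means going one meta-level
# up, which the other side can respond to in turn, forever.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def best_response(opponent_move):
    # The move that beats the opponent's current (pre)commitment.
    return BEATS[opponent_move]

move = "rock"          # some initial precommitment
history = []
for _ in range(6):
    history.append(move)
    move = best_response(move)   # respond to the response, one level up

print(" -> ".join(history))
# rock -> paper -> scissors -> rock -> paper -> scissors
```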