From the last thread:
From Costanza's original thread (entire text):
"This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant."
Meta:
- How often should these be made? I think one every three months is the correct frequency.
- Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.
Meta:
- I still haven't figured out a satisfactory answer to the previous meta question of how often these threads should be made. It was requested that I make a new one, so I did.
- I promise I won't quote the entire previous threads from now on. Blockquoting in articles only goes one level deep, anyway.
Hmm, the pre-commitment to ignore would depend on other agents and their pre-pre-commitments to ignore pre-commitments. It just goes recursive, like Sherlock Holmes vs. Moriarty, and when you go meta and try to look for the 'limit' of the recursion, it goes recursive again... I have a feeling that it is inherently a rock-paper-scissors situation where you can't cheat like this robot. (I.e., I would suggest, at that point, trying to make a bunch of impossibility proofs to narrow expectations down somewhat.)
It's not possible to coordinate in general against arbitrary opponents, just as it's impossible to predict what an arbitrary program does, but it's advantageous for players to eventually coordinate their decisions (on some meta-level of precommitment). On one hand, players want to set prices their way, but on the other they want to close the trade eventually, and this tradeoff keeps the outcome away from both extremes ("unfair" prices and the impossibility of trade). Players have an incentive to set up some kind of Löbian cooperation (as in these posts), whi…
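A toy sketch of the simplest variant of this idea, sometimes called a "CliqueBot" (my illustration, not from the linked posts): instead of trying to out-simulate the opponent's reasoning (the Holmes-vs-Moriarty regress above), each player submits a program that reads the opponent's source code and cooperates only if it is textually identical to its own. Full Löbian cooperation replaces this brittle syntactic check with a provability check, but the skeleton is the same. Names like `CLIQUEBOT` and the `decide`/`run` interface are assumptions for the sketch.

```python
# CliqueBot: cooperate iff the opponent submitted the exact same
# program text. This is the syntactic-equality stand-in for the
# provability check used in Löbian cooperation.
CLIQUEBOT = '''
def decide(my_source, opponent_source):
    # Cooperate ("C") only against a textual copy of myself.
    return "C" if opponent_source == my_source else "D"
'''

# DefectBot: ignores the opponent's source entirely.
DEFECTBOT = '''
def decide(my_source, opponent_source):
    return "D"  # Always defect.
'''

def run(src_a, src_b):
    """Play one Prisoner's Dilemma round between two submitted programs.

    Each program is exec'd in its own namespace and its decide()
    function is given both sources, so it can inspect its opponent.
    """
    env_a, env_b = {}, {}
    exec(src_a, env_a)
    exec(src_b, env_b)
    return env_a["decide"](src_a, src_b), env_b["decide"](src_b, src_a)

print(run(CLIQUEBOT, CLIQUEBOT))  # ('C', 'C') — mutual cooperation
print(run(CLIQUEBOT, DEFECTBOT))  # ('D', 'D') — CliqueBot punishes defection
```

The syntactic check is unexploitable (defecting against CliqueBot never earns the cooperation payoff) but fragile: any cooperator whose source differs by a single character is treated as a defector, which is exactly the gap the Löbian, proof-based constructions are meant to close.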