Sorry, I see now that I lost half a sentence in the middle. I agree that the notions of early/mid/late game don't map well onto real life, and I don't think there is a good way to make them do so. I then (meant to) propose the stages of a 4X game as perhaps mapping more cleanly onto one-shot games.
I think the most natural definitions are that the early game is the part you have memorized, the end game is where you can compute to the end (still doing pruning), and the mid game is the rest.
So e.g. in Scrabble the end game is where there are no tiles left, or few enough tiles in the bag that you can think through all (relevant) combinations of remaining tiles.
I think perhaps the phases of a 4X game map more cleanly onto one-shot games:
Explore: You gain information relevant to choosing which plan to execute.
Expand: The investment phase; you take actions that maximise your growth.
Exploit: You slowly start deprioritising growth as the time remaining grows shorter.
Exterminate: You go for your win condition.
The arguments in the Aumann paper in favor of dropping the completeness axiom are that it makes for a better theory of human/business/actually-existing reasoning, not that it makes for a better theory of ideal reasoning.
The paper seems to prove that any partial preference ordering which obeys the other axioms must be representable by a utility function, but that there will be multiple such representatives.
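For a concrete toy instance of those multiple representatives (my example, not the paper's), here is a minimal sketch:

```python
from itertools import permutations

# A toy partial preference order: A and B are incomparable,
# and both are strictly preferred to C.
strict = {("A", "C"), ("B", "C")}  # the only strict preferences

def represents(u):
    # u represents the partial order if it preserves every strict preference;
    # the incomparable pair A, B may be ordered either way.
    return all(u[better] > u[worse] for better, worse in strict)

# Enumerating utility assignments over {0, 1, 2} leaves two distinct
# representatives, one with A above B and one with B above A.
for values in permutations([0, 1, 2]):
    u = dict(zip(["A", "B", "C"], values))
    if represents(u):
        print(u)
# {'A': 1, 'B': 2, 'C': 0}
# {'A': 2, 'B': 1, 'C': 0}
```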
My claim is that either there will be a Dutch book, or your actions will be equivalent to the actions you would have taken by following one of those representative utility functions, in which case, even though the internals don't look like following a utility function, they are for the purposes of VNM.
But demonstrating this is hard, as it is unclear what actions correspond to the fact that A is incomparable to B.
The concrete examples of incomplete agents in the above either seem like they will act according to one of those representatives, or like they are easily Dutch-bookable.
I don't understand how you are using incompleteness. For example, to me the sentence
"agents can make themselves immune to all possible money-pumps for completeness by acting in accordance with the following policy: ‘if I previously turned down some option X, I will not choose any option that I strictly disprefer to X.’"
sounds like: "agents can avoid all money pumps for completeness by completing their preferences in a random way." Which is true, but doesn't seem like much of a challenge to completeness.
Can you explain what behavior is allowed under the first but isn't possible under my rephrasing?
Similarly, can we make explicit what behavior counts as two options being incomparable?
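For concreteness, here is a minimal sketch of how I read the quoted policy (my formalization, with a hypothetical option B- strictly worse than B and A incomparable to both); whether this matches the intended reading is exactly what I'm asking:

```python
# The quoted policy: remember every turned-down option, and never accept
# an option strictly dispreferred to any of them.
# Preferences (hypothetical): B- is strictly worse than B; A is incomparable to both.
strictly_worse = {("B-", "B")}

rejected = set()

def choose(offered, taken):
    """Take `taken` from `offered`, recording everything turned down."""
    rejected.update(offered - {taken})
    return taken

def permitted(option):
    # The policy: refuse anything strictly worse than a rejected option.
    return all((option, r) not in strictly_worse for r in rejected)

choose({"A", "B"}, "A")   # choosing A over B is fine: they are incomparable
print(permitted("B-"))    # False -- B was turned down, so B- is now off the table
```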
It seems to me that FDT has the property that you associate with the "ultimate decision theory".
My understanding is that FDT says that you should follow the policy which is attained by taking the argmax over all policies of the utility from following that policy (only including downstream effects of your policy).
In these easy examples your policy space is your space of committed actions, in which case the above seems to reduce to the "ultimate decision theory" criterion.
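Schematically (my notation, nothing official):

$$\pi^* \in \operatorname{argmax}_{\pi \in \Pi} U(\pi)$$

with $\Pi$ the policy space and $U(\pi)$ the utility of following $\pi$, counting only downstream effects; the reduction above is just taking $\Pi$ to be the set of committed actions.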
The assumptions made here are not time-reversible: the macrostate at time t+1 being deterministic given the macrostate at time t does not imply that the macrostate at time t is deterministic given the macrostate at time t+1.
So in this article the direction of time is given through the asymmetry of the evolution of macrostates.
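As a minimal sketch (a hypothetical three-macrostate system, my example, not the article's):

```python
# A forward-deterministic macrostate map that is many-to-one, so it cannot
# be run backwards: the asymmetry that gives time its direction here.
step = {"A": "C", "B": "C", "C": "C"}

# Forward: every macrostate has a unique successor.
print(step["A"], step["B"])                  # C C

# Backward: C's predecessor is ambiguous, so the reverse map is not a function.
print({m for m in step if step[m] == "C"})   # {'A', 'B', 'C'} (in some order)
```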
I think "book of X" can be usefully "translated" as beliefs about X.
The book of truth is not truth, just like the book of night is not night.
I think "book of names" can be read as human categoristion of animals (giving them name). Although other readings do seem plausible.
You might be interested in John Harsanyi's work on the topic.
He argues that the conclusion achieved in the original position is (average) utilitarianism.
I agree that behind the veil one shouldn't know the time (and thus can't care differently about current vs future humans). This actually causes further problems for Rawls's conception when you project back in time: what if the worst life that will ever be lived has already been lived? Then the maximin principle gives no guidance at all, and under uncertainty it recommends putting all effort into preventing a new minimum from being set.
The concept of a Kolmogorov sufficient statistic might be the missing piece (cf. Elements of Information Theory, section 14.12).
We want the shortest program that describes a sequence of bits. A particularly interpretable type of such program is "the sequence is in the set X generated by program p, and among those it is the n'th element".
Example "the sequence is in the set of sequences of length 1000 with 104 ones, generated by (insert program here), of which it is the n~10^144'th element".
We therefore define f(String, n) to be the size of the smallest set containing String which is generated by a program of length n (or, alternatively, for which a program of length n can test membership of the set).
If you plot the logarithm of f(String, n), you will often see bands where the line has slope -1, corresponding to using each extra bit to hardcode one more bit of the index. In that case the longer programs aren't describing any more structure than the program at the point where the slope first becomes -1. We call such a program a Kolmogorov minimal sufficient statistic.
The relevance is that for a biased coin with independent flips, the Kolmogorov minimal sufficient statistic is the bias, and it is often more natural to think in terms of these statistics.
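As a quick sanity check on the numbers in the coin example (pure arithmetic; no actual Kolmogorov complexities are computed):

```python
import math

# The set of length-1000 bit strings with exactly 104 ones, as used in the
# two-part description "(model: n=1000, k=104) + (index into the set)".
set_size = math.comb(1000, 104)

print(f"log10(set size) = {math.log10(set_size):.1f}")  # ~143.6: n is a ~144-digit number, i.e. ~10^144
print(f"index cost = {math.log2(set_size):.1f} bits")   # ~477 bits, vs 1000 for the raw string
# The model (the bias 104/1000) carries all the structure; the remaining
# ~477 bits of index are incompressible noise.
```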
Taxing something where supply or demand is fixed is extremely efficient, and the extent to which purchases stay the same is exactly the extent to which supply or demand is inflexible. The economic inefficiency of a tax comes from the changes in behavior it induces. The difference between a tariff and a sales tax is that a tariff induces you to buy domestic products.
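A minimal sketch with a hypothetical linear demand curve (my numbers), illustrating that the efficiency loss scales with the induced drop in quantity:

```python
def deadweight_loss(tax, demand_slope, supply_price=20.0, intercept=100.0):
    # Linear demand Q = intercept + slope * P, perfectly elastic supply at
    # supply_price; the loss triangle is ~ 0.5 * tax * (drop in quantity).
    q_before = intercept + demand_slope * supply_price
    q_after = intercept + demand_slope * (supply_price + tax)
    return 0.5 * tax * (q_before - q_after)

print(deadweight_loss(tax=10, demand_slope=-1))  # 50.0: purchases fall, efficiency is lost
print(deadweight_loss(tax=10, demand_slope=0))   # 0.0: perfectly inelastic demand, no loss
```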