somewhat confident of Omega's prediction
51% confidence would suffice.
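To put rough numbers on that (assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, which aren't restated here):

```python
def newcomb_ev(p):
    """Expected value of each strategy in Newcomb's problem, where p is
    your confidence that Omega has predicted your choice correctly.
    Assumed payoffs: $1,000,000 in the opaque box, $1,000 in the clear box."""
    one_box = p * 1_000_000                 # opaque box is full iff one-boxing was predicted
    two_box = (1 - p) * 1_000_000 + 1_000   # opaque box is full iff Omega guessed wrong
    return one_box, two_box

# Break-even is p = 0.5005, so 51% confidence already favours one-boxing:
# roughly $510,000 expected versus roughly $491,000.
```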
At some point you'll predictably approach death
I'm pretty sure that decision theories are not designed on that basis. We don't want an AI to start making different decisions based on the probability of an upcoming decommission. We don't want it to become nihilistic and stop making decisions because it predicted the heat death of the universe and decided that all paths have zero value. If death is actually tied to the decision in some way, then sure, take that into account, but otherwise, I don't think a decision theory should have "death is inevitably coming for us all" as a factor.
How do you resolve that tension?
Well, as previously stated, my view is that the scenario as stated (single-shot with no precommitment) is not the most helpful hypothetical for designing a decision theory. An iterated version would actually be more relevant, since we want to design an AI that can make more than one decision. And in the iterated version, the tension is largely resolved, because there is a clear motivation to stick with the decision: we still hope for the next coin to come down heads.
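A sketch of that intuition, using the stakes from the scenario ($10,000 on heads, $100 on tails) and assuming Omega's prediction reduces to whether the agent is the paying kind:

```python
import random

def iterated_mugging(pays_on_tails, rounds=10_000, seed=0):
    """Cumulative payoff over repeated counterfactual muggings.
    Heads: Omega pays $10,000, but only to an agent it predicts would
    pay on tails. Tails: a paying agent hands over $100."""
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        if rng.random() < 0.5:   # heads
            total += 10_000 if pays_on_tails else 0
        else:                    # tails
            total -= 100 if pays_on_tails else 0
    return total

# The refuser nets exactly $0; the committed payer averages about
# 0.5 * 10_000 - 0.5 * 100 = $4,950 per round.
```

In the iterated setting the motivation question answers itself: paying this time is what keeps the $10,000 branches open next time.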
what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?
If 3^^^^3 lives are at stake, and we assume that we are running on faulty or even hostile hardware, then it becomes all the more important not to rely on potentially-corrupted "seems like this will work".
Well, humans can build calculators. That they can't be the calculators that they create doesn't demand an unusual explanation.
Yes, but don't these articles emphasise how evolution doesn't do miracles, doesn't get everything right at once, and takes a very long time to do anything awesome? The fact that humans can do so much more than the normal evolutionary processes can marks us as a rather significant anomaly.
Your decision is a result of your decision theory
I get that that could work for a computer, because a computer can be bound by an overall decision theory without attempting to think about whether that decision theory still makes sense in the current situation.
I don't mind predictors in, e.g., Newcomb's problem. Effectively, there is a backward causal arrow, because whatever you choose causes the predictor to have already acted differently. Unusual, but reasonable.
However, in this case, yes, your choice affects the predictor's earlier decision - but since th...
Humans can do things that evolutions probably can't do period over the expected lifetime of the universe.
This does raise the question: how, then, did an evolutionary process produce something so much more efficient than itself?
(And if we are products of evolutionary processes, then all our actions are basically facets of evolution, so isn't that sentence self-contradictory?)
there is no distinction between making the decision ahead of time or not
Except that even if you make the decision, what would motivate you to stick to it once it can no longer pay off?
Your only motivation to pay is the hope of obtaining the $10000. If that hope does not exist, what reason would you have to abide by the decision that you make now?
I didn't mean to suggest that the existence of suffering is evidence that there is a God. What I meant was, the known fact of "shared threat -> people come together" makes the reality of suffering less powerful evidence against the existence of a God.
we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment
Well, if we're designing an AI now, then we have the capability to make a binding precommitment, simply by writing code. And we are still in a position where we can hope for the coin to come down heads. So yes, in that privileged position, we should bind the AI to pay up.
However, to the question as stated, "is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?"...
We are told no such thing. We are told it's a fair coin and that can only mean that if you divide up worlds by their probability density, you win in half of them. This is defined.
No, take another look:
in the overwhelming measure of the MWI worlds it gives the same outcome. You don't care about a fraction that sees a different result, in all reality the result is that Omega won't even consider giving you $10000, it only asks for your $100.
I wouldn't trust myself to accurately predict the odds of another repetition, so I don't think it would unravel for me. But this comes back to my earlier point that you really need some external motivation, some precommitment, because "I want the 10K" loses its power as soon as the coin comes down tails.
if your decision theory pays up, then if he flips tails, you pay $100 for no possible benefit.
But in the single-shot scenario, after it comes down tails, what motivation does an ideal game theorist have to stick to the decision theory?
Like Parfit's hitchhiker, although in advance you might agree that it's a worthwhile deal, when it comes to the point of actually paying up, your motivation is gone, unless you have bound yourself in some other way.
It should also be possible to milk the scenario for publicity: "Our opponents sold out to the evil plutocrat and passed horrible legislation so he would bankroll them!"
I wish I were more confident that that strategy would actually work...
is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?
No, it's a clear loss.
The only winning scenario is "the coin comes down heads and you have an effective commitment to have paid if it had come down tails."
By making a binding precommitment, you effectively gamble that the coin will come down heads. If it comes down tails instead, clearly you have lost the gamble. Giving the $100 when you didn't even make the precommitment would just be pointlessly giving away money.
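The arithmetic of that gamble, taken ex ante (before the flip), with the stakes given in the problem:

```python
# Ex-ante expected value of binding yourself to pay on tails,
# with a fair coin, +$10,000 on heads and -$100 on tails:
ev_committed = 0.5 * 10_000 + 0.5 * (-100)   # 4950.0
ev_uncommitted = 0.0  # Omega predicts you won't pay, so the $10,000 is never on offer

# Once tails has actually landed, though, paying is a straight -$100:
# the positive term has already evaporated.
```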
The beggars-and-gods formulation is the same problem.
I don't think so; I think the element of repetition substantially alters it - but in a good way, one that makes it more useful in designing a real-world agent. Because in reality, we want to design decision theories that will solve problems multiple times.
At the point of meeting a beggar, although my prospects of obtaining a gold coin this time around are gone, nonetheless my overall commitment is not meaningless. I can still think, "I want to be the kind of person who gives pennies to beggars, b...
Sorry, but I'm not in the habit of taking one for the quantum superteam. And I don't think that it really helps to solve the problem; it just means that you don't necessarily care so much about winning any more. Not exactly the point.
Plus we are explicitly told that the coin is deterministic and comes down tails in the majority of worlds.
I think that what really does my head in about this problem is, although I may right now be motivated to make a commitment, because of the hope of winning the 10K, nonetheless my commitment cannot rely on that motivation, because when it comes to the crunch, that possibility has evaporated and the associated motivation is gone. I can only make an effective commitment if I have something more persistent - like the suggested $1000 contract with a third party. Without that, I cannot trust my future self to follow through, because the reasons that I would curr...
This is an attempt to examine the consequences of that.
Yes, but if the artificial scenario doesn't reflect anything in the real world, then even if we get the right answer, what have we gained? It's like being vaccinated against a fictitious disease: even if you successfully develop the antibodies, what good do they do?
It seems to me that the "beggars and gods" variant mentioned earlier in the comments, where the opportunity repeats itself each day, is actually a more useful study. Sure, it's much more intuitive; it doesn't tie our brains up in kno...
If this was the only chance you ever get to determine your lifespan - then choose B.
In the real world, it would probably be a better idea to discard both options and use your natural lifespan to search for alternative paths to immortality.