I predict, with probability ~95%, that if Joe becomes unhappy in the marriage, he and Kate will get divorced, even though Joe and Kate (who is not as powerful a predictor as Omega) currently believe otherwise. Joe is, after all, running this "timeless decision theory" on hostile hardware.
(But I hope that they remain happy, and this prediction remains hypothetical.)
100% of marriages end in divorce or death.
100% of marriages that have ended ended in divorce or death.
It's a big open problem whether some humans can precommit or not.
No, it's not. I don't see any reason to believe that humans can reliably precommit, without setting up outside constraints, especially over time spans of decades.
What you have described is not Newcomb's problem. Take what taw said, and realize that actual humans are in fact in this category:
If precommitment is not observable and/or changeable, then it can be rearranged, and we have:
- Kate: accept or not - not having any clue what Joe did
- Joe: breakup or not
In real life, Joe actually recognized the similarity to Newcomb's problem, realizing for the first time that he must become a timeless decision agent, and noting his 90% certainty, he self-modified by adopting a moral pre-commitment never to leave Kate should they marry, proposed to her sincerely, and the rest is history.
It would be a (probabilistic approximation of a) Newcomb problem when considered without the ability to precommit or otherwise sabotage the future payoff for one of your future options. Having that option available makes the problem o...
It's not a Newcomb problem. It's a problem of how much his promises mean.
Either he created a cost to leaving-if-unhappy (namely, having to break his promise) large enough to justify his belief that he won't leave, or he did not. If he did, he doesn't have the option to "take both" and get the utility from both, because doing so would incur that cost. (Breaking his promise would have negative utility to him in and of itself.) It sounds like that's what ended up happening. If he did not, he doesn't have the option to propose sincerely, since he knows it's not true that he will surely not leave.
This seems better described as a variant of the traditional paradox of hedonism. That is, some goals (e.g. long term happiness) are best achieved by agents who do not explicitly aim only at this goal, and who can instead be trusted to keep to their commitments even if it turns out that they'd benefit from defecting.
It's an interesting situation, and I can see the parallel to Newcomb's Problem. I'm not certain that it's possible for a person to self-modify to the extent that he will never leave his wife, ever, regardless of the very real (if small) doubts he has about the relationship right now. I don't think I could ever simultaneously sustain the thoughts "There's about a 10% chance that my marriage to my wife will make me very unhappy" and "I will never leave her no matter what". I could make the commitment financially - that, even if the marri...
One dissimilarity from Newcomb's is that the marginal utility of spouses decreases faster than the marginal utility of money, and moreover many potential spouses are known to exist. (I.e., Joe can just walk away and find someone more reasonable to marry for relatively small utility cost.)
Uh, someone having a script for your life that they require you to fit does not make them Omega - it just means they are attempting to dominate you and you are going along with it, like in many ordinary relationships. Admittedly I may be biased myself from having been burnt, but this is a plain old relationship problem, not Newcomb's problem. The answer is not a new decision theory, but to get out of the unhealthy and manipulative relationship.
I concur with JGWeissman's prediction. I just don't find it credible that time-binding apes, faced with the second...
If I could attempt to summarise my interpretation of the above:
Joe realises that the best payout comes from proposing sincerely even though he is defined to be insincere (a 10% probability that he will surely break his promise never to try to leave her if they marry). He seeks a method by which to produce an insincere sincere proposal.
As sincerity appears to be a controllable state of mind, he puts himself in the right state, making himself appear temporarily sincere and thus aiming for the bigger payout.
As you have not assigned any moral or mental cost associated with...
If precommitment is observable and unchangeable, then order of action is:
If precommitment is not observable and/or changeable, then it can be rearranged, and we have:
Or in the most complex situation, with 3 probabilistic nodes:
Okay, maybe not me, but someone I know, and that's what the title would be if he wrote it. Newcomb's problem and Kavka's toxin puzzle are more than just curiosities relevant to artificial intelligence theory. Like a lot of thought experiments, they approximately happen. They illustrate robust issues with causal decision theory that can deeply affect our everyday lives.
Yet somehow it isn't mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments; they are a valuable record). Scenarios involving brain scanning, decision simulation, etc., can establish their validity and future relevance, but not that they are already commonplace. For the record, I want to provide an already-happened, real-life account that captures the Newcomb essence and explicitly describes how.
So let's say my friend is named Joe. In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married. Kate is somewhat traditional, and won't marry him unless he proposes, not only in the sense of explicitly asking her, but also expressing certainty that he will never try to leave her if they do marry.
Now, I don't want to make up the ending here. I want to convey the actual account, in which Joe's beliefs are roughly schematized as follows:
He roughly values the happy and unhappy outcomes oppositely:
So what should he do? What should this real person have actually done?1 Well, as in Newcomb, these beliefs and utilities present an interesting and quantifiable problem…
No surprise here: the sincere proposal comes out on top. That's the important thing, not the particular numbers. In fact, in real life Joe's utility function assigned negative moral value to insincerity, broadening the gap. But no matter; this did not make him sincere. The problem is that Joe was a classical causal decision theorist, and he believed that if circumstances changed to render him unhappily married, he would necessarily try to leave her. Because of this possibility, he could not propose sincerely in the sense she desired. He could even appease himself by speculating about causes2 for how Kate can detect his uncertainty and constrain his options, but that still wouldn't make him sincere.
Seeing expected value computations with adjustable probabilities can really help convey the problem's robustness. It's not about to disappear. Certainties can be replaced with 95%'s and it all still works the same. It's a whole parametrized family of problems, not just one.
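To make that concrete, here is a minimal sketch of the computation in Python (the original account involved nothing so formal). The 90% happiness estimate and the ±125 utilon scale are the figures used in this post; the probabilities of Kate accepting a sincere versus an insincere proposal are illustrative assumptions, not numbers from Joe's account.

```python
# A parametrized family of "Joe problems": tweak the probabilities and the
# ranking of the options stays the same over a wide range.

def expected_utility(p_accept, p_happy=0.9,
                     u_happy=125, u_unhappy=-125, u_single=0,
                     leaves_if_unhappy=False, u_leave=0):
    """Expected utility of proposing, given how likely Kate is to accept."""
    # Utility conditional on being married: if Joe keeps the option of
    # leaving, an unhappy marriage bottoms out at u_leave rather than u_unhappy.
    u_if_unhappy = u_leave if leaves_if_unhappy else u_unhappy
    u_married = p_happy * u_happy + (1 - p_happy) * u_if_unhappy
    return p_accept * u_married + (1 - p_accept) * u_single

# Illustrative assumption: Kate is far more likely to accept a proposal she
# reads as sincere than one she reads as uncertain.
sincere = expected_utility(p_accept=0.8, leaves_if_unhappy=False)
insincere = expected_utility(p_accept=0.2, leaves_if_unhappy=True)

print(f"sincere proposal:   {sincere:.1f} utilons")    # 80.0
print(f"insincere proposal: {insincere:.1f} utilons")  # 22.5
```

Swap the 0.8 and 0.2 for anything that keeps a sincere proposal meaningfully more likely to be accepted, or trade the 90% for 95%, and the sincere proposal still wins.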
Joe's scenario feels strikingly similar to Newcomb's problem, and in fact it is: if we change some probabilities to 0 and 1, it's essentially isomorphic:
The analogues of the two boxes are marriage (opaque) and the option of leaving (transparent). Given marriage, the option of leaving has a small marginal utility of 10%·125 = 12.5 utilons. So "clearly" he should "just take both"? The problem is that he can't just take both. The proposed payout matrix would be:
The "principal of (weak3) dominance" would say the second row is the better "option", and that therefore "clearly" Joe should propose insincerely. But in Newcomb some of the outcomes are declared logically impossible. If he tries to take both boxes, there will be nothing in the marriage box. The analogue in real life is simply that the four outcomes need not be equally likely.
So there you have it. Newcomb happens. Newcomb happened. You might be wondering, what did the real Joe do?
In real life, Joe actually recognized the similarity to Newcomb's problem, realizing for the first time that he must become an updateless decision agent, and noting his 90% certainty, he self-modified by adopting a moral pre-commitment never to leave Kate should they marry, proposed to her sincerely, and the rest is history. No joke! That's if Joe's account is accurate, mind you.
Footnotes:
1 This is not a social commentary, but an illustration that probabilistic Newcomblike scenarios can and do exist. The illustration does not hinge on whether you believe Joe's account, either, but I have provided it as-is nonetheless.
2 If you care about causal reasoning, the other half of what's supposed to make Newcomb confusing, then Joe's problem is more like Kavka's (so this post accidentally shows how Kavka and Newcomb are similar). But the distinction is instrumentally irrelevant: the point is that he can benefit from decision mechanisms that are evidential and time-invariant, and you don't need "unreasonable certainties" or "paradoxes of causality" for this to come up.
3 Newcomb involves "strong" dominance, with the second row always strictly better, but that's not essential to this post. In any case, I could exhibit strong dominance by removing "if they do get married" from Kate's proposal requirement, but I decided against it, favoring instead the actual account of events.