Okay, maybe not me, but someone I know, and that's what the title would be if he wrote it. Newcomb's problem and Kavka's toxin puzzle are more than just curiosities. Like a lot of thought experiments, they approximately happen. They make the issues with causal decision theory relevant, not only to designing artificial intelligence, but to our everyday lives as well.
Yet somehow it isn't mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments; they are a valuable record). Scenarios involving brain scanning, decision simulation, etc., can establish their validity and future relevance, but not that they are already commonplace. I want to provide an already-happened, real-life account that captures the Newcomb essence.
So let's say my friend is named Joe. In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married. Kate is somewhat traditional, and won't marry him unless he proposes, not only in the sense of explicitly asking her, but also in the sense of expressing certainty that he will never try to leave her if they do marry.
At this point, many of you could easily make up a simple conclusion to this post. Instead, I want to convey the actual account, in which Joe's beliefs are roughly schematized as follows:
If he proposes sincerely, she is effectively sure to believe it.
If he proposes insincerely, she is 50% likely to believe it.
If she believes his proposal, she is 80% likely to say yes.
If she doesn't believe his proposal, she will surely say no, but will not be significantly upset (not on a scale comparable to the stakes of marriage).
If they marry, Joe is 90% likely to be happy and 10% likely to be unhappy.
He roughly values the happy and unhappy outcomes oppositely:
being happily married to Kate: 125 megautilons
being unhappily married to Kate: -125 megautilons.
So what should he do? What should this real person have actually done?1 Well, as in Newcomb, these beliefs and utilities present an interesting and quantifiable problem…
EU(marriage) = 90%·125 - 10%·125 = 100,
EU(sincere proposal) = 80%·100 = 80, and
EU(insincere proposal) = 50%·80%·100 = 40.
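For concreteness, here is a minimal Python sketch of the same arithmetic. The variable names, and the zero utility assigned to the "nothing significant" outcome, are my own framing for illustration, not part of Joe's account:

```python
# Expected-utility sketch of Joe's decision, using the probabilities and
# (mega)utilon values stated above.

P_BELIEVE_IF_SINCERE = 1.0    # effectively sure to believe a sincere proposal
P_BELIEVE_IF_INSINCERE = 0.5  # 50% likely to believe an insincere one
P_YES_IF_BELIEVED = 0.8       # 80% likely to say yes if she believes it
P_HAPPY_IF_MARRIED = 0.9      # 90% happy, 10% unhappy

U_HAPPY = 125.0      # megautilons
U_UNHAPPY = -125.0   # megautilons
U_NO_MARRIAGE = 0.0  # "nothing significant" (assumed zero for illustration)

def eu_marriage() -> float:
    return P_HAPPY_IF_MARRIED * U_HAPPY + (1 - P_HAPPY_IF_MARRIED) * U_UNHAPPY

def eu_proposal(p_believe: float) -> float:
    p_marry = p_believe * P_YES_IF_BELIEVED
    return p_marry * eu_marriage() + (1 - p_marry) * U_NO_MARRIAGE

print(eu_marriage())                        # 100.0
print(eu_proposal(P_BELIEVE_IF_SINCERE))    # 80.0
print(eu_proposal(P_BELIEVE_IF_INSINCERE))  # 40.0
```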
No surprise here: sincere proposal comes out on top. That's the important thing, not the particular numbers. In fact, in real life Joe's utility function assigned negative moral value to insincerity, broadening the gap. But no matter; this did not make him sincere. The problem is that Joe was a causal decision theorist: he chose at each moment based on the causal consequences from that moment forward, so he believed that if circumstances changed to render him unhappily married, he would necessarily try to leave her. Because of this possibility, he could not propose sincerely in the sense she desired.
This feels strikingly similar to Newcomb's problem, and in fact it is: if we change some probabilities to 0 and 1, it's essentially isomorphic:
If he proposes sincerely, she will say yes.
If he proposes insincerely, she will say no and break up with him forever.
If they marry, he is 90% likely to be very happy, and 10% likely to be very unhappy.
The analogues of the two boxes are marriage (the opaque box) and the option of leaving (the transparent one). Given marriage, the option of leaving has a small marginal expected utility of 10%·125 = 12.5 megautilons. So "clearly" he should "just take both"? The problem is that he can't just take both. The proposed payoff matrix would be:
Joe \ Kate           | Say yes                    | Say no
Propose sincerely    | Marriage                   | Nothing significant
Propose insincerely  | Marriage + option to leave | Nothing significant
The "principal of (weak2) dominance" would say the second row is the better "option", and that therefore "clearly" Joe should propose insincerely. But in Newcomb some of the outcomes are declared logically impossible. If he tries to take both boxes, there will be nothing in the marriage box. The analogue in real life is simply that the four outcomes need not be equally likely.
So there you have it. Newcomb happens. Newcomb happened. You might be wondering, what did Joe actually do?
In real life, Joe became a timeless decision theorist and, noting his 90% certainty, self-modified by adopting a moral pre-commitment never to leave Kate should they marry, proposed to her sincerely, and the rest is history. No joke! That's if Joe's account is accurate, mind you.
Footnotes:
1 This is not a social commentary, but an illustration that probabilistic Newcomblike scenarios can and do exist. The point also does not hinge on whether you believe Joe's account, but I have provided it as-is nonetheless. I would hope that there are other similar accounts written down somewhere, but I haven't seen them, so I've provided his.
2 Newcomb involves "strong" dominance, with the second row always strictly better, but that's not essential to this post. In any case, I could exhibit strong dominance by removing "if they do marry" from Kate's proposal requirement, but I decided against it, favoring instead the actual account of events.