
Newcomb's problem happened to me

37 points | Post author: Academian | 26 March 2010 06:31PM

Okay, maybe not me, but someone I know, and that's what the title would be if he wrote it.  Newcomb's problem and Kavka's toxin puzzle are more than just curiosities relevant to artificial intelligence theory.  Like a lot of thought experiments, they approximately happen.  They illustrate robust issues with causal decision theory that can deeply affect our everyday lives.

Yet somehow it isn't mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments; they are a valuable record).  Scenarios involving brain scanning, decision simulation, etc., can establish their validity and future relevance, but not that they are already commonplace.  For the record, I want to provide an already-happened, real-life account that captures the Newcomb essence and explicitly describes how.

So let's say my friend is named Joe.  In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married.  Kate is somewhat traditional, and won't marry him unless he proposes, not only in the sense of explicitly asking her, but also of expressing certainty that he will never try to leave her if they do marry.

Now, I don't want to make up the ending here.  I want to convey the actual account, in which Joe's beliefs are roughly schematized as follows: 

  1. if he proposes sincerely, she is effectively sure to believe it.
  2. if he proposes insincerely, she will 50% likely believe it.
  3. if she believes his proposal, she will 80% likely say yes.
  4. if she doesn't believe his proposal, she will surely say no, but will not be significantly upset in comparison to the significance of marriage.
  5. if they marry, Joe will 90% likely be happy, and will 10% likely be unhappy.

He roughly values the happy and unhappy outcomes oppositely:

  1. being happily married to Kate:  125 megautilons.
  2. being unhappily married to Kate:  -125 megautilons.

So what should he do?  What should this real person have actually done?1  Well, as in Newcomb, these beliefs and utilities present an interesting and quantifiable problem…

  • ExpectedValue(marriage) = 90%·125 - 10%·125 = 100,
  • ExpectedValue(sincere proposal) = 80%·100 = 80,
  • ExpectedValue(insincere proposal) = 50%·80%·100 = 40.

No surprise here, sincere proposal comes out on top.  That's the important thing, not the particular numbers.  In fact, in real life Joe's utility function assigned negative moral value to insincerity, broadening the gap.  But no matter; this did not make him sincere.  The problem is that Joe was a classical causal decision theorist, and he believed that if circumstances changed to render him unhappily married, he would necessarily try to leave her.  Because of this possibility, he could not propose sincerely in the sense she desired.  He could even appease himself by speculating about causes2 for how Kate can detect his uncertainty and constrain his options, but that still wouldn't make him sincere.

Seeing expected value computations with adjustable probabilities can really help you feel the problem's robustness.  It's not about to disappear.  Certainties can be replaced with 95%'s and it all still works the same.  It's a whole parametrized family of problems, not just one.
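To make that concrete, here is a minimal sketch (mine, not part of the original account; the function and parameter names are just illustrative) whose defaults reproduce the numbers above, and which you can re-run with softened certainties:

```python
# A minimal sketch (not from the original post) of the expected-value
# computation, parametrized so the certainties can be softened.

def expected_values(p_happy=0.9,             # P(happy | married)
                    p_believe_sincere=1.0,   # P(Kate believes | sincere proposal)
                    p_believe_insincere=0.5, # P(Kate believes | insincere proposal)
                    p_yes=0.8,               # P(she says yes | she believes)
                    u_happy=125.0,           # utility of a happy marriage
                    u_unhappy=-125.0):       # utility of an unhappy marriage
    ev_marriage = p_happy * u_happy + (1 - p_happy) * u_unhappy
    ev_sincere = p_believe_sincere * p_yes * ev_marriage
    ev_insincere = p_believe_insincere * p_yes * ev_marriage
    return ev_marriage, ev_sincere, ev_insincere

print(expected_values())                        # (100.0, 80.0, 40.0), as in the post
print(expected_values(p_believe_sincere=0.95))  # sincere still beats insincere (76 vs. 40)
```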

Joe's scenario feels strikingly similar to Newcomb's problem, and in fact it is:  if we change some probabilities to 0 and 1, it's essentially isomorphic: 

  1. If he proposes sincerely, she will say yes.
  2. If he proposes insincerely, she will say no and break up with him forever.
  3. If they marry, he is 90% likely to be happy, and 10% likely to be unhappy.

The analogues of the two boxes are marriage (the opaque box) and the option of leaving (the transparent box).  Given marriage, the option of leaving has a small marginal utility of 10%·125 = 12.5 utilons.  So "clearly" he should "just take both"?  The problem is that he can't just take both.  The proposed payout matrix would be:

Joe \ Kate           | Say yes                     | Say no
---------------------+-----------------------------+---------------------
Propose sincerely    | Marriage                    | Nothing significant
Propose insincerely  | Marriage + option to leave  | Nothing significant

The "principal of (weak3) dominance" would say the second row is the better "option", and that therefore "clearly" Joe should propose insincerely.  But in Newcomb some of the outcomes are declared logically impossible.  If he tries to take both boxes, there will be nothing in the marriage box.  The analogue in real life is simply that the four outcomes need not be equally likely

So there you have it.  Newcomb happens.  Newcomb happened.  You might be wondering, what did the real Joe do?

In real life, Joe actually recognized the similarity to Newcomb's problem, realizing for the first time that he must become an updateless decision agent.  Noting his 90% certainty, he self-modified by adopting a moral pre-commitment to never leaving Kate should they marry, proposed to her sincerely, and the rest is history.  No joke!  That's if Joe's account is accurate, mind you.

Footnotes:

1 This is not a social commentary, but an illustration that probabilistic Newcomblike scenarios can and do exist.  Although that point does not hinge on whether you believe Joe's account, I have provided the account as-is nonetheless. 

2 If you care about causal reasoning, the other half of what's supposed to make Newcomb confusing, then Joe's problem is more like Kavka's (so this post accidentally shows how Kavka and Newcomb are similar).  But the distinction is instrumentally irrelevant:  the point is that he can benefit from decision mechanisms that are evidential and time-invariant, and you don't need "unreasonable certainties" or "paradoxes of causality" for this to come up. 

3 Newcomb involves "strong" dominance, with the second row always strictly better, but that's not essential to this post.  In any case, I could exhibit strong dominance by removing "if they do get married" from Kate's proposal requirement, but I decided against it, favoring instead the actual account of events.

Comments (97)

Comment author: JGWeissman 26 March 2010 06:48:53PM *  21 points [-]

I predict, with probability ~95%, that if Joe becomes unhappy in the marriage, he and Kate will get divorced, even though Joe and Kate, who is not as powerful a predictor as Omega, currently believe otherwise. Joe is, after all, running this "timeless decision theory" on hostile hardware.

(But I hope that they remain happy, and this prediction remains hypothetical.)

Comment author: Unknowns 27 March 2010 06:46:22AM 13 points [-]

Your prediction is overconfident. Less than 95% of unhappy marriages end in divorce.

Comment author: JGWeissman 27 March 2010 05:08:10PM 1 point [-]

Perhaps, I didn't look up any statistics. The "~" in "~95%" was supposed to indicate my meta uncertainty that this is the precise strength of belief an ideal rationalist should have given evidence observed by me. I am confident that I am closer to the ideal probability than 0% as believed by Kate.

Apparently 45% to 50% of first marriages in America end in divorce, but this does not account for whether the marriages were unhappy. Do you have a source for your assertion? I have not found anything with a quick Google search.

Comment author: Unknowns 30 March 2010 11:07:20AM 1 point [-]
Comment author: mutterc 12 May 2011 06:24:50PM -2 points [-]

100% of marriages end in divorce or death.

Comment author: wedrifid 12 May 2011 06:34:15PM 8 points [-]

100% of marriages end in divorce or death.

100% of marriages that have ended ended in divorce or death.

Comment author: mutterc 12 May 2011 11:26:15PM 3 points [-]

Good point; if we conquer death then there may be some marriages that do not end. It'd be interesting to see if people move towards near-universal divorce, sci-fi-novel-style limited-term marriages, or find ways to develop infinite-term compatibility. Or stop pairing up (inconceivable to present-day humans, but such is the nature of a Singularity).

Comment author: wedrifid 13 May 2011 05:07:04AM 0 points [-]

That definitely would be interesting. It would perhaps be an indicator of preferences, as opposed to the current indicator of capability. If you have tools that can alter your mind you can cheat.

Comment author: XFrequentist 12 May 2011 06:27:50PM 2 points [-]

Historically.

Comment author: AdeleneDawner 12 May 2011 07:02:53PM 1 point [-]

Which of those does it count as when one of the parties just leaves and becomes unfindable?

Comment author: mutterc 12 May 2011 11:20:46PM 2 points [-]

My understanding is that you're still married until one of you goes and gets a divorce, but I can't admit to having researched such a thing.

Comment author: AdeleneDawner 12 May 2011 11:43:46PM 1 point [-]

I suspect that getting a divorce requires some minimal amount of input from both parties - if I remember correctly I had to sign something saying that I'd received some paperwork, when mine happened, in order for it to go through.

I suspect that in the case I posited, the non-disappearing person would be able to get the disappearing person declared dead after a certain period of time, which doesn't strictly require that the disappearing person be dead, and then remarry. If that's accurate, that'd be a third option.

Comment author: wedrifid 13 May 2011 10:39:14AM 2 points [-]

I suspect that in the case I posited, the non-disappearing person would be able to get the disappearing person declared dead after a certain period of time, which doesn't strictly require that the disappearing person be dead, and then remarry. If that's accurate, that'd be a third option.

100% of marriages that have ended have ended in divorce or legal death?

Where does 'annulment' fit into things? Is that when it is decided to just pretend the marriage never existed in the first place.

Comment author: thomblake 13 May 2011 02:18:25PM 1 point [-]

Where does 'annulment' fit into things? Is that when it is decided to just pretend the marriage never existed in the first place.

Yes. In the Catholic Church, a "declaration of nullity" was nearly a loophole to not being able to get divorced. Basically, there were certain preconditions that were assumed to hold when getting married, and if it turns out any of those preconditions did not actually obtain, then the marriage never actually happened. For example, it is assumed that the couple wants to have children; if it turns out that one party never intended to have children, that can be grounds for a declaration of nullity.

Several legal jurisdictions have adopted this idea, but it makes little sense when one can just get divorced and there are not strict preconditions for marriage.

Wikipedia: Annulment

Comment author: Sniffnoy 13 May 2011 12:53:54AM 0 points [-]

As long as we're picking nits, in some places marriages can also be annulled (though of course they will insist that this is retroactive, and for some purposes it is).

Comment author: mutterc 13 May 2011 04:48:52PM 0 points [-]

That's what I understand; an annulment means the marriage never happened. (E.g. if it's been "consummated" then annulment is not an option. I wonder how that interacts with modern pre-consummated marriages?)

Comment author: Academian 26 March 2010 07:29:08PM *  3 points [-]

Yeah, it's a big open problem if some humans can precommit or not, making the issue of its value all the more relevant.

Comment author: JGWeissman 26 March 2010 07:35:14PM 7 points [-]

it's a big open problem if some humans can precommit or not

No, it's not. I don't see any reason to believe that humans can reliably precommit, without setting up outside constraints, especially over time spans of decades.

What you have described is not Newcomb's problem. Take what taw said, and realize that actual humans are in fact in this category:

If precommitment is not observable and/or changeable, then it can be rearranged, and we have:

  • Kate: accept or not - not having any clue what Joe did
  • Joe: breakup or not
Comment author: Academian 26 March 2010 08:01:11PM *  2 points [-]

added:

Certainties can be replaced with 95%'s and it all still works the same. It's a whole parametrized family of problems, not just one.

Try playing with the parameters. Maybe Kate only wants 90% certainty from Joe, and Joe is only 80% sure he'll be happy. Then he doesn't need a 100% precommitment, but only some kind of partial deterrent, and if Kate requires that he not resort to external self-restrictions, he can certainly self-modify partial pre-commitments into himself in the form of emotions.

Self-modification is robust, pre-commitment is robust, its detection is robust... these phenomena really aren't going anywhere.

Comment author: JGWeissman 26 March 2010 10:14:24PM 2 points [-]

Replacing the certainties with 95% still does not reflect reality. I don't think Kate can assign probability to whether she and Joe will get divorced any better than by taking the percentage of marriages, possibly in some narrow reference class they are part of, that end in divorce. Even if Joe can signal that he belongs to some favorable reference class, it still won't work.

Comment author: tut 27 March 2010 06:59:52AM 4 points [-]

If they are rational enough to talk about divorce in order to avoid it, then he can make an economic commitment by writing a prenup that guarantees that any divorce becomes unfavorable. Of course, only making it relatively unfavorable will give her an incentive to leave him, so it is better if a big portion of their property is given away or burned in case of a divorce.

Comment author: JGWeissman 27 March 2010 04:55:46PM 0 points [-]

Yes, that is a strategy they can take. However, that sort of strategy is unnecessary in Newcomb's problem, where you can just one-box and find the money there without having made any sort of precommitment.

Comment author: tut 28 March 2010 01:25:35PM 2 points [-]

I think that the translation to Newcomb's was that committing == one boxing and hedging == two boxing.

Comment author: JGWeissman 28 March 2010 04:26:36PM 1 point [-]

This mapping does not work. Causal Decision Theory would commit (if available) in the marriage proposal problem, but two box in Newcomb's problem. So the mapping does not preserve the relationship between the mapped elements.

This should be a sanity check for any scenario proposed to be equivalent to Newcomb's problem. EDT/TDT/UDT should all do the equivalent of one-boxing, and CDT should do the equivalent of two-boxing.

Comment author: Nick_Tarleton 30 March 2010 12:32:04AM *  2 points [-]

CDT on Newcomb's problem would, if possible, precommit to one-boxing as long as Omega's prediction is based on observing the CDT agent after its commitment.

CDT in the marriage case would choose to leave once unhappy, absent specific precommitment.

So that exact mapping doesn't work, but the problem does seem Newcomblike to me (like the transparent-boxes version, actually; which, I now realize, is like Kavka's toxin puzzle without the vagueness of "intent".) (ETA: assuming that Kate can reliably predict Joe, which I now see was the point under dispute to begin with.)

Comment author: JGWeissman 29 March 2010 09:24:06PM -1 points [-]

Why is the parent comment being voted down, and its parent being voted up, when it correctly refutes the parent?

Why is the article itself being voted up, when it has been refuted? Are people so impressed by the idea of a real life Newcomb like problem that they don't notice, even when it is pointed out, that the described story is not in fact a Newcomb like problem?

Comment author: bentarm 29 March 2010 03:29:51PM *  7 points [-]

I predict, with probability ~95%, that if Joe becomes unhappy in the marriage, he and Kate will get divorced, even though Joe and Kate

I predict with probability ~95% that if statisticians had arbitrarily decided many years ago to use 97% instead of 95% as their standard of proof, then all appearances of 95 and 97 in this comment would be reversed.

Comment author: JGWeissman 29 March 2010 04:40:00PM 4 points [-]

How does it change your prediction to learn that I was not considering statisticians' arbitrary standard of proof, but I was thinking about numbers in base ten, and I had considered saying ~90% instead?

Comment author: shokwave 12 May 2011 06:52:28PM 1 point [-]

Not much for me. I think it about six times more likely that you used base ten numbers to "get to" 95% than it is you came to 95% by coincidence.

Comment author: reaver121 26 March 2010 07:02:44PM *  3 points [-]

That's the reason why I never get why people are against marriage contracts. Even ignoring the inherent uncertainty of love & marriage, if I walk under a bus tomorrow and lose for example all empathy due to brain damage, my current self would wish you to divorce future psychopath-me as quickly as possible.

As for the OP, good article. If anyone ever asks why I spend my time theorizing away over 'impossible' things like AI or decision theory I can use this as an example.

Comment author: PhilGoetz 26 March 2010 08:39:21PM 3 points [-]

Did you mean to say you don't understand why people are in favor of marriage contracts? I don't see how the marriage contract helps in the bus example.

Comment author: reaver121 26 March 2010 10:10:29PM 3 points [-]

Sorry, I used the wrong terminology. I meant a prenuptial agreement. The bus example was to show that even if you precommit there is always the possibility that you will change your mind (i.e. in this case by losing empathy). I used the extreme method of brain damage because it's completely out of your control. You cannot precommit to not being run over by a bus.

Comment author: Mallah 26 March 2010 07:49:04PM 3 points [-]

It's not a Newcomb problem. It's a problem of how much his promises mean.

Either he created a large enough cost to leaving if he is unhappy, in that he would have to break his promise, to justify his belief that he won't leave; or, he did not. If he did, he doesn't have the option to "take both" and get the utility from both because that would incur the cost. (Breaking his promise would have negative utility to him in and of itself.) It sounds like that's what ended up happening. If he did not, he doesn't have the option to propose sincerely, since he knows it's not true that he will surely not leave.

Comment author: Academian 26 March 2010 08:06:59PM 1 point [-]

Creating internal deterrents is a kind of self modification, and you're right that it's a way of systematically removing or altering one's options.

Comment author: wedrifid 27 March 2010 05:46:03PM 3 points [-]

In real life, Joe actually recognized the similarity to Newcomb's problem, realizing for the first time that he must become timeless decision agent, and noting his 90% certainty, he self-modified by adopting a moral pre-commitment to never leaving Kate should they marry, proposed to her sincerely, and the rest is history.

It would be a (probabilistic approximation of a) Newcomb problem when considered without the ability to precommit or otherwise sabotage the future payoff for one of your future options. Having that option available makes the problem one that would be solved correctly by the same causal decision theorist that would two-box.

If you hadn't mentioned the whole moral precommitment possibility (and implied it wasn't available) then I would agree that Joe faced a Newcomblike situation. As it stands it is an interesting game theoretic situation involving an agent who can predict the decision you are in the process of making.

Fortunately humans come with a moral system capable of full sincerity at one moment and then inevitable update in the direction of self interest as necessary. Rather like 'compartmentalization'.

Comment author: Academian 09 April 2010 01:26:15AM 1 point [-]

It would be a (probabilistic approximation of a) Newcomb problem when considered without the ability to precommit or otherwise sabotage the future payoff

Yes, it was more Newcomblike before Joe realized his ability to pre-commit (or "hypothetically self sabotage" as you might call it), and less Newcomblike afterwards.

Comment author: RichardChappell 29 March 2010 05:03:05PM 3 points [-]

This seems better described as a variant of the traditional paradox of hedonism. That is, some goals (e.g. long term happiness) are best achieved by agents who do not explicitly aim only at this goal, and who can instead be trusted to keep to their commitments even if it turns out that they'd benefit from defecting.

Comment author: ata 30 March 2010 04:31:25AM 2 points [-]

That doesn't really sound like a paradox, just more evidence that people are very suboptimal optimizers. If the goal is long-term happiness, and some actions are more conducive to that than the actions most people come up with when aiming for long-term happiness, then that only indicates we're bad at reasoning about long-term goals.

Comment author: RichardChappell 30 March 2010 03:03:57PM *  0 points [-]

Hmm, I think you've missed something if you can't tell this apart from the general phenomenon of being "very suboptimal optimizers". The problem isn't that bad consequences result from our seeking pleasure ineptly. It's instead that bad consequences result from our seeking pleasure (even if all our means-end calculations are perfectly accurate).

I agree that it's a rather loose use of the term 'paradox', but this is the standard term for the phenomenon, dating back more than a century now. For more background, see the Stanford encyclopedia and wikipedia. (Parfit's 'rational irrationality' is also related.)

Comment author: ata 30 March 2010 08:16:11PM *  1 point [-]

It's instead that bad consequences result from our seeking pleasure (even if all our means-end calculations are perfectly accurate).

That sounds like a contradiction. If you're perfect at doing means-end calculations, and the best way to attain pleasure or happiness is something other than seeking it directly, then your calculations will tell you that, and you will do it.

Maybe I'm missing something, but this sounds more like an aesop about the perils of hedonism, and I'm not sure it would apply to perfect decision-makers.

Comment author: RichardChappell 30 March 2010 09:39:49PM *  3 points [-]

It's no contradiction. Perfect means-end calculations merely ensures that you'll choose the best of the options available given that you've made a means-end calculation. But you might have different (and better) options if you never made any such calculation. (For a crude illustration, imagine that God exists and will reward people who never make any attempt at instrumental reasoning.) By the time your calculations tell you that you never should have calculated in the first place, it's too late.

Comment author: HopeFox 14 May 2011 11:31:32PM 0 points [-]

Perfect decision-makers, with perfect information, should always be able to take the optimal outcome in any situation. Likewise, perfect decision-makers with limited information should always be able to choose the outcome with the best expected payoff under strict Bayesian reasoning.

However, when the actor's decision-making process becomes part of the situation under consideration, as happens when Katemega scrutinises Joe's potential for leaving her in the future, then the perfect decision-maker is only able to choose the optimal outcome if he is also capable of perfect self-modification. Without that ability, he's vulnerable to his own choices and preferences changing in the future, which he can't control right now.

I'd also like to draw a distinction between a practical pre-commitment (of the form "leaving this marriage will cause me -X utilons due to financial penalty or cognitive dissonance for breaking my vows"), and an actual self-modification to a mind state where "I promised I would never leave Kate, but I'm going to do it anyway now" is not actually an option. I don't think humans are capable of the latter. An AI might be, I don't know.

Also, what about decisions Joe made in the past (for example, deciding when he was eighteen that there was no way he was ever going to get married, because being single was too much fun)? If you want your present state to influence your future state strongly, you have to accept the influence of your past state on your present state just as strongly, and you can't just say "Oh, but I'm older and wiser now" in one instance but not the other.

Without the ability to self-modify into a truly sincere state wherein he'll never leave Kate no matter what, Joe can't be completely sincere, and (by the assumptions of the problem) Kate will sense this and his chances of his proposal being accepted will diminish. And there's nothing he can do about that.

Comment author: Bongo 15 May 2011 01:49:04AM 1 point [-]

I have to note that an agent using one of the new decision theories sometimes discussed around here, like UDT, wouldn't leave Katemega and wouldn't need self-modification or precommitment to not leave her.

Comment author: pozorvlak 29 March 2010 05:17:40PM 2 points [-]

If you're calling the potential bride in your scenario Kate, you should really have called her suitor Petruchio :-)

Comment author: David_Gerard 29 November 2010 01:44:43AM *  2 points [-]

Uh, someone having a script for your life that they require you to fit does not make them Omega - it just means they are attempting to dominate you and you are going along with it, like in many ordinary relationships. Admittedly I may be biased myself from having been burnt, but this is a plain old relationship problem, not Newcomb's problem. The answer is not a new decision theory, but to get out of the unhealthy and manipulative relationship.

I concur with JGWeissman's prediction. I just don't find it credible that time-binding apes, faced with the seconds of their life ticking away in misery, will just grin and put up with it without massive outside pressure, based on how said apes generally actually behave.

So, it's eight months later. How are Joe and Kate doing?

Comment author: duckduckMOO 14 April 2012 08:59:58PM *  0 points [-]

There are (human) apes that will. I think that you're underestimating how - for lack of a better word - deontologically/morally some people see things. Also (or maybe this is what I mean by deontologically) how much of a self image someone can have tied up in being a person who doesn't break promises, or how - for lack of a better word (that I can think of) - literally people can think of breaking specific commitments.

Well, except the grinning. In any case, if the marriage goes badly Kate is free to leave, so unless it goes so badly that she wants to string him along to torment him, he can always ask her to end it.

also this, "I just don't find it credible that time-binding apes, faced with the seconds of their life ticking away in misery, will just grin and put up with it without massive outside pressure." They do.

Comment author: Kevin 26 March 2010 08:17:12PM 1 point [-]

How about a pre-nup and polyamory boxing?

Comment author: TimFreeman 12 May 2011 06:15:30PM *  1 point [-]

polyamory boxing

What is that? Does it require padded gloves?

Comment author: wedrifid 12 May 2011 06:36:22PM 2 points [-]

polyamory boxing

What is that? Does it require padded gloves?

No, it's when you keep spares in storage till you need them.

Comment author: HopeFox 13 May 2011 02:16:19PM 1 point [-]

It's an interesting situation, and I can see the parallel to Newcomb's Problem. I'm not certain that it's possible for a person to self-modify to the extent that he will never leave his wife, ever, regardless of the very real (if small) doubts he has about the relationship right now. I don't think I could ever simultaneously sustain the thoughts "There's about a 10% chance that my marriage to my wife will make me very unhappy" and "I will never leave her no matter what". I could make the commitment financially - that, even if the marriage turns awful, I will still provide the same financial support to her - but not emotionally. If Joe can modify his own code so that he can do that, that's very good of him, but I don't think many people could do it, not without pre-commitment in the form of a marital contract with large penalties for divorce, or at least a very strong mentality that once the vows are said, there's no going back.

Perhaps the problem would be both more realistic and more mathematically tractable if "sincerity" were rated between 0 and 1, rather than being a simple on/off state? If 1 is "till death do us part" and 0 is "until I get a better offer", then 0.9 could be "I won't leave you no matter how bad your cooking gets, but if you ever try to stab me, I'm out of here". Then Kate's probability of accepting the proposal could be a function of sincerity, which seems a much more reasonable position for her.
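(A toy sketch of this suggestion, purely illustrative: the linear acceptance function and the numbers below are assumptions added for this example, not part of the original comment.)

```python
# Toy sketch of the suggestion above (illustrative only): sincerity s in [0, 1],
# with Kate's acceptance probability assumed, purely for illustration, to be a
# linear function of s.

def p_accept(s, base=0.4, slope=0.5):
    # assumed form: 40% acceptance at s = 0, rising to 90% at s = 1
    return base + slope * s

def expected_value(s, ev_marriage=100.0, exit_value=12.5):
    # a less sincere Joe retains more of the exit option's value
    return p_accept(s) * (ev_marriage + (1 - s) * exit_value)

for s in (0.0, 0.5, 0.9, 1.0):
    print(s, round(expected_value(s), 1))
# prints 45.0, 69.1, 86.1, 90.0: under these made-up numbers, expected value
# rises with sincerity.
```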

Could this be an example where rationality and self-awareness really do work against an actor? If Joe were less self-aware, he could propose with complete sincerity, having not thought through the 10% chance that he'll be unhappy. If he does become unhappy, he'd then feel justified in this totally unexpected change inducing him to leave. The thing impeding Joe's ability to propose with full sincerity is his awareness of the possibility of future unhappiness.

Also, it's worth pointing out that, by the formulation of the original problem, Kate expects Joe to stay with her even if she is causing him -125 megautilons of unhappiness by forcing him to stay. That seems just a touch selfish. This is something they should talk about.

Comment author: dclayh 26 March 2010 08:21:41PM 0 points [-]

One dissimilarity from Newcomb's is that the marginal utility of spouses decreases faster than the marginal utility of money, and moreover many potential spouses are known to exist. (I.e., Joe can just walk away and find someone more reasonable to marry for relatively small utility cost.)

Comment author: taw 26 March 2010 07:09:12PM -1 points [-]

If precommitment is observable and unchangeable, then order of action is:

  • Joe: precommit or not
  • Kate: accept or not - knowing if Joe precommitted or not
  • Joe: breakup (assuming no precommitment)

If precommitment is not observable and/or changeable, then it can be rearranged, and we have:

  • Kate: accept or not - not having any clue what Joe did
  • Joe: breakup or not

Or in the most complex situation, with 3 probabilistic nodes:

  • Joe: precommit or not
  • Nature: Kate figures out what Joe did correctly or not
  • Kate: accept or not
  • Nature: Marriage happy or unhappy
  • Nature: Joe changes mind or not
  • Joe: breakup or not

None of these is remotely Newcombish. You only get Newcomb paradox when you assume causal loop, and try to solve the problem using tools devised for situations without causal loops.

Comment author: PhilGoetz 26 March 2010 08:45:56PM *  4 points [-]

If Joe believes that his precommitment is inviolable, or even that it affects the probability of him breaking up later, then it appears to him that he is confronted with a causal loop. His decision-making program, at that moment, addresses Newcomb's problem, even if it's wrong in believing in the causal loop.

But I think this only proves that flawed reasoners may face Newcomb's problem. (It might even turn out that finding yourself facing Newcomb's problem proves your reasoning is flawed.)

It's still interesting enough to up-vote.

Comment author: wedrifid 27 March 2010 05:50:24PM 7 points [-]

None of these is remotely Newcombish. You only get Newcomb paradox when you assume causal loop, and try to solve the problem using tools devised for situations without causal loops.

It is the Newcomb Problem. It may be tricky and counter-intuitive but it isn't a paradox. More importantly The Newcomb Problem does not rely on a causal loop. Some form of reliable prediction is necessary but that does not imply a causal loop.

Comment author: Academian 26 March 2010 07:30:44PM *  1 point [-]

My pre-sponse to this is in footnote 2:

If you care about "causal reasoning", the other half of what's supposed to make Newcomb confusing, then Joe's problem is more like Kavka's (so this post accidentally shows how Kavka and Newcomb are similar). But the distinction is instrumentally irrelevant: the point is that he can benefit from decision mechanisms that are evidential and time-invariant, and you don't need "unreasonable certainties" or "paradoxes of causality" for this to come up.

Comment author: taw 26 March 2010 07:47:06PM 0 points [-]

There is no need for time-invariance. The most generic model (2 Joe nodes; 1 Kate node; 3 Nature nodes) of vanilla decision theory perfectly explains the situation you're talking about - unless you postulate some causal loops.

Comment author: Academian 26 March 2010 07:51:43PM *  0 points [-]

Joe's problem is more like Kavka's (so this post accidentally shows how Kavka and Newcomb are similar)

Is that not the simplicity you're interested in?

Comment author: taw 26 March 2010 08:04:06PM -1 points [-]

And in Kavka's problem there's no paradox unless we assume causal loops (the billionaire knows now if you're going to decide to drink the toxin or not tomorrow), or leave the problem ambiguous (so can you change your mind or not?).

Comment author: Academian 26 March 2010 08:12:56PM 3 points [-]

You'll notice I didn't once use the word "paradox" ;)

Comment author: Leafy 29 March 2010 04:22:38PM 0 points [-]

If I could attempt to summarise my interpretation of the above:

Joe realises that the best payout comes from proposing sincerely even though he is defined to be insincere (10% probability of surely breaking his promise to never try and leave her if they marry). He seeks a method by which to produce an insincere sincere proposal.

As sincerity appears to be a controllable state of mind he puts himself in the right state, making him appear temporarily sincere and thus aiming for the bigger payout.

Since you have not assigned any moral or mental cost to this, there appears to be no choice required in the matter and this path is clear (which is the one he took).

Could I suggest a possible adjustment? I would either replace the fixed probability of happiness with a varying probability depending on sincerity (ie 90% chance of happiness if sincere, 1% chance if insincere!) or perhaps provide a cost associated with the act of "lying".

This latter "cost of lying" would make this a slightly more real world example as I believe that I have witnessed examples such as the one above where a persons cost of lying has been low or has been high and the two outcomes have been different accordingly.