JGWeissman comments on Newcomb's problem happened to me - Less Wrong

37 Post author: Academian 26 March 2010 06:31PM


Comment author: JGWeissman 26 March 2010 07:35:14PM 7 points [-]

it's a big open problem if some humans can precommit or not

No, it's not. I don't see any reason to believe that humans can reliably precommit, without setting up outside constraints, especially over time spans of decades.

What you have described is not Newcomb's problem. Take what taw said, and realize that actual humans are in fact in this category:

If precommitment is not observable and/or changeable, then it can be rearranged, and we have:

  • Kate: accept or not - not having any clue what Joe did
  • Joe: breakup or not
Comment author: Academian 26 March 2010 08:01:11PM *  2 points [-]

added:

Certainties can be replaced with 95%'s and it all still works the same. It's a whole parametrized family of problems, not just one.

Try playing with the parameters. Maybe Kate only wants 90% certainty from Joe, and Joe is only 80% sure he'll be happy. Then he doesn't need a 100% precommitment, but only some kind of partial deterrent, and if Kate requires that he not resort to external self-restrictions, he can certainly self-modify partial precommitments into himself in the form of emotions.
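
The arithmetic behind these parameters can be sketched out (my own illustration, not from the comment — the 90%/80% figures are the ones above, and the formula is just the law of total probability):

```python
# How strong a partial precommitment does Joe need?
# P(stay) = P(happy) + P(unhappy) * P(stay | unhappy)

def required_commitment(kate_threshold, p_happy):
    """Smallest P(stay | unhappy) making P(stay) >= kate_threshold."""
    if p_happy >= kate_threshold:
        return 0.0  # Joe's baseline confidence already satisfies Kate
    return (kate_threshold - p_happy) / (1 - p_happy)

# Kate wants 90% confidence; Joe is 80% sure he'll be happy.
# A deterrent that holds him with probability ~0.5 when unhappy suffices,
# well short of a 100% precommitment.
print(required_commitment(0.9, 0.8))
```

So under these assumed numbers, even a fairly weak emotional deterrent closes the gap between Joe's 80% and Kate's 90%.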

Self-modification is robust, pre-commitment is robust, its detection is robust... these phenomena really aren't going anywhere.

Comment author: JGWeissman 26 March 2010 10:14:24PM 2 points [-]

Replacing the certainties with 95% still does not reflect reality. I don't think Kate can assign probability to whether she and Joe will get divorced any better than by taking the percentage of marriages, possibly in some narrow reference class they are part of, that end in divorce. Even if Joe can signal that he belongs to some favorable reference class, it still won't work.

Comment author: tut 27 March 2010 06:59:52AM 4 points [-]

If they are rational enough to talk about divorce in order to avoid it, then he can make an economic commitment by writing a prenup that guarantees that any divorce becomes unfavorable. Of course, making it unfavorable only for him would give her an incentive to leave him, so it is better if a big portion of their property is given away or burned in case of a divorce.

Comment author: JGWeissman 27 March 2010 04:55:46PM 0 points [-]

Yes, that is a strategy they can take. However, that sort of strategy is unnecessary in Newcomb's problem, where you can just one-box and find the money there without having made any sort of precommitment.

Comment author: tut 28 March 2010 01:25:35PM 2 points [-]

I think that the translation to Newcomb's was that committing == one-boxing and hedging == two-boxing.

Comment author: JGWeissman 28 March 2010 04:26:36PM 1 point [-]

This mapping does not work. Causal Decision Theory would commit (if available) in the marriage proposal problem, but two-box in Newcomb's problem. So the mapping does not preserve the relationship between the mapped elements.

This should be a sanity check for any scenario proposed to be equivalent to Newcomb's problem. EDT/TDT/UDT should all do the equivalent of one-boxing, and CDT should do the equivalent of two-boxing.
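
The sanity check can be made concrete with the standard payoff computation (a sketch under assumed prize amounts and an assumed 99%-accurate predictor; none of these numbers appear in the comment): CDT holds the box contents fixed and finds that two-boxing dominates, while EDT conditions on the action and prefers one-boxing.

```python
BIG, SMALL = 1_000_000, 1_000
ACCURACY = 0.99  # assumed predictor accuracy

def edt_value(action):
    """EDT conditions on the action: an accurate predictor likely foresaw it."""
    if action == "one-box":
        return ACCURACY * BIG  # predictor probably filled the opaque box
    return ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

def cdt_two_boxes():
    """CDT treats the box contents as already fixed: taking both boxes
    adds SMALL regardless of what the opaque box holds."""
    return all(opaque + SMALL > opaque for opaque in (0, BIG))

print(edt_value("one-box") > edt_value("two-box"))  # True: EDT one-boxes
print(cdt_two_boxes())                              # True: CDT two-boxes
```

Any proposed real-life equivalent should produce the same split between the two decision theories under its own payoffs.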

Comment author: Nick_Tarleton 30 March 2010 12:32:04AM *  2 points [-]

CDT on Newcomb's problem would, if possible, precommit to one-boxing as long as Omega's prediction is based on observing the CDT agent after its commitment.

CDT in the marriage case would choose to leave once unhappy, absent specific precommitment.

So that exact mapping doesn't work, but the problem does seem Newcomblike to me (like the transparent-boxes version, actually; which, I now realize, is like Kavka's toxin puzzle without the vagueness of "intent".) (ETA: assuming that Kate can reliably predict Joe, which I now see was the point under dispute to begin with.)

Comment author: JGWeissman 30 March 2010 06:43:54AM 0 points [-]

the problem does seem Newcomblike to me

Would you care to share your reasoning? What is your mapping of strategies, and does it pass my sanity check? (EDT two-boxes on the transparent-boxes variation.)

Comment author: Nick_Tarleton 31 March 2010 02:18:05AM *  0 points [-]

one-box <=> stay in marriage when unhappy
two-box <=> leave marriage when unhappy
precommit to one-boxing <=> precommit to staying in marriage

In both this problem and transparent-boxes Newcomb:

  • you don't take the action under discussion (take boxes, leave or not) until you know whether you've won
  • if you would counterfactually take one of the choices if you were to win, you'll lose
  • TDT and UDT win
  • CDT either precommits and wins or doesn't and loses, as described in my previous comment

(I'm assuming that Kate can reliably predict Joe. I didn't initially realize your objection might have more to do with that than the structure of the problem.)
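
The "if you would take both when you win, you'll lose" bullet can be seen in a toy simulation of the transparent-boxes game (my own formalization, assuming a perfect predictor who fills the big box iff the agent's policy would one-box on seeing it full):

```python
BIG, SMALL = 1_000_000, 1_000

def play(policy):
    """policy maps the visible big-box contents to 'one-box' or 'two-box'."""
    big_full = policy(BIG) == "one-box"  # predictor simulates the winning case
    contents = BIG if big_full else 0
    action = policy(contents)            # agent acts on what it actually sees
    return contents + (SMALL if action == "two-box" else 0)

resolute = lambda seen: "one-box"  # TDT/UDT-like: one-box no matter what
grabber = lambda seen: "two-box"   # CDT-like, absent precommitment

print(play(resolute))  # 1000000
print(play(grabber))   # 1000
```

The agent who would counterfactually grab both boxes on seeing the million never sees the million, matching the bullets above.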

Comment author: JGWeissman 29 March 2010 09:24:06PM -1 points [-]

Why is the parent comment being voted down, and its parent being voted up, when it correctly refutes the parent?

Why is the article itself being voted up, when it has been refuted? Are people so impressed by the idea of a real life Newcomb like problem that they don't notice, even when it is pointed out, that the described story is not in fact a Newcomb like problem?

Comment author: wedrifid 30 March 2010 07:51:50PM *  5 points [-]

Why is the article itself being voted up, when it has been refuted?

I voted it up because it is a good article. The claim "this situation is a problem of the class Newcomblike" has been refuted. If Academian had belligerently defended the 'It's Newcomblike' claim in response to correction, I would have reversed my upvote. As it stands, the discussion, both in the original post and in the comments, is useful. I expect it has helped clarify how the situation as it is formalized here differs from Newcomb's problem and what changes the scenario would need to actually be a Newcomblike problem. In fact, that is a follow-up post that I would like to see.

Are people so impressed by the idea of a real life Newcomb like problem that they don't notice, even when it is pointed out, that the described story is not in fact a Newcomb like problem?

Ease up. The "it's not actually Newcomblike" comments are being upvoted. People get it. It's just that sometimes correction is sufficient and a spiral of downvotes isn't desirable.

Comment author: JGWeissman 31 March 2010 01:32:53AM -2 points [-]

I voted it up because it is a good article.

It is an article in which poor thought leads to a wrong conclusion. I don't consider that "good".

If Academian had belligerently defended the 'It's Newcomblike' claim in response to correction I would have reversed my upvote.

I wouldn't say he was belligerent, but earlier in this thread he seemed to be Fighting a Rearguard Action Against the Truth, first saying, "it's a big open problem if some humans can precommit or not", and then saying the scenario still works if you replace certainties with high confidence levels, with those confidence levels also being unrealistic. I found "Self-modification is robust, pre-commitment is robust, its detection is robust... these phenomena really aren't going anywhere." to be particularly arrogant. He seems to have dropped out after I refuted those points.

My standard for changing this article from bad to sort of ok, would require an actual retraction of the wrong conclusion.

As it stands, the discussion, both in the original post and in the comments, is useful.

As it stands, someone can be led astray by reading just the article and not the comments.

The "it's not actually Newcomblike" comments are being upvoted. People get it.

Not as much as the article. And this comment, which refuted a wrong argument that the scenario really is Newcomb's problem, was at -2 at the time I asked that question.

It's just that sometimes correction is sufficient and a spiral of downvotes isn't desirable.

I am not saying everyone should vote it down so Academian loses so much karma he can never post another article. I think a small negative score is enough to make the point. A small positive score would be appropriate if he made a proper retraction. +27 is too high. I don't think articles should get over +5 without the main point actually being correct, and they should be incredibly thought-provoking to get that high.

I am also wary of making unsupportable claims that Newcomb's problem happens in real life, which can overshadow other reasons we consider such problems, so these other reasons are forgotten when the unsupportable claim is knocked down.

Comment author: pjeby 29 March 2010 09:58:50PM 4 points [-]

Are people so impressed by the idea of a real life Newcomb like problem that they don't notice, even when it is pointed out, that the described story is not in fact a Newcomb like problem?

That depends entirely on what characteristics you consider to be most "Newcomb like". From an emotional point of view, the situation is very "Newcomb like", even if the mathematics is different.

Comment author: JGWeissman 29 March 2010 10:16:41PM -1 points [-]

From an emotional point of view

This sounds like a fully general excuse to support any position. What is this emotional view? If the emotions disagree with the logical analysis, why aren't the emotions wrong? Correct emotions should be reactions to the actual state of reality.