FAWS comments on Newcomb's Problem: A problem for Causal Decision Theories - Less Wrong

8 [deleted] 16 August 2010 11:25AM

Comment author: FAWS 16 August 2010 03:07:04PM *  1 point [-]

She doesn't tell you in the scenario.

Maybe she had her grad students talk with you on various subjects and subject you to various stealth psychological experiments over the last 10 years, and watched it all on video. All of this could rest on your signing, 15 years ago, an agreement to take part in a psychological experiment that didn't specify a duration, an agreement that was followed by a dummy experiment and that you promptly forgot about.

Maybe she is secretly your mother.

Maybe she is just that good and could tell by the way you shook her hand.

In any case, 99% accuracy shouldn't require imagining the actions of a copy of you that is reflectively indistinguishable from you.

Comment author: Kingreaper 16 August 2010 03:09:48PM *  -2 points [-]

Those are all ways of her having gathered the evidence.

From the evidence, how has she reached the conclusion?

The most plausible scenario for getting from evidence to conclusion is mental simulation as far as I can tell.

You haven't even proposed a single alternative yet.

EDIT: (did you edit this in, or did I miss it?)

> In any case, 99% accuracy shouldn't require imagining the actions of a copy of you that is reflectively indistinguishable from you.

You expect the copy to be able to tell it's a copy? Why? Why would the psychologist simulate it discovering that it is the copy? When you simulate someone's reaction to possible courses of action, do you simulate them as being aware of being a simulation?

None of my internal simulations have ever been aware of being simulations.

Comment author: FAWS 16 August 2010 03:41:02PM *  1 point [-]

> In any case, 99% accuracy shouldn't require imagining the actions of a copy of you that is reflectively indistinguishable from you.

> You expect the copy to be able to tell it's a copy? Why? Why would the psychologist simulate it discovering that it is the copy? When you simulate someone's reaction to possible courses of action, do you simulate them as being aware of being a simulation?

> None of my internal simulations have ever been aware of being simulations.

There are four possibilities:

  1. The copy never wonders whether it's a copy.
  2. The copy wonders about being a copy and concludes that it is.
  3. The copy concludes that it cannot be a copy.
  4. The copy is, from its point of view, reflectively indistinguishable from you.

Only in case 4 will you seriously have to wonder whether you are a copy. In case 1 you will know that you are not as soon as you consider the possibility; case 2 is irrelevant unless you also assume that the real you will also conclude that it's a copy, which is logically inconsistent.

Nevertheless, case 1 should be sufficient for predicting, to reasonable accuracy, the actions you take once you conclude that you are not a copy.

Comment author: Kingreaper 16 August 2010 03:56:57PM 0 points [-]

Case 1 is sufficient to predict my actions IFF I would never wonder about whether I was a copy.

Given that I would in fact wonder whether I was a copy, and that that thought-process is significant to the scenario, Case 1 seems likely to be woefully inadequate for simulating me.

Case 4 is therefore much more plausible for a genius psychologist (with 99% accuracy) from my PoV.

Comment author: FAWS 16 August 2010 04:08:11PM *  0 points [-]

The psychologist tells you that she simply isn't capable of case 4: there are all sorts of at least somewhat verifiable facts that you would expect yourself to know and that she doesn't (e.g. details about your job that have to make sense and be consistent with a whole web of other details, and that she couldn't plausibly have spied out or invented a convincing equivalent of herself). Given that you just wondered, you can't be a simulation. What do you do?

Comment author: Kingreaper 16 August 2010 04:33:51PM *  1 point [-]

I know she's lying.

Case 4 just requires that the simulation not recognise that it is a simulation when it considers whether or not it's a simulation, i.e. that whatever question it asks itself, it finds an answer. It can't actually check for consistency; remember, it's a simulation. If it would find an inconsistency, the psychologist can "change detail [removing inconsistency], run" or "insert thought 'yep, that's all consistent'; run".

If she's capable of case 1, she's capable of case 4, even if she has to insert the memory when it is requested rather than prior to the request.

Comment author: FAWS 16 August 2010 03:26:24PM *  1 point [-]

The stealth psychological experiments could have included an isomorphic problem, or she could be using a more sophisticated version of

  • New ager: one-box
  • Thinks time travel conflicts with free will: two-box
  • Uses EDT: one-box
  • Uses TDT/UDT: one-box
  • Uses bog-standard CDT: two-box
  • Uses CDT, but takes the simulation hypothesis seriously: one-box if they think it possible that they are in a simulation, two-box otherwise.
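A categorisation scheme like this amounts to a lookup table from a subject's beliefs to a predicted choice. As a minimal sketch (the category labels and the `thinks_in_simulation` flag are illustrative inventions, not anything specified in the thread):

```python
# Hypothetical sketch of a categorisation-based predictor: the psychologist
# maps a subject's decision-theoretic beliefs to a predicted choice.
# Category names and the thinks_in_simulation flag are illustrative only.

def predict_choice(category, thinks_in_simulation=False):
    """Predict 'one-box' or 'two-box' from a subject's category."""
    table = {
        "new ager": "one-box",
        "time travel conflicts with free will": "two-box",
        "EDT": "one-box",
        "TDT/UDT": "one-box",
        "standard CDT": "two-box",
    }
    if category == "CDT + simulation hypothesis":
        # This subject one-boxes only if they take seriously the
        # possibility that they are currently inside a simulation.
        return "one-box" if thinks_in_simulation else "two-box"
    return table[category]

print(predict_choice("TDT/UDT"))
print(predict_choice("CDT + simulation hypothesis", thinks_in_simulation=False))
```

The harder step, of course, is the classification itself: mapping the stealth-experiment evidence to a category in the first place.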

Stealth psychological experiments you forgot about allowed her to determine necessary and/or sufficient conditions for your assuming that you might be in a simulation, conditions that you yourself are unaware of, and she set the whole thing up in such a way that she can tell with high confidence whether you do so.

Comment author: Kingreaper 16 August 2010 04:01:28PM 0 points [-]

The categorisation possibility is reasonable. Personally I would estimate the probability of 99% accuracy achieved through categorisation lower than the probability of 99% accuracy achieved through mental simulation, but it's certainly a competitive hypothesis.

Comment author: FAWS 16 August 2010 04:33:44PM 0 points [-]

Assuming she tells you that she predicted your actions through some unspecified mechanism other than imagining your thought process in sufficient detail for the imagined version to ask itself whether it just exists in her imagination, what do you do?

Comment author: Kingreaper 16 August 2010 04:42:59PM 1 point [-]

I question what reason I have to assume she's being honest, and is in fact correct.

Given her psychological genius she is likely correct about the methods she used, although not certainly (she may not be good at self-analysis).

If I conclude that either (a) she is being honest or (b) the whole pay-off is a lie, then I will probably act on the second most plausible (to my mind) scenario. I've yet to work out what that is. Repeating the experiment often enough to get statistics precise enough for 99% accuracy would be extremely costly with the standard pay-out scheme; so while I jumped to that as my secondary scenario, it's actually very implausible.

Comment author: FAWS 16 August 2010 05:11:41PM 0 points [-]

Reduce both payoffs by a factor of 100.

The psychologist is hooked up to a revolutionary lie detector that is 99% reliable; there is a standing prize of $1,000,000,000 for anyone who can, after calibration, deceive it on more than 10 out of 50 statements (with no further calibration during the trial). The psychologist is known to have tried the test three times and failed (with 1, 4, and 3 successful deceptions).
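As a rough consistency check on those numbers, one can model each trial as 50 independent statements with a 1% chance each that a deception slips through (both the independence and this reading of "99% reliable" are my assumptions, not stated in the thread):

```python
from math import comb

# Binomial model of one lie-detector trial: n statements, each with
# probability p of a successful deception. The independence assumption
# and the interpretation of "99% reliable" are mine, not the thread's.
n, p = 50, 0.01

def pmf(k):
    """Probability of exactly k successful deceptions out of n statements."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance of matching or exceeding the psychologist's best run (4 of 50),
# and of winning the prize (more than 10 of 50).
p_at_least_4 = 1 - sum(pmf(k) for k in range(4))
p_win_prize = 1 - sum(pmf(k) for k in range(11))

print(f"P(>=4 deceptions in one trial) = {p_at_least_4:.4f}")
print(f"P(>10 deceptions in one trial) = {p_win_prize:.2e}")
```

Under these assumptions the two printed tail probabilities show how likely a single trial is to reach the psychologist's best recorded run, and to beat the prize threshold.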

Comment author: Kingreaper 16 August 2010 05:53:31PM *  1 point [-]

Well, the psychologist's track record of successful lying is within a plausible range of the 99% reliability.

With the payoffs decreased by a factor of 100, and the lie detector added in, my best guess would be that she's repeated the experiment often, and gathered up a statistical model of people to which she can compare me, and to which I will be added. In such a circumstance I think I would still tend to one-box, but the reason is slightly different.

I value the wellbeing of people who are like me. If I one-box, others like me will be more likely to receive the $10,000 rather than just the $10.

Comment author: FAWS 16 August 2010 06:19:50PM 0 points [-]

Are you sure you are actually trying to make a valid defense of CDT and not just looking for excuses?

What would you do if that somehow were not a consideration? (What would you do if you were more selfish? What would an otherwise identical but more selfish simulation of you do? What would you do if you could be reasonably sure that you won't affect the payoff for anyone else you care about, for some reason that doesn't change your estimate of the accuracy of the prediction or the way it came about, e.g. you are the last subject, and everyone before you for whom it would matter was asked what they would have done if they had been the last subject?)

Comment author: Kingreaper 16 August 2010 07:31:46PM *  0 points [-]

Are you sure you're not just trying to destroy CDT rather than think rationally? If you think I am being irrationally defensive of CDT, check the OTHER thread branching off my first reply. You seem to be trying very hard indeed to tear down CDT.

CDT gives the correct result in the original posted scenario, for reasons which are not immediately obvious but are nonetheless present. You appear to have accepted that, what with your gradually moving further and further from the original scenario.

In your scenario, designed specifically to make CDT not work, it would still work for me, because of who I am.

If I were more selfish, I don't see CDT working in your scenario. If there is a reason why it should work, I haven't realised it. But then, it's a scenario contrived with the specific intention of making CDT not work.

Your "everyone was the last subject" scenario breaks down somewhat; if everyone is told they are the last subject then I can't take being told that I'm the last subject seriously. If I AM the last subject, I will be extremely skeptical, given the sample size I expect to be needed for the 99% accuracy, and thus I will tend to behave as though I am not the last subject, due to not believing I am the last subject.

My original point was simply that the starting post, while claiming to show problems with CDT, failed. It used a scenario that didn't illustrate any problem with CDT. Do you still disagree with my original point?

EDIT: You seem to think that I'm doing my best to defend CDT. I'm really not; I have no major vested interest in defending CDT except when it is unfairly attacked. Adambell has posted two scenarios where CDT works fine, with claims that CDT doesn't work in those scenarios.