FAWS comments on Newcomb's Problem: A problem for Causal Decision Theories - Less Wrong
Sigh.
You are missing the point.
Replace Omega with a genius psychologist who only gets it right 99% of the time, and CDT will have you walk off with $1,000 while correct thinking leaves you with $1,000,000 almost all of the time. It's just that in that scenario people will uselessly argue that the 1% chance of getting lucky somehow makes it rational.
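For concreteness, here is how the expected values work out under a 99%-accurate predictor (a quick sketch; the payoff amounts are the standard ones from the thought experiment):

```python
# Payoffs from the standard Newcomb setup, with a 99%-accurate predictor.
ACCURACY = 0.99
BIG = 1_000_000   # opaque box, filled only if one-boxing was predicted
SMALL = 1_000     # transparent box, always present

# One-boxing: with probability 0.99 the predictor foresaw it and filled the box.
ev_one_box = ACCURACY * BIG

# Two-boxing: you always get the small box, plus the big one only on the
# 1% of runs where the predictor wrongly expected you to one-box.
ev_two_box = SMALL + (1 - ACCURACY) * BIG

print(f"one-box: ${ev_one_box:,.0f}")   # ≈ $990,000
print(f"two-box: ${ev_two_box:,.0f}")   # ≈ $11,000
```

The 1% chance of getting lucky is exactly the term the two-boxer points to, but it only raises the two-box expectation to about $11,000, nowhere near the roughly $990,000 from one-boxing.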
How is the genius psychologist likely to be predicting your actions?
To me, it seems probable that he's simulating you, imperfectly, within his own mind.
How would you explain his methodology?
EDIT: to clarify my reasoning, I simulate people, myself included, often. Generally when I want to predict their actions. I'm not very good at it. Were I a genius psychologist, and hence obviously great at simulating people, I don't see why I would be any less likely to simulate people.
She doesn't tell you in the scenario.
Maybe she had her grad students talk with you on various subjects and subject you to various stealth psychological experiments over the last 10 years, and watched it all on video. All of this based on your signing, 15 years ago, an agreement to take part in a psychological experiment of unspecified duration, which was followed by a dummy experiment and which you promptly forgot about.
Maybe she is secretly your mother.
Maybe she is just that good and can tell by the way you shook her hand.
In any case, 99% shouldn't require imagining the actions of a copy of you that is reflectively indistinguishable from you.
Those are all ways of her having gathered the evidence.
From the evidence, how has she reached the conclusion?
The most plausible scenario for getting from evidence to conclusion is mental simulation as far as I can tell.
You haven't even proposed a single alternative yet.
EDIT: (did you edit this in, or did I miss it?)
You expect the copy to be able to tell it's a copy? Why? Why would the psychologist simulate it discovering that it is the copy? When you simulate someone's reaction to possible courses of action, do you simulate them as being aware of being a simulation?
None of my internal simulations have ever been aware of being simulations.
There are four possibilities:
Only in case 4 will you seriously have to wonder whether you are a copy. In case 1 you will know that you are not as soon as you consider the possibility, and case 2 is irrelevant unless you assume that the real you will also conclude that it's a copy, which is logically inconsistent.
Nevertheless, case 1 should be sufficient for predicting, to a reasonable accuracy, the actions you take once you conclude that you are not a copy.
Case 1 is sufficient to predict my actions IFF I would never wonder about whether I was a copy.
Given that I would in fact wonder whether I was a copy, and that that thought-process is significant to the scenario, Case 1 seems likely to be woefully inadequate for simulating me.
Case 4 is therefore much more plausible for a genius psychologist (with 99% accuracy) from my PoV.
The psychologist tells you that she simply isn't capable of case 4: there are all sorts of at least somewhat verifiable facts that you would expect yourself to know and that she doesn't (e.g. details about your job that have to make sense and be consistent with a whole web of other details, and that she couldn't plausibly have spied out or invented a convincing equivalent of herself). Given that you just wondered, you can't be a simulation. What do you do?
I know she's lying.
Case 4 just requires that the simulation not recognise that it is a simulation when it considers whether or not it's a simulation, i.e. that whatever question it asks itself, it finds an answer. It can't actually check for consistency; remember, it's a simulation. If it would find an inconsistency: "change detail [removing inconsistency], run" or "insert thought 'yep, that's all consistent'; run".
If she's capable of case 1, she's capable of case 4, even if she has to insert the memory on it being requested, rather than prior to request.
The stealth psychological experiments could have included an isomorphic problem, or she could be using a more sophisticated version of:
Stealth psychological experiments you forgot about allowed her to determine necessary and/or sufficient conditions, themselves unknown to you, for your assuming that you might be in a simulation, and she set the whole thing up in such a way that she can tell with high confidence whether you do.
The categorisation possibility is reasonable. Personally I would estimate the probability of 99% accuracy achieved through categorisation lower than the probability of 99% accuracy achieved through mental simulation, but it's certainly a competitive hypothesis.
Assuming she tells you that she predicted your actions through some unspecified mechanism other than imagining your thought process in sufficient detail for the imagined version to ask itself whether it just exists in her imagination, what do you do?
I question what reason I have to assume she's being honest, and is in fact correct.
Given her psychological genius she is likely correct about the methods she used, although not certainly (she may not be good at self-analysis).
If I conclude that either A) she is being honest or B) the whole pay-off is a lie, then I will probably act on the second most plausible (to my mind) scenario. I've yet to work out what that is. Repeating the experiment often enough to get statistics precise enough for 99% accuracy would be extremely costly with the standard pay-out scheme; so while I jumped towards that as my secondary scenario, it's actually very implausible.
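The cost point can be made concrete with back-of-the-envelope numbers (every figure below is an illustrative assumption except the standard $1,000,000 payout):

```python
# Back-of-the-envelope cost of validating ~99% accuracy by sheer repetition.
payout = 1_000_000   # standard big-box payout from the scenario
trials = 300         # assumption: a few hundred trials to pin down ~99% accuracy
one_box_rate = 0.9   # assumption: fraction of subjects who one-box and, being
                     # correctly predicted, actually collect the payout

cost = trials * one_box_rate * payout
print(f"${cost:,.0f}")   # ≈ $270,000,000
```

Even a few hundred calibration runs would cost on the order of hundreds of millions of dollars under the standard pay-out scheme, which is why the statistics-gathering explanation looks so implausible.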
Reduce both payoffs by a factor of 100.
The psychologist is hooked up to a revolutionary lie detector that is 99% reliable; there is a standing prize of $1,000,000,000 for anyone who can, after calibration, deceive it on more than 10 out of 50 statements (with no further calibration during the trial). The psychologist is known to have tried the test three times and failed (with 1, 4, and 3 successful deceptions).
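Under one simple reading of "99% reliable", namely that each attempted deception is independently caught with probability 0.99, the trial outcomes can be checked against a binomial model (the independence assumption is mine, not part of the scenario):

```python
from math import comb

# Model (an assumption): each lie slips past the detector independently
# with probability p = 0.01; a trial consists of n = 50 statements.
p, n = 0.01, 50

def tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): at least k successful deceptions."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(n * p)           # expected deceptions per 50-statement trial: 0.5
print(tail(11, n, p))  # chance of beating the prize threshold (>10 of 50)
print(tail(4, n, p))   # chance of at least matching the best observed run (4 of 50)
```

Under this strict per-statement model the expected number of deceptions is only 0.5 per trial and the prize threshold of more than 10 is effectively unreachable; how well the observed runs of 1, 4, and 3 fit depends on how loosely "99% reliable" is read.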
Well, the psychologist's track record of successful lying is within a plausible range for 99% reliability.
With the payoffs decreased by a factor of 100, and the lie detector added in, my best guess would be that she's repeated the experiment often, and gathered up a statistical model of people to which she can compare me, and to which I will be added. In such a circumstance I think I would still tend to one-box, but the reason is slightly different.
I value the wellbeing of people who are like me. If I one-box, others like me will be more likely to receive the $10,000 rather than just the $10.