FAWS comments on Newcomb's Problem: A problem for Causal Decision Theories - Less Wrong
Those are all ways of her having gathered the evidence.
From the evidence, how has she reached the conclusion?
The most plausible scenario for getting from evidence to conclusion is mental simulation, as far as I can tell.
You haven't even proposed a single alternative yet.
EDIT: (did you edit this in, or did I miss it?)
You expect the copy to be able to tell it's a copy? Why? Why would the psychologist simulate it discovering that it is the copy? When you simulate someone's reaction to possible courses of action, do you simulate them as being aware of being a simulation?
None of my internal simulations have ever been aware of being simulations.
There are four possibilities:
Only in case 4 will you seriously have to wonder whether you are a copy. In case 1 you will know that you are not a copy as soon as you consider the possibility. Case 2 is irrelevant unless you also assume that the real you will conclude that it's a copy, which is logically inconsistent.
Nevertheless, case 1 should be sufficient for predicting, to a reasonable accuracy, the actions you take once you conclude that you are not a copy.
Case 1 is sufficient to predict my actions IFF I would never wonder whether I was a copy.
Given that I would in fact wonder whether I was a copy, and that that thought-process is significant to the scenario, Case 1 seems likely to be woefully inadequate for simulating me.
Case 4 is therefore much more plausible for a genius psychologist (with 99% accuracy) from my PoV.
The psychologist tells you that she simply isn't capable of case 4: there are all sorts of at least somewhat verifiable facts that you would expect yourself to know and that she doesn't (e.g. details about your job that have to make sense and be consistent with a whole web of other details, and that she couldn't plausibly have spied out or invented a convincing equivalent of herself). Given all that, you have just concluded that you can't be a simulation. What do you do?
I know she's lying.
Case 4 just requires that the simulation not recognise that it is a simulation when it considers whether or not it's a simulation, i.e. that whatever question it asks itself, it finds an answer. It can't actually check for consistency; remember, it's a simulation. If it would find an inconsistency, the psychologist just applies "change detail [removing inconsistency], run" or "insert thought 'yep, that's all consistent'; run".
If she's capable of case 1, she's capable of case 4, even if she has to insert the memory at the moment it's requested rather than ahead of time.
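The "patch details on demand" idea above can be sketched as a toy program. This is purely illustrative (the class name, the stored details, and the invented-value scheme are all my own hypothetical choices, not anything from the thread): the simulated mind never catches an inconsistency because any detail it queries is invented the moment it asks, and its "consistency check" is simply made to succeed.

```python
import random

class LazySimulation:
    """Toy model of case 4: details are generated only when queried,
    so the simulated mind can never find a missing or inconsistent one."""

    def __init__(self, known_details):
        # Details the psychologist actually observed about the subject.
        self.details = dict(known_details)

    def query(self, key):
        # If the simulator never learned this detail, invent a plausible
        # one on demand ("change detail, run"); the simulated mind cannot
        # tell it was just made up, and it stays stable once invented.
        if key not in self.details:
            self.details[key] = f"plausible value #{random.randrange(1000)}"
        return self.details[key]

    def check_consistency(self, key_a, key_b):
        # The simulation's own consistency check always succeeds:
        # "insert thought 'yep, that's all consistent'; run".
        return True

sim = LazySimulation({"job": "accountant"})
print(sim.query("job"))          # a detail the psychologist really knew
print(sim.query("boss's name"))  # invented the instant it's asked for
print(sim.check_consistency("job", "boss's name"))
```

The design point is that nothing has to be pre-computed: the simulator only needs enough machinery to answer each question as it arises, which is why capability for case 1 plausibly carries over to case 4.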