wedrifid comments on Newcomb's Problem: A problem for Causal Decision Theories - Less Wrong


Comment author: Kingreaper 16 August 2010 02:34:30PM 0 points

> We could do a modified Newcomb's Problem where the perfectly honest, all-knowing Omega tells you that you're not the simulation but the actual person, and that the simulation has already been done, which seems to resolve that possibility discussed above.

An all-knowing Omega by definition contains a simulation of this exact scenario. And in that simulation they aren't being perfectly honest, but the simulated me still believes they are.

If Omega is in fact all-knowing, all possible scenarios exist in simulation within its infinite knowledge.

This is why throwing all-knowing entities into problems always buggers things up.
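
One way to make that regress concrete, assuming (purely for illustration) that Omega predicts by naively simulating the whole scenario, itself included:

```python
# If the scenario being simulated itself contains Omega simulating the
# scenario, a naive prediction-by-simulation never bottoms out.

import sys
sys.setrecursionlimit(100)  # keep the demonstration short

def simulate_scenario():
    # The scenario includes Omega's prediction, which (on the simulation
    # account) is itself another full run of the scenario.
    omega_prediction = simulate_scenario()
    return omega_prediction

try:
    simulate_scenario()
except RecursionError:
    print("the simulation never bottoms out")
```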

> I feel that finding a way for CDT to answer Newcomb's Problem via the specifics of the way Omega predicts your reactions is a similar response: trying to respecify the argument in such a way that an answer can be found, rather than looking at the abstracted conception of the argument.

Given the abstracted conception, prediction through simulation seems to be the most probable explanation of how Omega predicts. This results in CDT working.
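
A minimal sketch of that reading, assuming the standard $1,000 / $1,000,000 payoffs and an Omega that predicts by literally running the agent's decision procedure (the function names are illustrative, not from the thread):

```python
# Toy Newcomb's problem in which Omega predicts by literally running
# the agent's decision procedure. Payoffs are the standard ones.

def omega_fills_box_b(agent):
    """Omega simulates the agent; box B gets $1,000,000
    only if the simulated agent one-boxes."""
    return 1_000_000 if agent() == "one-box" else 0

def play(agent):
    box_b = omega_fills_box_b(agent)  # contents fixed before the choice
    box_a = 1_000                     # the transparent box
    choice = agent()                  # the "real" decision
    return box_b if choice == "one-box" else box_a + box_b

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

Because the very same procedure runs inside Omega's prediction and at the moment of choice, the choice effectively fixes the contents of the opaque box, which is the sense in which CDT "works" here.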

It's not starting from wanting CDT to work; it's starting from examining the problem, working out the situation from the evidence, and then working out what CDT would say to do.

If I can't apply reason when using CDT, CDT will fail when I'm presented with an "opportunity" to buy a magic rock that costs £10,000 and will supposedly make me win the lottery within a month.
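
The contrast with Newcomb's Problem is easy to make explicit; here is a toy expected-value check with made-up lottery odds, where the only assumption is that buying the rock is causally independent of the draw:

```python
# The rock purchase is causally independent of the lottery draw, so the
# two expected values differ only by the price. Odds and jackpot are
# made up for illustration.

P_WIN = 1e-7            # hypothetical chance of winning within a month
JACKPOT = 5_000_000     # hypothetical jackpot in pounds
ROCK_PRICE = 10_000

ev_buy = P_WIN * JACKPOT - ROCK_PRICE   # same P_WIN: the rock changes nothing
ev_skip = P_WIN * JACKPOT

print(ev_buy < ev_skip)  # True: CDT correctly refuses the rock
```

Nothing about the purchase changes the draw, so buying is worse by exactly the rock's price, and CDT correctly says no.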

Comment author: wedrifid 16 August 2010 08:05:30PM 1 point

> An all-knowing Omega by definition contains a simulation of this exact scenario.

No, he doesn't (necessarily). He could prove the inevitable outcome based on elements of the known state of your brain without ever simulating anything. If you read the reduction of "could" you will find a somewhat similar distinction that may make things clearer.
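
A toy contrast between the two kinds of prediction, assuming for illustration that the choice is a simple function of a readable disposition (all names here are hypothetical):

```python
# Toy contrast: predicting by running the agent versus predicting from a
# readable fact about the agent, with no simulation at all. 'disposition'
# stands in for the known state of the brain.

def deliberate(disposition):
    """The agent's full decision process (arbitrarily expensive in general)."""
    return "one-box" if disposition == "cooperative" else "two-box"

def predict_by_simulation(disposition):
    return deliberate(disposition)  # actually runs the deliberation

def predict_by_proof(disposition):
    # Omega has proved a theorem: cooperative brains one-box.
    # The deliberation itself is never executed.
    return "one-box" if disposition == "cooperative" else "two-box"

for d in ("cooperative", "defiant"):
    assert predict_by_simulation(d) == predict_by_proof(d)
```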

> And in that simulation they aren't being perfectly honest, but the simulated me still believes they are.

... So we can't conclude this.

> If I can't apply reason when using CDT, CDT will fail when I'm presented with an "opportunity" to buy a magic rock that costs £10,000 and will supposedly make me win the lottery within a month.

This suggests you don't really understand the problem (or perhaps CDT). That is not the same kind of reasoning.

Comment author: Kingreaper 16 August 2010 08:07:47PM 1 point

> No, he doesn't (necessarily). He could prove the inevitable outcome based on elements of the known state of your brain without ever simulating anything. If you read the reduction of "could" you will find a somewhat similar distinction that may make things clearer.

Does he not know the answer to "what will happen after this" with regard to every point in the scenario?

If he doesn't, is he all-knowing?

If he does know the answer at every point, in what way doesn't he contain the entire scenario?

EDIT: A non-all-knowing superintelligence could presumably find ways other than simulation to get my answer; as I said, simulation just strikes me as the most probable. If you think I should update my probability estimate of the other methods, that's a perfectly reasonable objection to my logic re: a non-all-knowing superintelligence.

Comment author: wedrifid 16 August 2010 08:33:03PM 0 points

> EDIT: A non-all-knowing superintelligence could presumably find ways other than simulation to get my answer; as I said, simulation just strikes me as the most probable.

Certainly. That is what I consider Omega to be doing when I think about these problems. It is a useful intuition pump, something we can get our heads around.