Kingreaper comments on Newcomb's Problem: A problem for Causal Decision Theories - Less Wrong

Comment author: Kingreaper, 16 August 2010 08:07:47PM (edited), 1 point

> No, he doesn't (necessarily). He could prove the inevitable outcome based on elements of the known state of your brain without ever simulating anything. If you read "reduction of could" you will find a somewhat similar distinction that may make things clearer.
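To make the quoted distinction concrete, here is a minimal Python sketch; the Agent class, its disposition field, and both predictor functions are illustrative assumptions, not anything from the thread. It contrasts a predictor that actually runs the agent's decision procedure with one that reads the outcome off the agent's known state without executing anything.

```python
# A toy contrast between two ways a predictor might get an agent's
# answer: (1) simulate -- actually run the decision procedure, or
# (2) "prove" -- derive the outcome from the agent's known state
# without executing it. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Agent:
    disposition: str  # the known state, e.g. "one-box" or "two-box"

    def decide(self) -> str:
        """The agent's actual decision procedure."""
        return self.disposition

def predict_by_simulation(agent: Agent) -> str:
    # Runs the agent: the prediction process contains the decision process.
    return agent.decide()

def predict_by_proof(agent: Agent) -> str:
    # Reads the outcome off the known state without ever calling decide().
    # A real predictor would need nontrivial inference; the "proof" is
    # trivial here only because the disposition is explicit.
    return agent.disposition

agent = Agent(disposition="one-box")
assert predict_by_simulation(agent) == predict_by_proof(agent) == "one-box"
```

Both predictors agree on the answer, but only the first one contains a run of the decision process, which is the distinction at issue below.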

Does he not know the answer to "what will happen after this" at every point in the scenario?

If he doesn't, is he all-knowing?

If he does know the answer at every point, in what way doesn't he contain the entire scenario?

EDIT: A non-all-knowing superintelligence could presumably find ways other than simulation of getting my answer; as I said, simulation just strikes me as the most probable. If you think I should update my probability estimate of the other methods, that's a perfectly reasonable objection to my logic re: a non-all-knowing superintelligence.
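As a minimal sketch of the kind of update being invited here, with wholly made-up numbers: the prior over Omega's prediction method and the likelihood of some hypothetical evidence E are assumptions, not anything from the thread.

```python
# Bayes update over how Omega gets the answer, with made-up numbers.
# P(method) is an assumed prior; P(E | method) is an assumed likelihood
# for some hypothetical observation E.

prior = {"simulation": 0.7, "other method": 0.3}
likelihood = {"simulation": 0.2, "other method": 0.6}  # P(E | method)

p_evidence = sum(prior[m] * likelihood[m] for m in prior)
posterior = {m: prior[m] * likelihood[m] / p_evidence for m in prior}
print(posterior)  # {'simulation': 0.4375, 'other method': 0.5625}
```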

Comment author: wedrifid, 16 August 2010 08:33:03PM, 0 points

> EDIT: A non-all-knowing superintelligence could presumably find ways other than simulation of getting my answer; as I said, simulation just strikes me as the most probable.

Certainly. That is what I consider Omega to be doing when I think about these problems. It is a useful intuition pump, something we can get our heads around.