wedrifid comments on The Presumptuous Philosopher's Presumptuous Friend - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
By a trivial argument (of the kind employed in algorithm-complexity analysis and cryptography), since you can simply toss a coin or do the mental equivalent, any guaranteed prediction probability nontrivially above 0.5, even by a ridiculously small margin, is impossible to achieve. Probability against a random human is entirely irrelevant; what Omega must achieve is probability nontrivially above 0.5 against the most uncooperative human, since you can choose to be maximally uncooperative if you wish.
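The coin-toss argument can be sketched as a toy simulation (names and the fixed-guess predictor are illustrative assumptions, not anything from the original problem): against an agent that decides by fair coin flip, any fixed prediction strategy lands near 50% accuracy.

```python
import random

def coin_flip_agent():
    # A maximally uncooperative agent: decide by fair coin flip.
    return random.randrange(2)

def predictor():
    # Any fixed strategy, however clever, faces 50/50 odds against
    # a source of genuine randomness; here it always guesses 1.
    return 1

trials = 100_000
hits = sum(predictor() == coin_flip_agent() for _ in range(trials))
print(hits / trials)  # hovers around 0.5
```

With 100,000 trials the observed accuracy stays within a fraction of a percent of 0.5, illustrating why no guaranteed margin above 0.5 is available against this agent.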
If we force determinism (which is already cheating), disable free will (i.e., the ability to freely choose our answer only at the point where we must answer), and let Omega see our brain, it effectively means that we must decide before Omega does, and must tell Omega what we decided. This reverses the causality and collapses the problem into: "Choose 1 or 2 boxes. Based on your decision, Omega chooses what to put in them."
From the linked Wikipedia article:
That's basically it. It's ill-defined, and any serious formalization collapses it into one of two trivial problems: either "you choose first, so one-box" or "Omega chooses first, so two-box".
Yes. Omega chooses first. That's Newcomb's. The other one isn't.
It seems that the fact that both my decision and Omega's decision are determined (quantum effects acknowledged) by the earlier state of the universe utterly bamboozles your decision theory. Since that is in fact how this universe works, your decision theory is broken. It is foolish to label a problem 'ill-defined' simply because your decision theory can't handle it.
The current state of my brain influences both the decisions I will make in the future and the decisions other agents can make based on what they can infer about me from their observations. This means that intelligent agents will be able to predict my decisions better than a coin flip. In the case of superintelligences, they can do a lot better than 0.5.
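The point about prediction from brain state can be made concrete with a toy sketch (the state representation and function names are illustrative assumptions): if the agent's decision is a deterministic function of its state, a predictor that observes the state can simply run the same computation and match the agent exactly.

```python
def decide(brain_state):
    # The agent's decision is a deterministic function of its state.
    return sum(brain_state) % 2

def omega_predict(observed_state):
    # A predictor with full access to the state runs the same
    # computation, so its prediction always matches the decision.
    return sum(observed_state) % 2

states = [(1, 2, 3), (4, 5), (0,), (7, 7, 7, 7)]
accuracy = sum(omega_predict(s) == decide(s) for s in states) / len(states)
print(accuracy)  # 1.0
```

Partial observation or noise would push the accuracy below 1.0, but any correlation between state and decision still lets a predictor beat 0.5.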
Just how much money does Omega need to put in the box before you are willing to discard 'Serious' and take the cash?