You said "truth = opinion", but to defend that you don't ask people to show you something true that isn't a matter of opinion; you ask them to "give you a statement that does not resolve to opinion".
That's false reasoning. You didn't originally say "all true statements are produced by people's opinions", which is trivially true under some definition of "opinion", since every statement a person can make is by necessity produced by their mind.
But if, e.g., you get in an accident and lose your leg, nobody will have offered you an opinion, yet it will nonetheless be true that you are missing a leg. If you then say it's only a matter of opinion that you've lost your leg, I direct you to the well-known Monty Python sketch....
Your failure seems to arise from a very basic confusion between map and territory: you think that because statements about reality derive from opinion, reality itself must derive from opinion. That doesn't follow at all. In truth: F(x) → y, and Mind(Reality) → "Statements about Reality". You haven't disproved the existence of x just by showing that every y can be mapped from some x through a function F.
I don't really think Newcomb's problem or any of its variations belongs here. Newcomb's problem is not a decision theory problem; the real difficulty is translating the underspecified English into a payoff matrix.
The ambiguity comes from the combination of two claims: (a) Omega is a perfect predictor, and (b) the subject is allowed to choose after Omega has made its prediction. Either these two are inconsistent, or they necessitate further unstated assumptions such as backwards causality.
First, let us assume (a) but not (b), which can be formulated as follows: Omega, a computer engineer, can read your code and test-run it as many times as he likes in advance. You must submit (simple, unobfuscated) code that either one-boxes or two-boxes. The contents of the boxes will depend on Omega's prediction of your code's choice. Do you submit one-boxing or two-boxing code?
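This reading can be sketched directly as code. The function names and dollar amounts below are my own illustrative choices, not part of the original problem statement:

```python
# A minimal sketch of reading (a): Omega test-runs the submitted code
# before filling the boxes. Names and dollar amounts are illustrative.

def run_game(submitted_policy):
    """Omega dry-runs the policy, fills the boxes, then the policy runs for real."""
    prediction = submitted_policy()  # Omega's test run of your code
    opaque = 1_000_000 if prediction == "one-box" else 0
    choice = submitted_policy()      # the real run: same code, same output
    transparent = 1_000 if choice == "two-box" else 0
    return opaque + transparent

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"
```

Because the same deterministic code runs both times, prediction and choice cannot come apart: `run_game(one_boxer)` pays $1,000,000 while `run_game(two_boxer)` pays $1,000, so one-boxing code is the right submission under this reading.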
Second, let us assume (b) but not (a), which can be formulated as follows: Omega has subjected you to the Newcomb setup, but because of a bug in its code, its prediction is based on someone else's choice, which has no correlation with yours whatsoever. Do you one-box or two-box?
Both of these formulations translate straightforwardly into payoff matrices, and any sensible decision theory you throw at them gives the correct solution. The paradox disappears once the ambiguity between the two possibilities above is removed. As far as I can see, all disagreement between one-boxers and two-boxers is simply a matter of one-boxers choosing the first interpretation and two-boxers the second. If so, Newcomb's paradox is not so much interesting as poorly specified. The supposed superiority of TDT over CDT relies either on the paradox not reducing to either of the above, or on forcing CDT, by fiat, to work with the wrong payoff matrices.
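The two payoff matrices can be written out explicitly. This is only a sketch of the reduction described above, with illustrative amounts:

```python
# Payoff matrices for the two readings above (dollar amounts illustrative).

def payoff(choice, prediction):
    """Money received given the subject's choice and Omega's prediction."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000 if choice == "two-box" else 0
    return opaque + transparent

CHOICES = ("one-box", "two-box")

# Reading (a): the prediction always matches the choice, so each choice
# has a single payoff, and one-boxing clearly wins.
reading_a = {c: payoff(c, c) for c in CHOICES}

# Reading (b): the prediction is fixed independently of the choice, so
# we compare the choices row by row; two-boxing adds $1,000 in every row.
reading_b = {p: {c: payoff(c, p) for c in CHOICES} for p in CHOICES}
```

Under (a), `reading_a` pays $1,000,000 for one-boxing and $1,000 for two-boxing; under (b), two-boxing strictly dominates in both rows of `reading_b`. Each matrix taken alone is unambiguous, and any decision theory applied to it gives the unsurprising answer.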
I would be interested to see an unambiguous and nontrivial formulation of the paradox.
Some quick and messy addenda:
I agree; wherever there is paradox and endless debate, I have always found ambiguity in the initial posing of the question. An unorthodox mathematician, Norman Wildberger, just released a new solution that unambiguously specifies what we know about Omega's predictive powers.