This only shows that the model is no good, because the model does not respect the assumptions of the decision theory.
Decision theories do not compute what the world will be like. Decision theories select the best choice, given a model with this information included. How the world works is not something a decision theory figures out; it is not a physicist, and it has no means to perform experiments outside of its current model. You need to take care of that yourself and build it into your model.
If a decision theory had the weakness that certain possible scenarios could not be modeled, that would be a problem. But any decision theory will have the feature that it works with the model it is given, not with the model it should have been given.
You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independent of nodes not in front of this one. This means that your model does not model the Newcomb's problem we have been discussing - it models another problem, where C can have values independent of P, which is indeed solved by two-boxing.
It is not the decision theory's responsibility to know that the value of node C is somehow supposed to retrospectively alter the state o...
Could you try to maybe give a straight answer to this: what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer. It does not superficially resemble the word-for-word statement of Newcomb's problem.
Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.
You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make. Do you honestly blame that on CDT?
If you apply CDT at T=4 with a model which builds in the knowledge that the choice C and the prediction P are perfectly correlated, it will one-box. The model is exceedingly simple: the only states included are C1P1 (one-box, predicted one-box) and C2P2 (two-box, predicted two-box).
This excludes the two other impossibilities, C1P2 and C2P1, since these violate the correlation constraint. CDT makes a wrong choice when these two are included, because then you have removed the information of the correlation constraint from the model, changing the problem to one in which Omega is not a predictor.
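A minimal sketch of this constrained model, using the standard Newcomb payoffs ($1,000,000 in the big box, $1,000 in the small box) as an assumption:

```python
# Constrained model: only the states consistent with perfect correlation
# between choice C and prediction P are included. C1P2 and C2P1 are
# simply absent. Payoff amounts are the usual Newcomb values, assumed
# here for illustration.

PAYOFFS = {
    ("one-box", "one-box"): 1_000_000,  # C1P1: big box filled, taken alone
    ("two-box", "two-box"): 1_000,      # C2P2: big box empty, small box taken
}

def expected_utility(choice):
    # Under the correlation constraint, choosing C fixes P = C,
    # so each choice maps to exactly one possible world.
    return PAYOFFS[(choice, choice)]

best = max(["one-box", "two-box"], key=expected_utility)
print(best)  # one-box
```

With the correlation constraint built into the model, a straightforward causal expected-utility calculation one-boxes; reinstating C1P2 and C2P1 with their payoffs is exactly what changes the answer to two-boxing.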
What is your problem with this model?
If you take a careful look at the model, you will realize that the agent has to be precommitted, in the sense that what he is going to do is already fixed. Otherwise, the step at T=1 is impossible. I do not mean that he has consciously precommitted himself to win at Newcomb's problem, but, trivially, a deterministic agent must be precommitted.
It is meaningless to apply any sort of decision theory to a deterministic system. You might as well try to apply decision theory to the balls in a game of billiards, which assign high utility to remaining on the table but...
Playing prisoner's dilemma against a copy of yourself is mostly the same problem as Newcomb's. Instead of Omega's prediction being perfectly correlated with your choice, you have an identical agent whose choice will be perfectly correlated with yours - or, possibly, randomly distributed in the same manner. If you can also assume that both copies know this with certainty, then you can do the exact same analysis as for Newcomb's problem.
Whether you have a prediction made by an Omega or a decision made by a copy really does not matter, as long as they both are automatically going to be the same as your own choice, by assumption in the problem statement.
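The reduction can be sketched the same way as the Newcomb model above: only the diagonal outcomes survive the correlation assumption. The payoff numbers below are the usual textbook prisoner's dilemma values, assumed for illustration:

```python
# Prisoner's dilemma against a perfect copy: by assumption, the copy's
# move is always identical to yours, so only the diagonal outcomes are
# reachable. Payoff values are standard textbook numbers, assumed here.

PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,  # unreachable under perfect correlation
    ("defect",    "cooperate"): 5,  # unreachable under perfect correlation
    ("defect",    "defect"):    1,
}

def utility(my_move):
    # The copy's move is constrained to equal mine.
    return PAYOFF[(my_move, my_move)]

best = max(["cooperate", "defect"], key=utility)
print(best)  # cooperate
```

Substituting "Omega's prediction" for "the copy's move" changes nothing in the calculation, which is the sense in which the two problems are the same.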
Excellent.
I think laughably stupid is a bit too harsh. As I understand things, confusion regarding Newcomb's problem leads to new decision theories, which in turn make the smoking lesion problem interesting, because the new decision theories introduce new, critical weaknesses in order to solve Newcomb's problem. I do agree, however, that the smoking lesion problem is trivial if you stick to a sensible CDT model.
We do, by and large, agree. I just thought, and still think, the terminology is somewhat misleading. This is probably not a point I should press, because I have no mandate to dictate how words should be used, and I think we understand each other, but maybe it is worth a shot.
I fully agree that some values in the past and future can be correlated. This is more or less the basis of my analysis of Newcomb's problem, and I think it is also what you mean by imposing constraints on the past light cone. I just prefer to use different words for backwards correlati...
I agree with the content, though I am not sure if I approve of a terminology where causation traverses time like a two-way street.
I tend to agree with mwengler - value is not a property of physical objects or world states, but a property of an observer having unequal preferences for different possible futures.
There is a risk we might be disagreeing because we are working with different interpretations of emotion.
Imagine a work of fiction involving no sentient beings, not even metaphorically - can you possibly write a happy or tragic ending? Is it not only once you introduce some form of intelligence with preferences that destruction becomes bad and serenity good? And are not preferences for this over that the same thing as emotion?
I do not want to make estimates on how and with what accuracy Omega can predict. There is not nearly enough context available for this. Wikipedia's version has no detail whatsoever on the nature of Omega. There seems to be enough discussion to be had, even with the perhaps impossible assumption that Omega can predict perfectly, always, and that this can be known by the subject with absolute certainty.
I do not think the standard usage is well defined, and avoiding these terms altogether is not possible, seeing as they are in the definition of the problem we are discussing.
Interpretations of the words and arguments for the claim are the whole content of the ancestor post. Maybe you should start there instead of quoting snippets out of context and linking unrelated fallacies? Perhaps, by specifically stating the better and more standard interpretations?
Then I guess I will try to leave it to you to come up with a satisfactory example. The challenge is to include Newcomblike predictive power for Omega while substantiating how Omega achieves this, and while still passing your own standard that the subject makes a choice from his own point of view. It is very easy to accidentally create paradoxes in mathematics by assuming mutually exclusive properties for an object, and the best way to discover these is generally to see if it is possible to construct or find an instance of the object described.
...I don't think it
The post scav made more or less represents my opinion here. Compatibilism, choice, free will and determinism are too many vague definitions for me to discuss with. For compatibilism to make any sort of sense to me, I would need a new definition of free will. It is already difficult to discuss how stuff is, without simultaneously having to discuss how to use and interpret words.
Trying to leave the problematic words out of this, my claim is that the only reason CDT ever gives a wrong answer in a Newcomb's problem is that you are feeding it the wrong model. h...
I think the barbering example is excellent - it illustrates that, while controlled experimentation more or less is physics, and while physics is great, it is probably not going to bring a paradigm shift to barbering any time soon. One should not expect all domains to be equally well suited to a cut-and-dried scientific approach.
Where medicine lies on this continuum of suitedness is an open question - it is probably even a misleading question, with medicine being a collection of vastly different problems. However, it is not at all obvious that simply turning up...
If you interpret evidence-based in the widest sense possible, the phrase sort of loses its meaning. Note that the very post you quote explains the intended contrast between systematic and statistical use of evidence versus intuition and traditional experience based human learning.
Besides, would you not say that astrologers figure out how to be optimally vague, avoiding being wrong while exciting their readers, in much the same way musicians figure out what sounds good?
Ironically, this whole exchange might have been a bit more constructive with less taking of offense.
I think I agree, by and large, despite the length of this post.
Whether choice and predictability are mutually exclusive depends on what choice is supposed to mean. The word is not exactly well defined in this context. In some sense, "if variable > threshold then A, else B" is a choice.
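That minimal sense of "choice" can be written out directly; the threshold value is arbitrary and only for illustration:

```python
def choice(variable, threshold=0.5):
    # A fully deterministic "choice": completely predictable given its
    # inputs, yet structured exactly like a decision between A and B.
    return "A" if variable > threshold else "B"

print(choice(0.7))  # A
print(choice(0.2))  # B
```

Whether this deserves the word "choice" is precisely the terminological question at issue.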
I am not sure where you think I am conflating. As far as I can see, perfect prediction is obviously impossible unless the system in question is deterministic. On the other hand, determinism does not guarantee that perfect prediction is practical or feasible. The computationa...
Only in the sense that the term "pro-life" implies that there exist people opposed to life.
Maybe he means something along the lines of: "same cause, same effect" is just a placeholder for "as long as all the things which matter stay the same, you get the same effect". After all, some things, such as time since man invented fire, position relative to Neptune, and so on and so forth, cannot possibly be the same for two different events. And this in turn sort of means that "things which matter -> same effect" is a circular definition. Maybe he means to say that the law of causality is not the actually useful principle for making predictions, while there are indeed repeatable experiments and useful predictions to be made.
(Thanks for discussing!)
I will address your last paragraph first. The only significant difference between my original example and the proper Newcomb's paradox is that, in Newcomb's paradox, Omega is made a predictor by fiat and without explanation. This allows perfect prediction and choice to sneak into the same paragraph without obvious contradiction. It seems, if I try to make the mode of prediction transparent, you protest there is no choice being made.
From Omega's point of view, its Newcomb subjects are not making choices in any substantial sense, they...
I am not sure where our disagreement lies at the moment.
Are you using choice to signify strong free will? Because that means the hypothetical Omega is impossible without backwards causation, leaving us at (b) but not (a) and making the whole of Newcomb's paradox moot. Whereas, if you include in Newcomb's paradox that the choice of two-boxing will actually cause the big box to be empty, and the choice of one-boxing will actually cause the big box to contain a million dollars by a mechanism of backwards causation, then any CDT model will solve the problem.
Perhaps...
Thanks for the link.
I like how he just brute forces the problem with (simple) mathematics, but I am not sure it is a good thing to deal with a paradox without properly investigating why it seems to be a paradox in the first place. It is sort of like saying of a super convincing card trick you have seen that there is actually no real magic involved, without taking the time to address what seems to require magic and how it is done mundanely.
I do not agree that a CDT must conclude that P(A)+P(B) = 1. The argument only holds if you assume the agent's decision is perfectly unpredictable, i.e. that there can be no correlation between the prediction and the decision. This contradicts one of the premises of Newcomb's Paradox, which assumes an entity with exactly the power to predict the agent's choice. Incidentally, this reduces to the (b) but not (a) from above.
By adopting my (a) but not (b) from above, i.e. Omega as a programmer and the agent as predictable code, you can easily see that P(A)+P(B)...
I don't really think Newcomb's problem or any of its variations belongs in here. Newcomb's problem is not a decision theory problem; the real difficulty is translating the underspecified English into a payoff matrix.
The ambiguity comes from the combination of the two claims: (a) Omega being a perfect predictor, and (b) the subject being allowed to choose after Omega has made its prediction. Either these two are inconsistent, or they necessitate further unstated assumptions such as backwards causality.
First, let us assume (a) but not (b), which can be for...
The values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.