incogn · 11y

The values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.

incogn · 11y

This only shows that the model is no good, because the model does not respect the assumptions of the decision theory.

incogn · 11y

Decision theories do not compute what the world will be like. Decision theories select the best choice, given a model with this information included. How the world works is not something a decision theory figures out; it is not a physicist, and it has no means of performing experiments outside of its current model. You need to take care of that yourself and build it into your model.

If a decision theory had the weakness that certain possible scenarios could not be modeled, that would be a problem. Any decision theory will have the feature that it works with the model it is given, not with the model it should have been given.

incogn · 11y

You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independently of any node that does not lie causally downstream of it. This means that your model does not model the Newcomb's problem we have been discussing - it models another problem, in which C can take values independent of P, and that problem is indeed solved by two-boxing.

It is not the decision theory's responsibility to know that the value of node C is somehow supposed to retroactively alter the state of the branch the decision theory is working in. This is, however, a consequence of the modelling you do. You are deliberately applying CDT too late in your network, so that P, and thus the cost of being a two-boxer, has already gone over the horizon, and so that the node C must affect P backwards - not because the problem actually contains backwards causality, but because you want to fix the values of the nodes in the wrong order.

If you do not want to make the assumption of free choice at C, then you can simply not promote it to an action node. If the decision at C is causally determined by A, then you can apply a decision theory at node A and follow the causal inference forward. Then you will, once again, get a correct answer from CDT, this time for the version of Newcomb's problem where A and C are fully correlated.
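To make this concrete, here is a minimal sketch of what applying CDT at node A looks like, assuming a fully deterministic agent and the simplified payoffs used later in this thread (1000 in the opaque box, 1 in the transparent one); the Python function names are mine and purely illustrative.

```python
# Minimal sketch: CDT applied at the algorithm node A, which causally determines
# both Omega's prediction P and the agent's later choice C.

def prediction(a):   # P is caused by A: Omega reads the agent's algorithm
    return a         # 'one-box' or 'two-box'

def choice(a):       # C is caused by A: the agent simply runs its algorithm
    return a

def utility(p, c):
    opaque = 1000 if p == 'one-box' else 0   # contents fixed by the prediction
    transparent = 1
    return opaque if c == 'one-box' else opaque + transparent

def cdt_at_node_A():
    # Intervene on A, the only genuine action node, and follow causation forward.
    return max(['one-box', 'two-box'],
               key=lambda a: utility(prediction(a), choice(a)))

print(cdt_at_node_A())   # -> 'one-box': the cost of being a two-boxer is paid at P
```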

If you refuse to reevaluate your model, then we might as well leave it at this. I do agree that if you insist on applying CDT at C in your model, then it will two-box. I do not agree that this is a problem.

incogn · 11y

Could you try to give a straight answer: what is your problem with my model above? It accurately models the situation, and it allows CDT to give a correct answer. Granted, it does not superficially resemble the word-for-word statement of Newcomb's problem.

Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.

You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make. Do you honestly blame that on CDT?

incogn · 11y

If you apply CDT at T=4 with a model which builds in the knowledge that the choice C and the prediction P are perfectly correlated, it will one-box. The model is exceedingly simple:

  • T'=0: Choose either C1 or C2
  • T'=1: If C1, then gain 1000. If C2, then gain 1.

This excludes the two impossible combinations, C1P2 and C2P1, since they violate the correlation constraint. CDT makes the wrong choice when these two are included, because including them removes the information of the correlation constraint from the model and changes the problem into one in which Omega is not a predictor.
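For what it is worth, the model above can be written out in a few lines of Python (a sketch only; the tuple encoding is my own). The correlation constraint is respected simply by never listing C1P2 or C2P1 as possible outcomes.

```python
# Constrained model: only the consistent outcomes C1P1 and C2P2 exist.
constrained = {('C1', 'P1'): 1000,   # one-box, correctly predicted
               ('C2', 'P2'): 1}      # two-box, correctly predicted

print(max(constrained, key=constrained.get))   # -> ('C1', 'P1'): the model one-boxes

# For contrast, putting C1P2 and C2P1 back in gives the problem in which Omega
# is not a predictor: for any fixed prediction, C2 pays 1 more than C1.
unconstrained = {('C1', 'P1'): 1000, ('C1', 'P2'): 0,
                 ('C2', 'P1'): 1001, ('C2', 'P2'): 1}
```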

What is your problem with this model?

incogn · 11y

If you take a careful look at the model, you will realize that the agent has to be precommitted, in the sense that what he is going to do is already fixed. Otherwise, the step at T=1 is impossible. I do not mean that he has consciously precommitted himself to win at Newcomb's problem, but, trivially, a deterministic agent must be precommitted.

It is meaningless to apply any sort of decision theory to a deterministic system. You might as well try to apply decision theory to the balls in a game of billiards, which assign high utility to remaining on the table but have no free choices to make. For decision theory to have a function, there needs to be a choice to be made between multiple legal options.

As far as I have understood, your problem is that, if you apply CDT with an action node at T=4, it gives the wrong answer. At T=4, there is only one option to choose, so the choice of decision theory is not exactly critical. If you want to analyse Newcomb's problem, you have to insert an action node at T<1, while there is still a choice to be made, and CDT will do this admirably.

incogn · 11y

Playing prisoner's dilemma against a copy of yourself is mostly the same problem as Newcomb's. Instead of Omega's prediction being perfectly correlated with your choice, you have an identical agent whose choice will be perfectly correlated with yours - or, possibly, randomly distributed in the same manner. If you can also assume that both copies know this with certainty, then you can do the exact same analysis as for Newcomb's problem.

Whether you have a prediction made by an Omega or a decision made by a copy really does not matter, as long as they both are automatically going to be the same as your own choice, by assumption in the problem statement.
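A hedged sketch of that analysis, using the usual illustrative prisoner's dilemma payoffs (the specific numbers 5, 3, 1 and 0 are my assumption, not part of the discussion above): once the copy's move is constrained to equal yours, only the diagonal outcomes remain, exactly as only C1P1 and C2P2 remained in the Newcomb model.

```python
# My payoff as a function of (my move, the copy's move); C = cooperate, D = defect.
payoff = {('C', 'C'): 3, ('C', 'D'): 0,
          ('D', 'C'): 5, ('D', 'D'): 1}

# By assumption, the copy's choice is perfectly correlated with mine, so only
# the diagonal outcomes (C, C) and (D, D) are possible.
best = max(['C', 'D'], key=lambda m: payoff[(m, m)])
print(best)   # -> 'C': mutual cooperation, the analogue of one-boxing
```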

incogn · 11y

Excellent.

I think 'laughably stupid' is a bit too harsh. As I understand things, confusion regarding Newcomb's problem leads to new decision theories, which in turn make the smoking lesion problem interesting, because the new decision theories introduce new, critical weaknesses in order to solve Newcomb's problem. I do agree, however, that the smoking lesion problem is trivial if you stick to a sensible CDT model.

incogn · 11y

We do, by and large, agree. I just thought, and still think, the terminology is somewhat misleading. This is probably not a point I should press, because I have no mandate to dictate how words should be used, and I think we understand each other, but maybe it is worth a shot.

I fully agree that some values in the past and future can be correlated. This is more or less the basis of my analysis of Newcomb's problem, and I think it is also what you mean by imposing constraints on the past light cone. I just prefer to use different words for backwards correlation and forwards causation.

I would say that the robot getting the extra pack necessitates that it had already been charged and did not need the extra pack, while not having been charged earlier would cause it to fail to recharge itself. I think there is a significant difference between how not being charged causes the robot to run out of power, versus how running out of power necessitates that it has not been charged.

You may of course argue that the future and the past are the same from the viewpoint of physics, and that either can be said to cause the other. However, as long as people consider the future and the past to be conceptually completely different, I do not see the hurry to erode these differences in the language we use. It probably would not be a good idea to make 'tomorrow' refer to both the day before and the day after today, either.

I guess I will repeat: This is probably not a point I should press, because I have no mandate to dictate how words should be used.
