PhilosophyStudent

In any event, you didn't answer the question I asked, which was: at what point in time does the two-boxer label the decision "irrational"? Is it still "irrational" in their estimation to two-box, in the case where Omega decides after they do?

Time is irrelevant to the two-boxer except as proof of causal independence, so there's no interesting answer to this question. The two-boxer is concerned with causal independence. If a decision cannot help but causally influence the brain scan, then the two-boxer would one-box.

Notice that in both cases, the decision arises from information already available: the state of the chooser's brain. So even in the original Newcomb's problem, there is a causal connection between the chooser's brain state and the boxes' contents. That's why I and other people are asking what role time plays: if you are using the correct causal model, where your current brain state has causal influence over your future decision, then the only distinction two-boxers can base their "irrational" label on is time, not causality.

Two-boxers use a causal model where your current brain state has causal influence on your future decisions. They are interested in the causal effects of the decision, not the brain state, and hence the causal independence criterion does distinguish the cases in their view; they need not appeal to time.

If a two-boxer argues that their decision cannot cause a past event, they have the causal model wrong. The correct model is one of a past brain state influencing both Omega's decision and your own future decision.

They have the right causal model. They just disagree about which downstream causal effects we should be considering.

For me, the simulation argument made it obvious that one-boxing is the rational choice, because it makes clear that your decision is algorithmic. "Then I'll just decide differently!" is, you see, still a fixed algorithm. There is no such thing as submitting one program to Omega and then running a different one, because you are the same program in both cases -- and it's that program that is causal over both Omega's behavior and the "choice you would make in that situation". Separating the decision from the deciding algorithm is incoherent.

No one denies this. Everyone agrees about what the best program is. They just disagree about what this means for the best decision. The two-boxer says that, unfortunately, the best program leads us to make a non-optimal decision, which is a shame (but worth it, because the benefits outweigh the cost). But, they say, this doesn't change the fact that two-boxing is the optimal decision (while acknowledging that the optimal program one-boxes).
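To make the shared ground concrete, here is a minimal sketch (my own illustration, not anyone's official model; the names run_newcomb and Agent are hypothetical, and the payoffs are the standard Newcomb values) in which Omega's prediction and the later choice both come from the same program, so there is no way to submit a one-boxing program and then run a two-boxing one:

```python
from typing import Callable

Agent = Callable[[], str]  # a decision program: returns "one-box" or "two-box"

def run_newcomb(agent: Agent) -> int:
    """Omega predicts by running the agent's program; the same program is then run for real."""
    prediction = agent()                                   # Omega's simulation of the program
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    decision = agent()                                     # the very same program, run "for real"
    transparent_box = 1_000 if decision == "two-box" else 0
    return opaque_box + transparent_box

one_boxer: Agent = lambda: "one-box"
two_boxer: Agent = lambda: "two-box"

print(run_newcomb(one_boxer))  # 1000000 (everyone agrees this is the better program)
print(run_newcomb(two_boxer))  # 1000
```

Everyone agrees the first program does better; the dispute above is only over how to evaluate the second call to agent(), i.e. the decision made once the prediction is already fixed.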

How does your hypothetical two-boxer respond to simulation or copy arguments? If you have no way of knowing whether you're the simulated version of you, or the real version of you, which decision is rational then?

I suspect that different two-boxers would respond differently, as anthropic-style puzzles tend to elicit disagreement.

To put it another way, a two-boxer is arguing that they ought to two-box while simultaneously not being the sort of person who would two-box -- an obvious contradiction. The two-boxer is either arguing for this contradiction, or arguing about the definitions of words by saying "yes, but that's not what 'rational' means".

Well, they're saying that the optimal algorithm is a one-boxing algorithm while the optimal decision is two-boxing. They can explain why as well (algorithms have different causal effects from decisions). There is no immediate contradiction here (it would take a serious argument to show one, such as an argument that decisions and algorithms are the same thing). For example, imagine a game where I choose a colour and then later choose a number between 1 and 4. With regard to the number, if you pick n, you get $n. With regard to the colour, if you pick red, you get $0; if you pick blue, you get $5 but then don't get a choice about the number (you are presumed to have picked 1). It is not contradictory to say that the optimal number to pick is 4 but the optimal colour to pick is blue, even though picking blue means you never get to pick the optimal number. The two-boxer is saying something pretty similar here.
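A quick payoff sketch of that game (my own illustration; the total helper is hypothetical) may make the structure clearer:

```python
def total(colour: str, number: int) -> int:
    """Total payout of the colour-then-number game described above."""
    if colour == "blue":
        return 5 + 1       # blue pays $5 but forces the number choice to 1
    return 0 + number      # red pays $0 and leaves the number choice (1-4) free

# Considered as a number choice (when you actually get to make one), 4 is optimal:
best_number = max(range(1, 5), key=lambda n: total("red", n))    # -> 4

# Considered as a colour choice, blue is optimal ($6 > $4), even though it
# forecloses ever making the optimal number choice:
best_colour = max(["red", "blue"], key=lambda c: total(c, 4))    # -> "blue"

print(best_number, best_colour)
```

That is the structure the two-boxer claims Newcomb's problem has: "number" plays the role of the decision, "colour" the role of the algorithm.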

What "ought" you do, according to the two-boxer. Well that depends what decision you're facing. If you're facing a decision about what algorithm to adopt, then adopt the optimal algorithm (which one-boxers on all future versions of NP though not ones where the prediction has occurred). If you are not able to choose between algorithms but are just choosing a decision for this occasion then choose two-boxing. They do not give contradictory advice.

The two-boxer never assumes that the decision isn't predictable. They just say that the prediction can no longer be influenced and so you may as well gain the $1000 from the transparent box.

In terms of your hypothetical scenario, the question for the two-boxer will be whether the decision causally influences the result of this brain scan. If yes, then the two-boxer will one-box (odd as that sentence sounds). If no, the two-boxer will two-box.

Two-boxing definitely entails that you are a two-boxing agent type. That's not the same claim as the claim that the decision and the agent type are the same thing. See also my comment here. I would be interested to know your answer to my questions there (particularly the second one).

Generally agree. I think there are good arguments for focusing on decision types rather than decisions. A few comments:

Point 1: That's why the rationality of decisions is evaluated in terms of expected outcome, not actual outcome. So actually, it wasn't just your agent type that was flawed here but also your decisions. But yes, I agree with the general point that agent type is important.

Point 2: Agreed.

Point 3: Yes. I agree that there could be ways other than causation to attribute utility to decisions, and that these ways might be superior. However, I also think that the causal approach is one natural way to do this, and so I think claims that the proponent of two-boxing doesn't care about winning are false. I also think it's false to say they have a twisted definition of winning. Their view may be wrong, but I think it takes work to show that (I don't think they are just obviously coming up with absurd definitions of winning).

By decision, the two-boxer means something like a proposition that the agent can make true or false at will (decisions don't need to be analysed in terms of propositions but it makes the point fairly clearly). In other words, a decision is a thing that an agent can bring about with certainty.

By agent type, in the case of Newcomb's problem, the two-boxer is just going to mean "the thing that Omega based their prediction on". Let's say the agent's brain state at the time of prediction.

Why think these are the same thing?

If these are the same thing, CDT will one-box. Given that, is there any reason to think that the LW view is best presented as requiring a new decision theory rather than as requiring a new theory of what constitutes a decision?

The two-boxer is trying to maximise money (utility). They are interested in the additional question of which bits of that money (utility) can be attributed to which things (decisions/agent types). "Caused gain" is a view about how we should attribute the gaining of money (utility) to different things.

So they agree that the problem is about maximising money (utility) and not "caused gain". But they are interested not just in which agents end up with the most money (utility) but also in which aspects of those agents are responsible for their receiving the money. Specifically, they are interested in whether the decisions the agent makes are responsible for the money they receive. This does not mean they are trying to maximise something other than money (utility). It means they are interested in maximising money and then also in how you can maximise money via different mechanisms.

I'm not convinced this is actually the appropriate way to interpret most two-boxers. I've read papers that say things that sound like this claim, but I think the distinction that is generally being gestured at is the distinction I'm making here (with different terminology). I even think we get hints of that in the last sentence of your post, where you start to talk about agents being rewarded for their decision theory rather than their decision.

One-boxers end up with 1 000 000 utility.
Two-boxers end up with 1 000 utility.

So everyone agrees that one-boxers are the winning agents (1 000 000 > 1 000)

The question is, how much of this utility can be attributed to the agent's decision rather than type. The two-boxer says that to answer this question we ask about what utility the agent's decision caused them to gain. So they say that we can attribute the following utility to the decisions:

One-boxing: 0
Two-boxing: 1 000

And the following utility to the agent's type (there will be some double counting because of overlapping causal effects):

One-boxing type: 1 000 000
Two-boxing type: 1 000

So the proponent of two-boxing says that the winning decision is two-boxing and the winning agent type is a one-boxing type.
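Here is a minimal sketch of the two attributions just listed (my own illustration; payoff and the variable names are hypothetical, and the numbers are the standard Newcomb values):

```python
BIG = 1_000_000   # opaque box contents if Omega predicted one-boxing
SMALL = 1_000     # transparent box contents

def payoff(predicted_one_boxing: bool, one_boxes: bool) -> int:
    """Money received, given Omega's prediction and the actual decision."""
    opaque = BIG if predicted_one_boxing else 0
    return opaque + (0 if one_boxes else SMALL)

# Attribution to agent type: the prediction tracks the type, so compare whole agents.
one_boxing_type = payoff(predicted_one_boxing=True, one_boxes=True)    # 1 000 000
two_boxing_type = payoff(predicted_one_boxing=False, one_boxes=False)  # 1 000

# Attribution to the decision ("caused gain"): hold the prediction fixed and ask
# what the decision itself adds. Two-boxing adds SMALL; one-boxing adds nothing.
for predicted in (True, False):
    assert payoff(predicted, one_boxes=False) - payoff(predicted, one_boxes=True) == SMALL

print(one_boxing_type, two_boxing_type)
```

On the first accounting the one-boxing type wins; on the second the two-boxing decision does, which is exactly the split described above.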

I'm not interpreting it so that it's good (for a start, I'm not necessarily a proponent of this view; I'm just outlining it). All I'm discussing is the two-boxer's response to the accusation that they don't win. They say they are interested not in winning agents but in winning decisions, and that two-boxing is the winning decision (because 1 000 > 0).

I was using "winning" to refer to something that comes in degrees.

The basic idea is that each agent ends up with a certain amount of utility (or money), and the question is which bits of this utility you can attribute to the decision. So let's say you wanted to determine how much of this utility you can attribute to the agent having blue hair. How would you do so? One possibility (the one used by the two-boxer) is to ask what causal effect the agent's blue hair had on the amount of utility received. This doesn't seem an utterly unreasonable way of determining how the utility received should be attributed to the agent's hair type.

But the very point is that you can't submit one piece of code and run another. You have to run what you submitted.

Yes. So the two-boxer says that you should precommit to later making an irrational decision. This does not require them to say that the decision you are precommitting to is later rational. So the two-boxer would submit the one-boxing code despite the fact that one unfortunate effect of this would be that they would later (by their own lights, irrationally) run that code, because there are other effects which counteract this.

I'm not saying your argument is wrong (nor am I saying it's right). I'm just saying that the analogy is too close to the original situation to pump intuitions. If people don't already have the one-boxing intuition in Newcomb's problem then the submitting code analogy doesn't seem to me to make things any clearer.
