I think there are some rather significant assumptions underlying the idea that they are "non-relevant". At the very least, if the agents were distinguishable, I think you should indeed be willing to pay to make n higher. On the other hand, if they're indistinguishable then it's a more difficult question, but the anthropic averaging I suggested in my previous comments leads to absurd results.

What's your proposal here?

I don't think that's entirely correct; SSA, for example, is a halfer position and it does exclude worlds where you don't exist, as do many other anthropic approaches.

Personally I'm generally skeptical of averaging over agents in any utility function.

You definitely don't have a 50% chance of dying in the sense of "experiencing dying". In the sense of "ceasing to exist" I guess you could argue for it, but I think that it's much more reasonable to say that both past selves continue to exist as a single future self.

Regardless, this stuff may be confusing, but it's entirely conceivable that with the correct theory of personal identity we would have a single correct answer to each of these questions.

OK, the "you cause 1/10 of the policy to happen" argument is intuitively reasonable, but under that kind of argument divided responsibility has nothing to do with how many agents are subjectively indistinguishable and instead has to do with the agents who actually participate in the linked decision.

On those grounds, "divided responsibility" would give the right answer in Psy-Kosh's non-anthropic problem. However, this also means your argument that SIA+divided = SSA+total clearly fails, because of the example I just gave before, and because SSA+total gives the wrong answer in Psy-Kosh's non-anthropic problem but SIA+divided does not.

> Ah, subjective anticipation... That's an interesting question. I often wonder whether it's meaningful.

As do I. But, as Manfred has said, I don't think that being confused about it is sufficient reason to believe it's meaningless.

As I mentioned earlier, it's not an argument against halfers in general; it's against halfers with a specific kind of utility function, which sounds like this: "In any possible world I value only my own current and future subjective happiness, averaged over all of the subjectively indistinguishable people who could equally be "me" right now."

In the above scenario, there is a 1/2 chance that both Jack and Roger will be created, a 1/4 chance of only Jack, and a 1/4 chance of only Roger.

Before finding out who you are, averaging would lead to a 1:1 odds ratio, and so (as you've agreed) this would lead to a cutoff of 1/2.

After finding out whether you are, in fact, Jack or Roger, you have only one possible self in the TAILS world, and one possible self in the relevant HEADS+Jack/HEADS+Roger world, which leads to a 2:1 odds ratio and a cutoff of 2/3.
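
To make the arithmetic explicit, here's a minimal sketch (assuming, as the cutoffs above suggest, a $1 ticket that pays out when both Jack and Roger are created) of the expected utility of buying at price x under the "average over subjectively indistinguishable selves" utility function, before and after learning your name:

```python
from fractions import Fraction

# World probabilities: TAILS creates both Jack and Roger,
# HEADS creates exactly one of them (chosen by a second fair coin).
P_BOTH = Fraction(1, 2)
P_ONLY_JACK = Fraction(1, 4)
P_ONLY_ROGER = Fraction(1, 4)

PAYOUT = 1  # ticket pays $1 if both were created (assumed payoff)

def eu_before(x):
    """EU of buying at price x, averaging over the selves who could be 'me'.
    In the both-created world each copy nets (1 - x), so the average is (1 - x);
    in either single-created world the buyer nets -x."""
    return P_BOTH * (PAYOUT - x) + (P_ONLY_JACK + P_ONLY_ROGER) * (-x)

def eu_after_jack(x):
    """EU of buying at price x once I know I'm Jack: only one 'me' per world,
    and the world where only Roger exists contributes nothing to my utility."""
    return P_BOTH * (PAYOUT - x) + P_ONLY_JACK * (-x)

# eu_before(x) = 1/2 - x          => break-even price 1/2
# eu_after_jack(x) = 1/2 - (3/4)x => break-even price 2/3
print(eu_before(Fraction(1, 2)))      # 0 -> indifferent at 1/2 before the update
print(eu_after_jack(Fraction(2, 3)))  # 0 -> indifferent at 2/3 after the update
```

The break-even prices are just the 1:1 and 2:1 odds ratios expressed as cutoffs.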

Ultimately, I guess the essence here is that this kind of utility function is equivalent to a failure to properly conditionalise, and thus even though you're not using probabilities you're still "Dutch-bookable" with respect to your own utility function.

I guess it could be argued that this result is somewhat trivial, but the utility function mentioned above is at least intuitively reasonable, so I don't think it's meaningless to show that having that kind of utility function is going to put you in trouble.

> Linked decisions is also what makes the halfer paradox go away.

I don't think linked decisions make the halfer paradox I brought up go away. Any counterintuitive decisions you make under UDT are simply ones that lead to you making a gain in counterfactual possible worlds at the cost of a loss in actual possible worlds. However, in the instance above you're losing both in the real scenario in which you're Jack, and in the counterfactual one in which you turned out to be Roger.

Granted, the "halfer" paradox I raised is an argument against having a specific kind of indexical utility function (selfish utility w/ averaging over subjectively indistinguishable agents) rather than an argument against being a halfer in general. SSA, for example, would tell you to stick to your guns because you would still assign probability 1/2 even after you know whether you're "Jack" or "Roger", and thus doesn't suffer from the same paradox. That said, due to the reference class problem, If you are told whether you're Jack or Roger before being told everything else SSA would give the wrong answer, so it's not like it's any better...

> To get a paradox that hits at the "thirder" position specifically, in the same way as yours did, I think you need only replace the ticket with something mutually beneficial - like putting on an enjoyable movie that both can watch. Then the thirder would double count the benefit of this, before finding out who they were.

Are you sure? It doesn't seem to me that this would be paradoxical; since the decisions are linked you could argue that "If I hadn't put on an enjoyable movie for Jack/Roger, Jack/Roger wouldn't have put on an enjoyable movie for me, and thus I would be worse off". If, on the other hand, only one agent gets to make that decision, then the agent-parts would have ceased to be subjectively indistinguishable as soon as one of them was offered the decision.

> But SIA also has some issues with order of information, though it's connected with decisions.

Can you illustrate how the order of information matters there? As far as I can tell it doesn't, and hence it's just an issue with failing to consider counterfactual utility, which SIA ignores by default. It's definitely a relevant criticism of using anthropic probabilities in your decisions, because failing to consider counterfactual utility results in dynamic inconsistency, but I don't think it's as strong as the associated criticism of SSA.

> Anyway, if your reference class consists of people who have seen "this is not room X", then "divided responsibility" is no longer 1/3, and you probably have to go full UDT.

If divided responsibility is not 1/3, what do those words even mean? How can you claim that only two agents are responsible for the decision when it's quite clear that the decision is a linked decision shared by three agents?

If you're taking "divided responsibility" to mean "divide by the number of agents used as an input to the SIA-probability of the relevant world", then your argument that SSA+total = SIA+divided boils down to this: "If, in making decisions, you (an SIA agent) arbitrarily choose to divide your utility for a world by the number of subjectively indistinguishable agents in that world in the given state of information, then you end up with the same decisions as an SSA agent!"

That argument is, of course, trivially true because the number of agents you're dividing by will be the ratio between the SIA odds and the SSA odds of that world. If you allow me to choose arbitrary constants to scale the utility of each possible world, then of course your decisions will not be fully specified by the probabilities, no matter what decision theory you happen to use. Besides, you haven't even given me any reason why it makes any sense at all to measure my decisions in terms of "responsibility" rather than simply using my utility function in the first place.
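
To spell out why it's trivially true (a sketch, assuming the reference class is all observers and that every observer in a given world shares your epistemic situation): if world $w$ has prior probability $P(w)$ and contains $N_w$ such observers, then

$$P_{\mathrm{SIA}}(w) \propto P(w)\,N_w, \qquad P_{\mathrm{SSA}}(w) \propto P(w),$$

so for any two worlds $w_1, w_2$,

$$\frac{P_{\mathrm{SIA}}(w_1)}{P_{\mathrm{SIA}}(w_2)} = \frac{N_{w_1}}{N_{w_2}} \cdot \frac{P_{\mathrm{SSA}}(w_1)}{P_{\mathrm{SSA}}(w_2)}.$$

Dividing each world's utility contribution by $N_w$ therefore turns SIA odds into SSA odds, which is exactly why the two combinations can't come apart under that convention.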

On the other hand, if, for example, you could justify why it would make sense to include a notion of "divided responsibility" in my decision theory, then that argument would tell me that SSA+total responsibility must clearly be conceptually the wrong way to do things because it uses total responsibility instead.

All in all, I do think anthropic probabilities are suspect for use in a decision theory because:

  1. They result in reflective inconsistency by failing to consider counterfactuals.
  2. It doesn't make sense to use them for decisions when the probabilities could depend upon the decisions (as in the Absent-Minded Driver; see the sketch below).
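
To illustrate point 2, here's a minimal sketch using the standard Absent-Minded Driver payoffs from the literature (exit at the first intersection: 0; exit at the second: 4; never exit: 1), which aren't spelled out above; the point is just that the probability of "being at the first intersection" is itself a function of the policy you choose:

```python
# Standard Absent-Minded Driver payoffs (assumed here): exit at first
# intersection = 0, exit at second = 4, continue past both = 1.
def expected_payoff(p):
    """Expected payoff of the policy 'continue with probability p'."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

def prob_at_first_intersection(p):
    """Probability of being at the first intersection, given that you're at an
    intersection: you always reach the first, and reach the second with
    probability p, so the proportion at the first is 1/(1+p)."""
    return 1 / (1 + p)

best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p)                              # ~0.667: the planning-optimal policy
print(prob_at_first_intersection(0.0))     # 1.0
print(prob_at_first_intersection(best_p))  # ~0.6: the probability moved because the decision did
```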

That said, even if you can't use those probabilities in your decision theory, there is still a remaining question of "to what degree should I anticipate X, given my state of information". I don't think your argument on "divided responsibility" holds up, but even if it did, the question of subjective anticipation would remain unanswered.

That's not true. The SSA agents are only told about the conditions of the experiment after they're created and have already opened their eyes.

Consequently, isn't it equally valid for me to begin the SSA probability calculation with those two agents already excluded from my reference class?

Doesn't this mean that SSA probabilities are not uniquely defined given the same information, because they depend upon the order in which that information is incorporated?

I think that argument is highly suspect, primarily because I see no reason why a notion of "responsibility" should have any bearing on your decision theory. Decision theory is about achieving your goals, not avoiding blame for failing.

However, even if we assume that we do include some notion of responsibility, I think that your argument is still incorrect. Consider this version of the incubator Sleeping Beauty problem, where two coins are flipped.
HH => Sleeping Beauties created in Rooms 1, 2, and 3
HT => Sleeping Beauty created in Room 1
TH => Sleeping Beauty created in Room 2
TT => Sleeping Beauty created in Room 3
Moreover, in each room there is a sign. In Room 1 it is equally likely to say either "This is not Room 2" or "This is not Room 3", and so on for each of the three rooms.

Now, each Sleeping Beauty is offered a choice between two coupons; each coupon gives the specified amount to their preferred charity (by assumption, utility is proportional to $ given to charity), but only if all of them choose the same coupon. The payoff looks like this:
A => $12 if HH, $0 otherwise.
B => $6 if HH, $2.40 otherwise.

I'm sure you see where this is going, but I'll do the math anyway.

With SIA+divided responsibility, we have
p(HH) = p(not HH) = 1/2
The responsibility is divided among 3 people in HH-world, and among 1 person otherwise, therefore
EU(A) = (1/2)(1/3)$12 = $2.00
EU(B) = (1/2)(1/3)$6 + (1/2)$2.40 = $2.20

With SSA+total responsibility, we have
p(HH) = 1/3
p(not HH) = 2/3
EU(A) = (1/3)$12 = $4.00
EU(B) = (1/3)$6 + (2/3)$2.40 = $3.60

So SIA+divided responsibility suggests choosing B, but SSA+total responsibility suggests choosing A.
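
For what it's worth, here's a quick script checking the arithmetic above; the probabilities and the divided-responsibility factor are simply the ones stated, so nothing new is assumed:

```python
from fractions import Fraction

# Coupon payoffs: (payout if HH, payout otherwise)
coupons = {"A": (12, 0), "B": (6, Fraction(24, 10))}

def eu_sia_divided(hh_pay, not_hh_pay):
    """SIA probabilities p(HH) = p(not HH) = 1/2, with responsibility divided
    among 3 agents in the HH world and 1 agent otherwise."""
    return Fraction(1, 2) * Fraction(1, 3) * hh_pay + Fraction(1, 2) * 1 * not_hh_pay

def eu_ssa_total(hh_pay, not_hh_pay):
    """SSA probabilities p(HH) = 1/3, p(not HH) = 2/3, with total responsibility."""
    return Fraction(1, 3) * hh_pay + Fraction(2, 3) * not_hh_pay

for name, (hh, other) in coupons.items():
    print(name, float(eu_sia_divided(hh, other)), float(eu_ssa_total(hh, other)))
# A 2.0 4.0
# B 2.2 3.6  -> SIA+divided prefers B, SSA+total prefers A
```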

There's no "should" - this is a value set.

The "should" comes in giving an argument for why a human rather than just a hypothetically constructed agent might actually reason in that way. The "closest continuer" approach makes at least some intuitive sense, though, so I guess that's a fair justification.

> The halfer is only being strange because they seem to be using naive CDT. You could construct a similar paradox for a thirder if you assume the ticket pays out only for the other copy, not themselves.

I think there's more to it than that. Yes, UDT-like reasoning gives a general answer, but under UDT the halfer is still definitely acting strange in a way that the thirder would not be.

If the ticket pays out for the other copy, then UDT-like reasoning would lead you to buy the ticket regardless of whether you know which one you are or not, simply on the basis of having a linked decision. By contrast, here's Jack's reasoning in the original (halfer) scenario:

"Now that I know I'm Jack, I'm still only going to pay at most $0.50, because that's what I precommited to do when I didn't know who I was. However, I can't help but think that I was somehow stupid when I made that precommitment, because now it really seems I ought to be willing to pay 2/3. Under UDT sometimes this kind of thing makes sense, because sometimes I have to give up utility so that my counterfactual self can make greater gains, but it seems to me that that isn't the case here. In a counterfactual scenario where I turned out to be Roger and not Jack, I would still desire the same linked decision (x=2/3). Why, then, am I stuck refusing tickets at 55 cents?"

It appears to me that something has clearly gone wrong with the self-averaging approach here, and I think it is indicative of a deeper problem with SSA-like reasoning. I'm not saying you can't reasonably come to the halfer conclusion for different reasons (e.g. the "closest continuer" argument), but some or many of the possible reasons can still be wrong. That being said, I think I tend to disagree with pretty much all of the reasons one could be a halfer, including average utilitarianism, the "closest continuer", and selfish averaging.
