All of dadadarren's Comments + Replies

I think this discussion is focusing on how others would behave towards me, and deriving what ought to be regarded as my future self from there. That is certainly a valid discussion to be had. However, my post is talking about a different (though related) topic. 

For example, if I for whatever crazy reason think that the me from tomorrow—the one with (largely) the same physical body and no trick on memory whatsoever—is not my future self, then I would do a bunch of irresponsible things that would lead to others' dislike or hostility toward me that coul... (read more)

2Seth Herd
I agree with all of that. It's probably relevant to think about why we tend to value our future selves in the first place. I think it's that each of us has memories (and the resulting habits) of thinking "wow, past self really screwed me over. I hate that. I think I'll not screw future self over so that doesn't happen again". We care because there's a future self that will hate us if we do, and we can imagine it very vividly. In addition, there's an unspoken cultural assumption that it's logical to care about our future selves.

I included some of how other people regard our identity, but that's not my point. My point is that, for almost any reason whatsoever you could come up with to value your physically continuous future self, you'd also value a physically discontinuous future self that maintains the same mind-pattern. That's except for deciding "I no longer care about anything that teleports", which is possible and consistent, but no more sensible than stopping caring about anything wearing blue hats. So sure, people aren't necessarily logically wrong if they value their physically continuous future self over a perfect clone (or upload). But they probably are making a logic error, if they have even modestly consistent values.

If one regards physics as a detached description of the world—like a non-interacting yet apt depiction of the objective reality (assuming that exists and is attainable)—then yes, there is no distinct "me". And any explanation of subjective experience ought to be given by physical processes, such that everyone's "ME-ness" must ultimately be reduced to the physical body. 

However, my entire position stems from a different logical starting point. It starts with "me". It is an undeniable and fundamental fact that I am this particular thing... (read more)

2Dagon
Oh, I wonder if our crux is this. I don't dispute that you have an undeniable and fundamental fact that you experience things, which you summarize as Dadadarren. I do question what you mean by "this particular thing". I don't know that our model of physics is true or complete (in fact, I suspect it's not), but I don't see any reason to believe that conscious experiences (me-ness) are somehow separate from underlying physical processes.

Please do not take this as an insult. Though I do not intend to continue this discussion further, I feel obliged to say that I strongly disagree that we have the same position in substance and only disagree in semantics. Our positions are different on a fundamental level. 

The description of "I" you just had is what I earlier referred to as the physical person, which is one of the two possible meanings. For the Doomsday argument, it also used the second meaning: the nonphysical reference to the first-person perspective. I.E. the uniform prior distribution DA proposed, which is integral to the controversial Bayesian update, is not suggesting that a particular physical person can be born earlier than all human beings or later than all of them due to variations in its gestation period. In its convoluted way thanks to the equivo... (read more)

1Ape in the coat
As always we completely agree in substance, while using different semantics. Yes, that's why I'm saying that it requires the existence of some non-physical entity, which I call souls.

You seem to imply that "first person perspective" itself is non-physical, but this sounds weird to me. Clearly physicalism is not debunked by the fact that people have first person perspectives. There seem to be very physical rules due to which the mind in Dadadarren's body has Dadadarren's first person perspective and not Ape in the coat's. The only way "I" here can fail to be equivalent to a particular physical person is if we assume that there is something non-physical about personhood. People do indeed implicitly assume it all the time. But this assumption is completely ungrounded, and that's what I'm pointing out.

Yes, absolutely. "I" is just a variable, referencing different things depending on who says it. When Dadadarren says "I" it means "Dadadarren". When Ape in the coat says "I" it means "Ape in the coat". The Doomsday argument can be expressed in terms of birth ranks. So inquiring into the mechanism by which physical people accrue birth ranks seems to be the only right thing to do.

The point of defining "me" vigorously is not about how much upstream or physically specific we ought to be, but rather when conducting discussions in the anthropic field, we ought to recognize words such as "me" or "now" are used equivocally in two different senses: 1, the specific physical person, i.e. the particular human being born to the specific parents etc. and 2, just a reference to the first person of any given perspective. Without distinguishing which meaning in particular is used in an argument, there is room for confounding the discussion, I fee... (read more)

1Ape in the coat
I don't see how it's happening here, but sure, let's try to be rigorous in this sense. Let me construct the argument, while tabooing the word "I" altogether:

Dadadarren is a result of a particular sexual encounter between dadadarren's parents. Dadadarren's mother is a result of a particular sexual encounter between dadadarren's mother's parents. Dadadarren's father is a result of a particular sexual encounter between dadadarren's father's parents. And so on. Therefore, dadadarren is not a random person from all the people throughout human history. Therefore the doomsday inference is incorrect for dadadarren.

Ape in the coat is a result of a particular sexual encounter between Ape in the coat's parents. Ape in the coat's mother is a result of a particular sexual encounter between Ape in the coat's mother's parents. Ape in the coat's father is a result of a particular sexual encounter between Ape in the coat's father's parents. And so on. Therefore, Ape in the coat is not a random person from all the people throughout human history. Therefore the doomsday inference is incorrect for Ape in the coat.

And so on. It's possible to construct this kind of argument for every person who has ever lived. Therefore, the doomsday inference is incorrect in general. Using "I" simply compresses billions of such individual statements into a shorter and more comprehensive form.

Yes, that's exactly what it is. Unless there are souls - a non-physical component to personal identity which can add additional uncertainty about the causal process - "I" is just a pointer that refers to a particular human being, and therefore the statement "I could have been a different human being" is as absurd as claiming that A != A.

While I agree with the notion that we cannot regard ourselves as random samples from all human beings past, present and future, I find the discussion wanting in rigorously defining the reference of "us", or "me", or by extension "my parents". Without doing that there's always the logical wiggle room for arriving at an ad hoc conclusion that does not give paradoxical results, e.g. while discussing SBP, you suggested that "today" could mean any day, then attempted to derive the probability of "today is Monday" from there. That just doesn't sit comfortably wi... (read more)

1Ape in the coat
Extra rigor is always nice but I don't see how it's necessary here. "I" am a downstream result of my parents having sex at a particular time. The same way, "my father" is a downstream result of his parents having sex at a particular time. The same way, "my mother" is a downstream result of her parents having sex at a particular time, and so on and so forth. This level of rigor is already enough to see that DA is nonsense.

I was showing that "Today" is a poorly specified term in the setting of Sleeping Beauty, and if we try to define it as "Monday xor Tuesday" - the way we usually define it in such situations - we clearly observe that this event doesn't happen in 50% of iterations of the experiment, and so the popular claim that the Beauty always observes that she is "Awake Today" is wrong. As long as we stop thinking about "Todays" and instead make a model of the probability experiment as a whole - as we are supposed to - everything adds up to normality.

Either DA implies that my physical body could happen to exist in the distant past or distant future, or it implies that there is something non-physical about my identity. Therefore, I mention that to rescue DA we need the concept of souls. Only this way is the idea of me being a different human being coherent.

All probabilistic models are approximations. This model is less unreasonable and, considering where the current sanity waterline for anthropics is, this is good enough. There is still room for improvement, of course. As long as we are not talking about me being born to different parents but simply having a different birth rank - I'm born a bit earlier, while someone else is born a bit later, for example - then no souls are required.

This post highlights my problem with your approach: I just don't see a clear logic dictating which interpretation to use in a given problem—whether it's the specific first-person instance or any instance in some reference class. 

When Alice meets Bob, you are saying she should construe it as "I meet Bob in the experiment (on any day)" instead of "I meet Bob today" because "both awakenings are happening to her, not another person". This personhood continuity, in your opinion, is based on what? Given you have distinguished the memory erasure problem from ... (read more)

1Ape in the coat
Causality. Two time states of a single person are causally connected, while two clones are not. Probability theory treats independent and non-independent events differently. The fact that it fits the basic intuition for personal identity is a nice bonus.

Yes it would. I find the fact that these problems are put in the same category of "anthropic problems" quite unfortunate, as they have testably different probability theoretic properties. For example, for Sleeping Beauty the correct position is double halfism, while for fissure it is lewisian halfism.

Okay, that sounds like an interesting problem. Let's formulate it like this: Alice is put to sleep, then a coin is tossed. On Heads she is awakened on Monday. On Tails another coin is tossed:
1. Either she is awakened both on Monday and on Tuesday with memory erasure
2. Or fissure happens. Alice1 is awakened on Monday, Alice2 is awakened on Tuesday

What do we have, probability-wise, on an awakening on the unknown day? 50% for Heads, 50% for Tails, 25% fissure, 25% memory erasure, 12.5% to be Alice1/Alice2.

Now, suppose Alice meets Bob, who is awakened on a random day. Bob updates to 2/3 in favor of Tails, as he meets an Alice in the experiment with 75% probability. But for a particular Alice the probability to meet Bob in the experiment is only 1/4 + 2/8 + 1/8 = 5/8. So her probability that the initial coin is Heads: P(H1|MeetsBob) = P(MeetsBob|H1)P(H1)/P(MeetsBob) = 1/2 ∗ 1/2 ∗ 8/5 = 40%

Now, I think in this particular case there is not much difference between fissure and cloning. There would apparently be a difference if we were talking about a person who was about to participate in the experiment, instead of a person in the middle of it. Because the current participant can be in a state of uncertainty about whether she is a clone or not, while the future participant is pretty sure that she is not going to be a clone, and thus can omit this possibility from the calculations. But yeah, I should probably write a separate post about such scenarios, after I
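The 5/8 and 40% figures above can be sanity-checked with a quick Monte Carlo sketch (my own, not from the thread). It assumes Bob's day is a fair 50/50 draw, and that in the fission branch "a particular Alice" means Alice1 or Alice2 chosen uniformly:

```python
import random

def trial(rng):
    # Initial coin: Heads -> Alice is awake on Monday only.
    heads = rng.random() < 0.5
    if heads:
        awake_days = {"Mon"}
    elif rng.random() < 0.5:
        awake_days = {"Mon", "Tue"}            # Tails + memory erasure: awake both days
    else:
        awake_days = {rng.choice(["Mon", "Tue"])}  # Tails + fission: "I" am Alice1 or Alice2
    bob_day = rng.choice(["Mon", "Tue"])       # Bob is awakened on a random day
    return heads, bob_day in awake_days

rng = random.Random(0)
results = [trial(rng) for _ in range(200_000)]
meets = sum(m for _, m in results)
p_meet = meets / len(results)
p_heads_given_meet = sum(h for h, m in results if m) / meets
print(p_meet)             # close to 5/8 = 0.625
print(p_heads_given_meet) # close to 0.40
```

The simulation takes the "particular Alice" view from the comment; Bob's own 75%/two-thirds update corresponds to counting whether *any* Alice is awake on his day instead.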

I guess my main problem with your approach is that I don't see a clear rationale for which probability to use, or when to interpret it as "I see green" and when to interpret it as "Anyone sees green", when both statements are based on the fact that I drew a green ball. 

For example, my argument is that after seeing the green ball, my probability is 0.9, and I shall make all my decisions based on that. Why not update the pre-game plan based on that probability? Because the pre-game plan is not my decision. It is an agreement reached by all participants... (read more)

I maintain the memory erasure and fission problems are similar because I regard the first-person identification as applying equally to both questions. Both the inherent identifications of "NOW" and "I" are based on the primitive perspective. I.e., to Alice, today's awakening is not the other day's awakening; she can naturally tell them apart because she is experiencing the one today. 

I don't think our difference comes from the non-fissured person always staying in Room 1 while the fissured persons are randomly assigned either Room 1 or Room 2. Even if the exper... (read more)

1Ape in the coat
Well, sure, but nothing is preventing her from also realizing that both of the awakenings are happening to her, not some other person. That both today's and tomorrow's awakenings are causally connected to each other even if she has her memory erased, contrary to the fissure problem, where there are actually two different people in two rooms with their own causal history henceforth.

Alice is indeed unable to observe the event "I didn't see Bob at all". Due to the memory erasure she can't distinguish between "I don't observe Bob today but will observe him tomorrow/observed him yesterday" and "I do not observe Bob in this experiment at all". So when Alice doesn't see Bob she keeps her credence at 50%.

But why doesn't she also observe "I see Bob on one of the two days", if she sees Bob on a specific day? Surely today is one of the two days. This seems like a logical necessity.

Suppose there is no Bob. Suppose: The Beauty is awakened on Monday with 50% chance. If she wasn't awakened, a fair coin is tossed. On Tails the Beauty is awakened on Tuesday. Do you also think that the Beauty isn't supposed to update in favor of Tails when she awakes in this case?
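The no-Bob variant in the last paragraph is easy to check numerically. A sketch (my own labeling choice: "Tails" counts as false in the branch where no coin is tossed at all, so the unconditional prior for Tails is 1/4):

```python
import random

rng = random.Random(1)
awake_count = tails_and_awake = 0
for _ in range(200_000):
    if rng.random() < 0.5:
        awake, tails = True, False   # awakened on Monday; no coin is tossed
    else:
        tails = rng.random() < 0.5   # coin tossed only when there was no Monday awakening
        awake = tails                # on Tails she is awakened on Tuesday
    awake_count += awake
    tails_and_awake += awake and tails
p_tails_given_awake = tails_and_awake / awake_count
print(p_tails_given_awake)  # close to 1/3, up from the unconditional 1/4
```

So in this setup awakening is indeed evidence for Tails: P(Tails|awake) = (1/4)/(3/4) = 1/3.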

If you use this logic not for the latitude you are born at but for your birth rank among human beings, then you get the Doomsday argument. 

To me the latitude argument is even more problematic as it involves problems such as linearity. But in any case I am not convinced of this line of reasoning. 

P.S. 59N is really, really high. Anyway, if you use that information to make predictions about where humans are born generally, latitude-wise, it will be way, way off. 

I think this highlights our difference, at least in the numerical sense, in this example. I would say Alex and Bob would disagree (provided Alex is a halfer, which is the correct answer in my opinion). The disagreement is again based on the perspective-based self-identification. From Alex's perspective, there is an inherent difference between "today's awakening" and "the other day's awakening" (provided there are actually two awakenings). But to Bob, either of those is "today's awakening"; Alex cannot communicate the inherent difference from her perspective t... (read more)

1Ape in the coat
Yes! This is one of the few objective disagreements we have and I'm very excited to figure it out! You seem to treat different awakenings of Alice as if they were different people, in an attempt to preserve the similarity between memory-erasure Sleeping Beauty type problems and fissure type problems. While I notice that these problems are different. The difference is that in Sleeping Beauty P(Heads|Monday) = 1/2, while in Fissure, where the non-fissured person is always in Room1 and fissured people are randomly assigned either Room1 or Room2, P(Heads|Room1) = 2/3. Is it our crux?

We both argue the two probabilities, 0.5 and 0.9, are valid. The difference is how we justify them. I have held that "the probability of mostly-green-balls" denotes different concepts if they are from different perspectives: From a participant's first-person perspective, the probability is 0.9. From an objective outsider's perspective, even after I drew a green ball, it is 0.5. The difference comes from the fact that the inherent self-identification "I" is meaningful only to the first person. Which is the same reason for my argument for perspective disagreemen... (read more)

1Ape in the coat
I think I mostly agree with that. I agree that there can be valid differences in people's perspectives. But I reduce them to differences in the possible events that people can or can't observe. This allows reducing all the mysterious anthropic stuff to simple probability theory and makes the reasoning more clear, I believe.

As I've written in the post, my personal probability is 0.9. More specifically, it's the probability that the coin is Heads, conditional on me seeing green: P(Heads|ISeeGreen) = P(ISeeGreen|Heads)P(Heads)/P(ISeeGreen) = 0.9. But the probability that the coin is Heads, conditional on any person seeing green, is 0.5: P(Heads|AnySeesGreen) = P(AnySeesGreen|Heads)P(Heads)/P(AnySeesGreen) = 0.5. This is because, while I may or may not see green, someone from the group always will. Me in particular and any person have different possible events that we can observe. Thus we have different probabilities for these events. If we had the same possible events, for example because I'm the only person in the experiment, then the probabilities would be the same. And then you just check which probability is relevant to which betting scheme. In this case it's the probability for any person, not for me.

Of course it would look like that from inside the SSA vs SIA framework. But that's because the framework is stupid. Imagine there is a passionate disagreement about what color the sky is. Some people claim that it's blue, while other people claim that it's black. There is a significant amount of evidence supporting both sides. For example, a group of blue sky supporters went outside during the day and recorded that the sky is blue. Then a group of black sky supporters did the same during the night and recorded that the sky is black. Both groups argue that if the other group had made their experiment from the other side of the planet, the result would be different. With time, two theories are developed: Constant Day Assumption and Constant Night Assumption. Followers of CDA claim tha
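Both conditional probabilities in this reply can be reproduced by simulation. The thread doesn't restate the setup at this point, so the sketch below assumes the usual version behind the 0.9 figure: 20 participants, 18 of whom get green balls on Heads and only 2 on Tails, with "I" modeled as one fixed participant among the 20:

```python
import random

rng = random.Random(2)
N, GREEN_HEADS, GREEN_TAILS = 20, 18, 2   # assumed setup behind the 0.9 figure
n = 200_000
i_green = i_green_heads = any_green = any_green_heads = 0
for _ in range(n):
    heads = rng.random() < 0.5
    greens = GREEN_HEADS if heads else GREEN_TAILS
    my_ball_green = rng.randrange(N) < greens  # "I" am a fixed slot; balls assigned at random
    if my_ball_green:
        i_green += 1
        i_green_heads += heads
    if greens > 0:                             # someone always sees green in this setup
        any_green += 1
        any_green_heads += heads
print(i_green_heads / i_green)      # close to 0.9
print(any_green_heads / any_green)  # close to 0.5
```

The asymmetry is exactly the one stated above: "I see green" happens in 50% of runs, while "anyone sees green" happens in all of them.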

I am trying to point out the difference between the following two: 

(a) A strategy that prescribes all participants' actions, with the goal of maximizing the overall combined payoff, in the current post I called it the coordination strategy. In contrast to: 

(b) A strategy that applies to a single participant's action (mine), with the goal of maximizing my personal payoff; in the current post I called it the personal strategy. 

I argue that they are not the same thing; the former should be derived from an impartial observer's perspective,... (read more)

If one person is created in each room, then there is no probability of "which room I am in", because that is asking "which person am I". To arrive at any probability you need to employ some sort of anthropic assumption. 

If 10 persons are randomly assigned (or assigned according to some unknown process), the probability of "which room I am in" exists. No anthropic assumption is needed to answer it. 

You can also find the difference using a frequentist model by repeating the experiments. The latter question has a strategy that could maximize "my" personal interest. The former doesn't. It only has a strategy that, if abided by everyone, could maximize the group interest (coordination strategy). 

2avturchin
We can experimentally test this. I can treat the place I was born as random relative to its latitude = 59N. I ignore everything I know about population distribution and spherical geometry and ask a question: assuming that I was born in the middle of all latitudes, what is the highest possible latitude? It will be double my latitude, or 118 - which is reasonably close to the real answer of 90. From this I conclude that I can use information about my location as a random sample and use it for some predictions about things I can't observe. 
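The doubling trick here is a one-sample uniform-maximum estimator: if an observation x is uniform on [0, max], then 2x has expectation max. A minimal sketch of that property (the uniformity is precisely the assumption the reply above disputes, since real birth latitudes are far from uniformly distributed):

```python
import random

rng = random.Random(3)
true_max = 90.0  # highest possible latitude
# Each "observer" sees one latitude drawn uniformly from [0, true_max]
# and estimates the maximum as twice their own observation.
estimates = [2 * rng.uniform(0, true_max) for _ in range(100_000)]
mean_estimate = sum(estimates) / len(estimates)
print(mean_estimate)  # close to 90 on average
```

The estimator is unbiased on average but high-variance: any single observer (e.g. one born at 59N estimating 118) can be far off, which is why a single data point supports only a rough prediction.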

The probability of 0.9 is the correct one to use to derive "my" strategies maximizing "my" personal interest. E.g., if all other participants decide to say yes to the bet, what is your best strategy? Based on the probability of 0.9 you should also say yes. But based on the probability of 0.5 you would say no. However, the former will yield you more money. This would be obvious if the experiment were repeated a large number of times. 

You astutely pinpointed that the problem of saying yes is not beneficial because you are paying the idiot versions of you's d... (read more)

Yep, under PBR, perspective—which agent is the "I"—is primitive. I can take it as given, but there is no way to analyze it. In other words, self-locating probability like "what is the probability that I am L" is undefined. 

2avturchin
But can we ask another question: "where am I located?" For example, I know that I am avturchin, but I don't know in which of 10 rooms I am located, and assuming that 9 of them are red outside and 1 green, I can bet there is a 0.9 chance that I am in a red one. It doesn't matter here whether I am just one person entering the rooms, or there are other people in the rooms (in equal numbers), or even that my copies are in each room. 

The Sleeping Beauty problem and this paradox are highly similar; I would say they are caused by the same thing—switching of perspectives. However, there is one important distinction. 

For the current paradox, there is an actual sampling process for the balls. Therefore there is no need to assume a reference class of "I". Take who I am—which person's perspective I am experiencing the world from—as a given, and the ball-assigning process treats "I" and other participants as equals. So there is no need to interpret "I" as a random sample fro... (read more)

4Dagon
I think we're more in agreement than at odds, here. The edict to avoid mixing or switching perspectives seems pretty strong. I'm not sure I have a good mechanism for picking WHICH perspective to apply to which problems, though. The setup of this (and of Sleeping Beauty) is such that using the probability of 0.9 is NOT actually in your own interest. This is because of the cost of all the times you'd draw red and have to pay for the idiot version of you who drew green - the universe doesn't care about your choice of perspective; in that sense it's just incorrect to use that probability. The only out I know is to calculate the outcomes of both perspectives, including the "relevant counterfactuals", which is what I struggle to define. Or to just accept predictably-bad outcomes in setups like these (which is what actually happens in a lot of real-world equilibria).

Numerically it is trivial to say the better thing to do (for each bet, for the benefit of all participants) is not to update. The question is of course how we justify this. After all, it is pretty uncontroversial that the probability of the urn with mostly green balls is 0.9 when I receive the randomly assigned ball and it turns out to be green. You can enlist a new type of decision theory such as UDT, or a new type of probability theory which allows two probabilities to both be valid depending on the betting scheme (like Ape in the Coat did). Wha... (read more)

1Ape in the coat
I absolutely didn't create a new type of probability theory. People just happen to have some bizarre misconceptions about probability theory, like "you are always supposed to use the power set of the sample space as your event space" or "you can't use more than one probability space to describe a problem". And I point out that nothing in formal probability theory actually justifies such claims. See my recent post and discussion with Throwaway2367 for another example.

Late to the party, but I want to say this post is quite on point with the analysis. I just want to add my reading of the problem—as a supporter of CDT—which has a different focus. 

I agree the assumption that every person would make the same decision as I do is deeply problematic. It may seem intuitive if the others are "copies of me", which is perhaps why this problem was first brought up in an anthropic context. CDT inherently treats the decision maker as an agent apart from his surrounding world, outside of the causal analysis scope. Assuming "othe... (read more)

Betting and reward arguments like this are deeply problematic in two senses:

  1. The measure of the objective is the combined total reward to everyone in a proposed reference class, like the 20 "you"s in the example. Usually the question tries to boost the intuition for this by saying all of them are copies of "you". However, even if the created persons (they don't even have to be persons; AIs or aliens will work just fine) are vastly different, it does not affect the analysis at all. Since the question is directed at you, and the evidence is your observa
... (read more)
2Ape in the coat
Yes, this is the whole point. Probability theory doesn't have any special case for anthropics. Nor is it supposed to have one. Probability theory should be able to lawfully deal with all kinds of bets and rewards. The reason why this particular type of bet was looked into is that it apparently led to a paradox which I wanted to resolve.

I thought this assumption wasn't subtle at all. There are two possibilities: either "I" is a person who is always meant to see green, or "I" is a person who could see either green or red and was randomly sampled. The first case is trivial - if I was always supposed to see green then I'm not supposed to update my probability estimate, and thus there is no paradox. So we focus on the other case as the more interesting one.

I don't see how it is the case. If anything, it's the opposite. Paradoxes happen when people try to treat the first person perspective as somewhat more than just a set of possible outcomes, and anthropics as something beyond simple probability theory. As if there is some extra rule about self-selection, as if the universe is supposed to especially care about our personal identities for some reason. Then they try to apply this rule to every other anthropic problem and get predictably silly results. But as soon as we do not do that and just lawfully use probability theory as it is - all the apparent paradoxes resolve, as this post demonstrates. "I" is not "perspectiveless" but corresponds to a specific set of possible outcomes, thus we have a run-of-the-mill probability problem. There may be disagreements about what set of possible outcomes correctly represents the first person perspective in a specific situation - usually in problems where different numbers of people are created in different outcomes - but this problem isn't an example of it.

On priors, a theory that claims that anthropics is a special case is more complicated, and thus less likely, than a theory that anthropics is not special in any way. Previously

The more I think about it, the more certain I am that many unsolved problems, not just anthropics, are due to the deep-rooted habit of view-from-nowhere reasoning. Recognizing perspective as a fundamental part of logic would be the way out.

Problems such as anthropics, interpretive challenges of quantum mechanics, CDT's problem of non-self-analyzing, how agency and free will coexist with physics, Russell's paradox and Gödel's incompleteness theorems, etc.

Maybe I am the man with a hammer looking for nails. Yet deep down I have to be honest to myself and say I don't think that's the case. 

2avturchin
If we completely embrace this view, will it end in solipsism?  The whole idea of the world existing without us is based on view-from-nowhere reasoning.
4Gunnar_Zarncke
I agree that there are multiple interpretative challenges around agency that have a common root. However, I think decision theories and free will are distinct from Russell's paradox and mathematical consistency problems. I think PBR helps with the first but not with the second. At some level, there is a commonality to all "problems" in and with symbolic reasoning. But I think PBR will not help with it, as it is itself symbolic.

Well, I didn't expect this to be the majority opinion. I guess I was too in my head. 

But to explain my rationale: The effects of the two drugs only differ during the operation; their end results are identical. So after the operation, barring external records like bank account information, there is no way to even tell which drug I took; the results would be the same. Taking external records into consideration, the extra dollar in the bank would certainly be more welcome. 

The memory-inhibiting part was supposed to preclude the journey considerati... (read more)

1Nathaniel Monson
I don't think the end result is identical. If you take B, you now have evidence that, if a similar situation arises again, you won't have to experience excruciating pain. Your past actions and decisions are relevant evidence of future actions and decisions. If you take drug A, your chance of experiencing excruciating pain at some point in the future goes up (at least your subjective estimation of the probability should probably go up at least a bit.) I would pay a dollar to lower my best rational estimate of the chance of something like that happening to me--wouldn't you?

Understandable. As much as I firmly believe in my theory, I have to admit I have a hard time making it look convincing. 

The conflict arises when the self at the perspective center is making the decision but is also being analyzed. With CDT it leads to a self-referential-like paradox: I'm making the decision (which according to CDT is based on agency and unpredictable) yet there really is no decision but merely generating an output.

Precommitments sidestep this by saying there is no decision at the point being analyzed. It essentially moves the decision to a different observer-moment. Thus allowing the analysis to be taken into account in the decision analysis. In Newcomb, th... (read more)

2Gunnar_Zarncke
I think that's maybe the point people can agree on: To build a machine that performs well. That goes beyond building a decision procedure that performs well in many specific situations (that would each correspond to observer moments) but not in a succession of them, or in situations that would require its own analyzability.  Building such a machine requires specifying what it optimizes over, which will be potentially very many observer moments.

I didn't "choose" to generalize my position beyond conscious beings. It is an integral part of it. If perspectives are valid only for things that are conscious (however that is defined), then perspective has some prerequisite and is no longer fundamental. It would also give rise to the age-old reference class problem and no longer be a solution to anthropic paradoxes. E.g. are computer simulations conscious? answers to that would directly determine anthropic problems such as Nick Bostrom's simulation argument. 

Phenomenal consciousness is integral to p... (read more)

Consciousness has many contending definitions. E.g., if you take the view that consciousness is identified by physical complexity and the ability to process data, then it doesn't have anything to do with perspective. I'm endorsing phenomenal consciousness, as in the hard problem of consciousness: we can describe brain functions purely physically, yet that does not resolve why they are accompanied by subjective feelings. And this "feeling" is entirely first-person; I don't know your feelings, because otherwise I would be you instead of me. "What it means ... (read more)

-1[anonymous]
But you've generalised your position on perspective beyond conscious beings. My understanding is that perspective is not reducible to non-perspective facts in the theory because the perspective is contingent, but nothing there explicitly refers to consciousness. You can, mutatis mutandis, adopt a different perspective in the description of a problem and arrive at the right conclusion. There's no appeal to a phenomenal perspective there. The epistemic limitations of minds that map to the idea of a perspective-centric epistemology and metaphysics come from facts about brains.

I do think SIA and SSA are making extraordinary claims and the burden of proof is on them. I have proposed for several years that assuming the self to be a random sample is wrong. That is not the problem I have with this argument. What I disagree with is that your argument depends on phrases and concepts such as "'your' existence" and "who 'you' are" without even attempting to define what this 'you' refers to. My position is that it refers to the self, based on the first-person perspective, which is fundamental, a primitive concept. So it doesn't req... (read more)

1Ape in the coat
The thing is, what "you" refers to fully depends on the setting of the experiment, which is whether there is random sampling going on or not. In FBJE you are a person in a blue jacket regardless of the coin toss outcome. In BJE you are one of the created people and can either have a blue jacket or not, with probabilities depending on the coin toss. Part of the confusion of anthropics is thinking that "you" always points to the same thing in any experiment setting, and what I'm trying to show is that this is not the case. And this approach is clearly superior to both SSA and SIA, which claim that it has to always be one particular way, biting all the ridiculous bullets and presumptuous cases on the way. Is it true, though? I agree it's easy to just accept as an axiom that "selfness" is some fundamental property and try to build your ontological theory on this assumption. But the more we learn about the ordered mechanism of the universe, the less probable subjective idealism becomes compared to materialism. I believe, on our current level of knowledge, it doesn't really seem plausible that the "first person perspective" is somehow fundamental. In the end it's made from quarks like everything else. No, this is treating you as a random sample when you actually are randomly sampled. I was coming from the assumption that people have a good intuitive understanding of what counts as a random sample and what doesn't. But I see why this may be confusing in its own right, and I've made a note to go deeper into the question in a future post. For now I'll just point out that a regular coin toss counts as a random sample between two outcomes, even if it was made a year ago. The same logic applies here. Well, I can come up with some plausible-sounding settings, but this doesn't really matter for the general point I'm making. Whatever the process is that guarantees that you in particular will always have the blue jacket, the logic stays the same. And if there is no such pro

Something's not adding up. You said that the anthropic paradox is not about the first-person perspective or consciousness. But later:

But in ISB there are no iterations in which you do not exist. The number of outcomes in which you are created equals the total number of iterations.

The most immediate question is the definition of "you" in this logic. Why can't thirders define "you" as a potentially existing person? In which case the statement would be false. If you define it as an actually existing person then which one? Seems to me you are using the word "you" to l... (read more)

3Ape in the coat
I'm not saying that there are never any differences between first- and third-person perspectives in any possible setting. I'm saying that all these differences are explained by different possible outcomes and expected evidence (general principles of probability theory) and do not require any additional metaphysics. My next post will focus more specifically on this idea. They can in principle. SIA followers may claim that people are indeed randomly sampled from a finite set of immaterial souls to inhabit bodies. But then the burden of proof would be on them to show some evidence for such an extraordinary claim. As long as there is no reason to expect that your existence is a random sample, we shouldn't assume that it's the case. If you are the person that is guaranteed to have a blue jacket then this is FBJE and indeed the analysis changes, as you cannot lawfully update on the fact of having a blue jacket. However, if the causal process creating you didn't particularly care about specifically you having or not having a blue jacket, if it was just two people created, the first always with a blue jacket and the second always without, and, once again, you were not necessarily meant to be the first, then this counts as random sampling and the BJE analysis stands.

I don't feel there is enough common ground for effective discussion. This is the first time I have seen the position that the sleeping beauty paradox disappears when the Heads awakening is sampled between Monday and Tuesday. 

1Ape in the coat
Oh, sorry, I misinterpreted you. I thought you meant that the Tails outcome is randomly sampled, not the Heads outcome. So that we would have 1 awakening on Monday on Heads and 1 awakening on either Monday or Tuesday on Tails, and then, indeed, there is no paradox. Yeah, as far as I can tell, random sampling on Heads doesn't change anything; it just makes it harder to track the outcomes. You may read my recent post to better grasp how and what kind of random sampling is relevant to anthropic problems.

Can you point out the difference: why are Tails-and-Monday and Tails-and-Tuesday causally connected, while the 100 people created by the incubator are not, being independent outcomes instead?

Nothing is stopping us from perceiving the situations as different possible worlds, not different places in the same world.

All this post is trying to argue is that statements like this require some justification. Even if the justification is a mere stipulation, it should at least be recognized as an additional assumption. Given that anthropic problems often lead to controversial paradoxes, it is prudent to examine every assumption we make in solving them. 

1Ape in the coat
  Sure. Tails and Tuesday always happens after Tails and Monday with the same person, while each of the hundred people is created in only one room. Here I've shown why this is a big deal. There is a general problem with applying probability theory to moments in time due to their connectedness. We can in principle design an experiment that makes this connectedness irrelevant. But SB isn't that, because it simultaneously tries to track randomly sampled results of a coin toss and non-randomly-sampled days. When we fix the day we can meaningfully talk about P(Heads|Monday) and P(Tails|Monday). When we fix the outcome of the coin toss we can meaningfully talk about P(Monday|Heads) and P(Monday|Tails). But as soon as we try to combine them together... well, then we have to talk about "centered possible worlds", for which we do not actually have a proper mathematical framework, which means we are just unlawfully making things up. Totally agree with this point. I just believe that I've already found the source of these paradoxes, and it has to do with wrongly applying probability theory, not with whether the problem is anthropic or not. But yeah, I could be missing something here, and it's important to be prudent with such things.

If we modify the original Sleeping Beauty problem such that on Heads you will be awakened on one randomly sampled day (either Monday or Tuesday), would you change your answer to 1/3?

1Ape in the coat
This kind of sampling actually makes Halfism true. You can see that P(Heads|Monday) = 2/3 in this setting, contrary to classical SB where P(Heads|Monday) = 1/2. But the paradox disappears nevertheless. To make Thirdism true we need the implicit assumption that awakened states are randomly sampled to actually be true. So the causal process that determines the awakenings shouldn't be based on a coin toss, but on a random generator with three states: 0, 1, 2. If the generator produces 0, the coin is set to Heads and the Beauty is awakened on Monday. If 1, the coin is set to Tails and the Beauty is also awakened on Monday. And if the generator produces 2, the coin is set to Tails and the Beauty is awakened on Tuesday. Again the paradox disappears, even though the experiment is still as anthropic as ever. 
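The three-state generator described here is easy to check by simulation (a sketch; the function name and code are mine, assuming the three states are equiprobable as stated):

```python
import random

def generator_awakening():
    # Three equiprobable states, as described above:
    # 0 -> Heads, awakened Monday; 1 -> Tails, Monday; 2 -> Tails, Tuesday.
    s = random.randrange(3)
    if s == 0:
        return 'Heads', 'Monday'
    if s == 1:
        return 'Tails', 'Monday'
    return 'Tails', 'Tuesday'

awakenings = [generator_awakening() for _ in range(100000)]
p_heads = sum(c == 'Heads' for c, _ in awakenings) / len(awakenings)
mondays = [c for c, d in awakenings if d == 'Monday']
p_heads_monday = sum(c == 'Heads' for c in mondays) / len(mondays)
print(round(p_heads, 2))         # about 0.33: Thirdism holds in this setting
print(round(p_heads_monday, 2))  # about 0.5
```

Each run produces exactly one awakening, so counting runs is counting awakenings: P(Heads|Awakening) comes out near 1/3 and P(Heads|Monday) near 1/2, matching the claim that the paradox disappears.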

Anthropic paradoxes happen only when we use events representing different self-locations in the same possible world. If the paradoxes are just problems of probability theory then why this limited scope? 

I do consider anthropic problems, in one sense or another, to be metaphysical. And I know there are people who disagree with this. But wouldn't stipulating anthropic paradoxes are solely probability problems also require arguments to justify? Apart from "a rule of thumb"?

1Ape in the coat
My current hypothesis is that anthropic paradoxes happen when people use probability theory incorrectly, in an inappropriate setting, making incorrect assumptions: mostly assuming things to be randomly sampled when they are not, ignoring causality and the law of conservation of expected evidence. Of course. I'm currently finishing a post dedicated to this among other things. Here is an example from it, which I call Bargain Sleeping Beauty (BSB). I claim that here P(Heads|Awakening) = 1/3, despite my being Double Halfer/Halfer in the Classic/Incubator versions respectively. And the important difference isn't that BSB isn't an anthropic problem, but that here there actually is a random sample between two people who would be put to sleep and awakened on Heads. So being awakened is evidence for Tails. And of course there is also this example of an anthropic paradox which doesn't become less paradoxical when remade into a non-anthropic problem.

Like Question 1 and traditional probability problems, Question 3's events reflect different possible worlds, different outcomes of the room-assigning experiment.  Question 2's supposed events reflect different locations of the self in the same possible world, i.e. different centred worlds. 

Controversial anthropic probability problems occur only when the latter type is used. So there is good reason to think this distinction is significant. 

1Ape in the coat
Hmm. I don't think you require the framework of centered possible worlds for question 2. Nothing is stopping us from perceiving the situations as different possible worlds, not different places in the same world. There are a hundred independent elementary outcomes ("I am in room X" for X up to 100), so we can define a probability space and satisfy Kolmogorov's axioms. On the other hand, consider classic Sleeping Beauty. Heads-and-Monday, Tails-and-Monday, and Tails-and-Tuesday are not three independent outcomes (the last two are causally connected), so normal probability theory is not applicable, and people end up doing the shenanigans with centered possible worlds.

It seems earlier posts and your post have defined anthropic shadow differently in subtle but important ways. The earlier posts by Christopher and Jessica argued AS is invalid: that there should be updates given that I survived. Your post argued AS is valid: that there are games where no new information gained while playing can change your strategy (no useful updates). The former focuses on updates, the latter on strategy. These two positions are not mutually exclusive. 

Personally, the concept of "useful update" seems situational. For exampl... (read more)

2dr_s
True enough, I guess. I do wonder how to reconcile the two views though, because the approach you describe that allows you to update in case of a basic game is actively worse for the second kind of game (the one with the blanks). In that case, using the approach I suggested actually produces a peaked probability distribution on b that eventually converges to the correct value (well, on average). Meanwhile just looking at survival produces exactly the same monotonically decaying power law. If the latter potentially is useful information, I wonder how one might integrate the two.

To my understanding, anthropic shadow refers to the absurdum logic in Leslie's Firing Squad: "Of course I have survived the firing squad; that is the only way I can make this observation. Nothing surprising here." Or reasoning such as: "I have played Russian roulette 1000 times, but I cannot increase my belief that there is actually no bullet in the gun, because surviving is the only observation I can make."  

In the Chinese Roulette example, it is correct that the optimal strategy for the first round is also optimal for any following round. It is a... (read more)

2dr_s
But the general point I wanted to make is that "anthropic shadow" reflects a fundamental impossibility of drawing useful updates. From within the boundaries of the game, you can't really say anything other than "well, I'm still playing, so of course I'm still playing". You can still feel like you update as a person because you wouldn't cease existing if you lost. But my point was that the essence of the anthropic shadow is that if you think as the player, an entity that in a sense ceases to exist as soon as that specific game is over, then you can't really update meaningfully. And that is reflected in the fact that you can't get any leverage out of the update. At least, that was my thought when writing the post. I am thinking now about whether that can change if we design a game such that you can actually get meaningful in-game updates on your survival; I think having a turn-dependent survival probability might be key for that. I'll probably return to this.

That's not it. In your simulation you give equal chances to Heads and Tails, and then subdivide Tails into two equiprobables, T1 and T2, while keeping all the probability of Heads as H1. It's essentially a simulation based on SSA. Thirders would say that is the wrong model because it only considers cases where the room is occupied: H2 never appears in your model. Thirders suggest there is new info when waking up in the experiment because waking up rejects H2. So the simulation should divide both Heads and Tails into the equiprobables H1, H2, T1, T2. Waking up rejects H2, which pushes P(T) to 2/3, and then learning it is room 1 pushes it back down to 1/2. 

1Ape in the coat
The other way around. My simulation is based on the experiment as stated. SSA then tries to generalise this principle, assuming that all possible experiments are the same, which is clearly wrong. The thirder position in incubator-like experiments requires assuming that SBs are randomly sampled, as if God selects a soul from the set of all possible souls and materialises it when an SB is created, and thus thirdism inevitably fails when that's not the case. I'm going to highlight this in the next post, which explores multiple similar but slightly different scenarios.

To thirders, your simulation is incomplete. It should first include randomly choosing a room and finding it occupied. That pushes the probability of Tails to 2/3; knowing it is room 1 pushes it back to 1/2. 
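A minimal sketch of that thirder model (the rejection-sampling structure is my own, not from the thread): split both coin outcomes into two equiprobable room-slots H1, H2, T1, T2, sample a room, and reject the trial when the sampled room is unoccupied (H2).

```python
import random

results = []  # (coin, room) pairs for trials where the sampled room is occupied
for _ in range(100000):
    coin = 'Heads' if random.random() < 0.5 else 'Tails'
    room = 1 if random.random() < 0.5 else 2  # H1/H2 or T1/T2, equiprobable
    if coin == 'Heads' and room == 2:
        continue  # H2: room 2 is unoccupied on Heads, so this trial is rejected
    results.append((coin, room))

p_tails = sum(c == 'Tails' for c, _ in results) / len(results)
room1 = [c for c, r in results if r == 1]
p_tails_room1 = sum(c == 'Tails' for c in room1) / len(room1)
print(round(p_tails, 2))        # about 0.67: waking up pushes P(Tails) to 2/3
print(round(p_tails_room1, 2))  # about 0.5: learning it is room 1 pushes it back
```

The surviving outcomes H1, T1, T2 are equiprobable, which is exactly the thirder accounting described above.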

1Ape in the coat
Code for the incubator function includes a random choice of a room on Tails:

```python
import random

def incubator(heads_chance=0.5):
    if random.random() >= heads_chance:  # result of the coin toss
        coin = 'Tails'
        room = 1 if random.random() >= 0.5 else 2  # room sample
    else:
        coin = 'Heads'
        room = 1
    return room, coin
```

One thing that should be noted is that while Adam's argument is influential, especially since it was (to my knowledge) the first to point out that halfers have to either reject Bayesian updating upon learning it is Monday or accept that a fair coin yet to be tossed has a probability other than 1/2, thirders in general disagree with it in some crucial ways. Most notably, Adam argued that there is no new information when waking up in the experiment. In contrast, most thirders endorsing some version of SIA would say waking up in the experiment is evidence favouring Tails, whi... (read more)

1Ape in the coat
I think there is a weird knot of contradictions hidden there. On one hand, Elga's mathematical model doesn't include anything about awakening. But then people rationalize that the update on awakening is the justification for why, according to this model, the probability of a fair coin landing Tails is always 2/3, instead of noticing that the model just returns contradictory results. Which would be a mistake (unless we once again shifted to the anthropic motte), because knowing that you are in Room 1 should update you to 2/3 Heads, as I've shown here:

```python
coin_guess = []
for n in range(100000):
    room, coin = incubator()
    beauty_knows_room1 = (room == 1)
    if beauty_knows_room1:
        coin_guess.append(coin == 'Heads')
print(coin_guess.count(True) / len(coin_guess))  # 0.6688515435956072
```

It's quite curious that "updating on existence" here is equal to not updating on actual evidence. A Thirder who has figured out that they are in Room 1 equals a Halfer who hasn't.

I would also point out that FNC is not strictly a view-from-nowhere theory. The probability updates it proposes are still based on an implicit assumption of self-sampling. 

I really don't like the pragmatic argument against the simulation hypothesis. It demonstrates a common theme in anthropics which IMO is misleading the majority of discussions. By saying that pre-simulation ancestors have an impact on how the singularity plays out, and that therefore we ought to make decisions as if we are real pre-simulation people, it subtly shifts the objective of our decisions. Instead of the default objective of maximizing reward to ourselves, doing what's best for us in our world, it changes the objective to achieving a certain state of the universe con... (read more)

Exactly this. The problem with the current anthropic schools of thought is using this view-from-nowhere while simultaneously using the concept of "self" as a meaningful way of specifying a particular observer. It effectively jumps back and forth between the god's eye and first-person views with arbitrary assumptions to facilitate such transitions (e.g. treating the self as the random sample of a certain process carried out from the god's eye view). Treating the self as a given starting point and then reasoning about the world would be the way to dispel anthropic controversies. 

Let's take the AI driving problem in your paper as an example. The better strategy is regarded as the one that gives the better overall reward across all drivers. Whether the rewards of the two instances of a bad driver should count cumulatively or just once is what divides halfers and thirders. Once that is determined, the optimal decision can be calculated from the relative fractions of good/bad drivers/instances. It doesn't involve taking the AI's perspective in a particular instance and deciding the best decision for that particular instance, which req... (read more)

  1. If you are born a month earlier as a preemie instead of full-term, it can quite convincingly be said that you are still the same person. But if you are born a year earlier, are you still the same person you are now? There would obviously be substantial physical differences: a different sperm and egg, maybe a different gender. If you were among the first few human beings born, there would be few similarities between the physical person that is you in that case and the physical person you are now. So the birth rank discussion is not about if this physical person you reg
... (read more)
1green_leaf
1. That's seemingly quite a convincing reason why you can't be born too early. But what occurs to me now is that the problem can be about where you are, temporally, in relation to other people. (So you were still born on the same day, but with a civilization of total size m, the probability of you having n people precede you is (n/m)⋅100%.) 2. Depending on how "anthropic problem" is defined, that could potentially be true either for all, or for some anthropic problems.

When you say the time of your birth is not special, you are already trying to judge it objectively. For you personally, the moment of your birth is special. And more relevantly to the DA, from a first-person perspective, the moment "now" is special. 

  1. From an objective viewpoint, discussing a specific observer or a specific moment requires some explanation, some process pointing to it, e.g. a sampling process. Otherwise, the discussion fails to be objective by inherently focusing on someone/sometime.
  2. From a first-person perspective, discussions based on "I" and
... (read more)

I didn't explicitly claim so. But it involves reasoning from a perspective that is impartial to any moment. This independence is manifested in its core assumption: that one should regard oneself as randomly selected from all observers in one's reference class, past, present, and future.

1Giskard
I think I don't understand what makes you say that anthropic reasoning requires "reasoning from a perspective that is impartial to any moment". The way I think about this is the following: * If I imagine how an omnitemporal, omniscient being would see me, I imagine they would see me as a randomly selected sample from all humans, past, present and future (which don't really exist for the being). * From my point of view, it does feel weird to say that "I'm a randomly selected sample", but I certainly don't feel like there is anything special about the year I was born. This, combined with the fact that I'm obviously human, is just a from-my-point-of-view way of saying the same thing. I'm a human and I have no reason to believe the year I was born is special == I'm a human whose birth year is a sample randomly taken from the population of all possible humans. What changes when you switch perspectives is just the words, not the point. I guess you're thinking about this differently? Do you think you can state where we're disagreeing?

if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.

I am a little unsure about your meaning here. Say you get a reward for guessing if your number is <5 correctly, then would you also guess your number is <5 each time? 

I'm guessing that is not what you mean; instead, you are thinking that as the experiment is repeated more and more, the relative frequency of you finding your own number >5 would approach 95%. What I am saying is that this belief requires an assumption: treating the "I" as a random sample. For the non-anthropic problem, it doesn't. 

For the non-anthropic problem, why take the detour of asking a different person each toss? You can personally take it 100 times, and since it's a fair die, it would land >5 around 95 times. Obviously guessing yes is the best strategy for maximizing your personal interest. There is no assuming the "I" is a random sample, and no forced transcoding. 
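The repeated-play frequency claimed here is easy to check directly (a sketch; I'm assuming the numbers are drawn uniformly from 1 to 100, which is what makes ">5" a 95% event):

```python
import random

trials = 100000
# Always guess "my number is > 5"; count how often the guess is right.
wins = sum(random.randint(1, 100) > 5 for _ in range(trials))
print(round(wins / trials, 2))  # about 0.95
```

No self-sampling assumption is involved: the frequency is a plain property of the repeated tosses, which is the point of the paragraph above.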

Let me construct a repeatable anthropic problem. Suppose tonight during your sleep you will be accurately cloned with memory preserved. Waking up the next morning, you may find yourself to b... (read more)

1Nox ML
I actually agree with you that there is no single answer to the question of "what you ought to anticipate"! Where I disagree is that I don't think this means that there is no best way to make a decision. In your thought experiment, if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time. My justification for this is that objectively, those who make decisions this way will tend to have more reward and outcompete those who don't. This seems to me to be as close as we can get to defining the notion of "doing better when faced with uncertainty", regardless of if it involves the "I" or not, and regardless of if you are selfish or not. Edit to add more (and clarify one previous sentence): Even in the case where you repeat the die-roll experiment 100 times, there is a chance that you'll lose every time, it's just a smaller chance. So even in that case it's only true that the strategy maximizes your personal interest "in aggregate". I am also neither a "halfer" nor a "thirder". Whether you should act like a halfer or a thirder depends on how reward is allocated, as explained in the post I originally linked to.

Thank you for the kind words. I understand the stance about self-locating probability. That's the part I get most disagreements. 

To me the difference is that for the unfair coin, you can treat the reference class as all tosses of unfair coins whose biases you don't know. Then the symmetry between Heads and Tails holds, and you can say that in this kind of toss the relative frequency would be 50%. But for the self-locating probabilities in the fission problem, there really is nothing pointing to any number. That is, unless we take the average of all agents and discard... (read more)

3Ape in the coat
I don't think that I need to think about reference classes at all. I can just notice that I'm in a state of uncertainty between two outcomes, and as there is no reason to think that any specific one is more likely than the other, I use the equiprobable prior. I believe the ridiculousness of anthropics comes in when the model assumes that I'm randomly selected from a distribution while in reality that's not actually the case. But sometimes it may still be true. So there are situations when self-locating probability is valid and situations when it's not. I think my intuition pump is this: If I'm separated into ten people, 9 of whom are going to wake up in a red room while 1 is going to wake up in a blue room, it's correct to have 9:1 odds in favour of red for my expected experience, because I would actually be one of these 10 people. But if a fair coin is tossed and I'm separated into 9 people who will wake up in red rooms if it's heads, or I'll wake up in a blue room if it's tails, then the odds are 1:1, because the causal process is completely different. I am either one of nine people or one of one based on the result of the coin toss, not an equiprobable distribution. Also, none of these cases include "updating from existence/waking up". I was expected to be existing anyway and got no new information.

In anthropic questions, probability predictions about ourselves (self-locating probabilities) lead to paradoxes. At the same time, they have no operational value, such as in decision-making. In a practical sense, we really shouldn't make such probabilistic predictions. Here in this post I'm trying to explain the theoretical reason against them. 

1conitzer
Not the Doomsday Argument, but self-locating probabilities can certainly be useful in decision making, as Caspar Oesterheld and I argue for example here: http://www.cs.cmu.edu/~conitzer/FOCALAAAI23.pdf and show can be done consistently in various ways here: https://www.andrew.cmu.edu/user/coesterh/DeSeVsExAnte.pdf
1green_leaf
I found two statements in the article that I think are well-defined enough and go into your argument: 1. "The birth rank discussion isn't about if I am born slightly earlier or later." How do you know? I think it's exactly about that. I have x% probability of being born within the first x% of all humans (assuming all humans are the correct reference class - if they're not, the problem isn't in considering ourselves a random person from a reference class, but choosing the wrong reference class). 2. "Nobody can be born more than a few months away from their actual birthday." When reasoning probabilistically, we can imagine other possible worlds. We're not talking about something being the case while at the same time not being the case. We imagine other possible worlds (created by the same sampling process that created our world) and compare them to ours. In some of those possible worlds, we were born sooner or later.

Consciousness is a property of the first person: e.g., to me, I am conscious, but I inherently can't know that you are. Asking whether something is conscious amounts to asking whether you are thinking from that thing's perspective. So there is no typical or atypical conscious being: from my perspective I am "the" conscious being, and if I reason from something else's perspective, then that thing is "the" conscious being instead. 

Our usual notion of being a typical conscious being arises because we are more used to thinking from the perspectives of things similar to us. ... (read more)
