Edit 2: I'm now fairly confident that this is just the Presumptuous Philosopher problem in disguise, which is explained clearly in Section 6.1 here https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u

This is my first post ever on LessWrong. Let me explain my problem. 

I was born in a unique situation — I shall omit the details of exactly what this situation was, but for my argument's sake, assume I was born as the tallest person in the entire world. Or instead suppose that I was born into the richest family in the world. In other words, take as an assumption that I was born into a situation entirely unique relative to all other humans on an easily measurable dimension such as height or wealth (i.e., not some niche measure like "longest tongue"). And indeed, my unique situation is perhaps more immediate and obvious to myself and others than even height or wealth.

For that reason, I've always had an unconscious fear that I'm living in a fake or simulated world. That fear recently entered my awareness. I reasoned a couple of days ago that the fear is motivated by an implicit use of anthropic reasoning. Something along the lines of, "I could have been any human, so the fact that I'm this particular one, this unique human, means there's 'something wrong' with my world. And therefore I'm in a simulation." Something like that. I read through various posts on this site related to anthropic reasoning, including when to use SSA and SIA, but none of them seem to address my concern specifically. Hopefully someone reading this can help me.

To be clear, the question I want answered is the following: "Based on the theory of anthropic reasoning as it is currently understood, from my perspective alone (not your perspective, as the person responding to me, but my own), is my distinctiveness strong evidence for being in a simulation? And if it is, by how much should I 'update' my belief in the simulation given my own observation of my distinctiveness?"

Please let me know if you need any clarifications on this question. The question matters a lot to me, so thank you to anyone who responds.

Edit: In particular, I wonder if the following Bayesian update is sound:

As rough estimates, let Pr(I'm in a simulation) = 0.01, Pr(I'm distinct | I'm not in a simulation) = 0.0001, Pr(I'm distinct | I'm in a simulation) = 0.5 — a high probability since I assume simulated observers are quite likely to be 'special' or 'distinct' with respect to the class of other entities in their simulated world that appear to be observers. (Though perhaps this assumption is precisely my error. Should I be applying SIA here to argue that this latter probability is much smaller? Because simulated worlds in which the other observers are real and not 'illusory' would have a low probability of distinctiveness but far more observers? I don't know if this is sound. Should I be using SSA instead here to make an entirely separate argument?)

From these estimates, we calculate Pr(I'm distinct) ≈ 0.0051, and then using Bayes' theorem, we find Pr(I'm in a simulation | I'm distinct) ≈ 0.98. So even with a quite small 0.01 prior on being in a simulation, the fact that I'm distinct gives me a 98% chance that I'm in a simulation.
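For concreteness, here is a minimal sketch of that update, using the rough estimates above (they are placeholders from this post, not established values):

```python
# A minimal sketch of the update described above, using this post's rough
# estimates (not established values).

p_sim = 0.01                     # prior: Pr(I'm in a simulation)
p_distinct_given_sim = 0.5       # Pr(I'm distinct | simulation) -- the contested assumption
p_distinct_given_real = 0.0001   # Pr(I'm distinct | not simulation), ~1 in 10,000

# Law of total probability
p_distinct = (p_distinct_given_sim * p_sim
              + p_distinct_given_real * (1 - p_sim))

# Bayes' theorem
posterior = p_distinct_given_sim * p_sim / p_distinct

print(f"Pr(I'm distinct) = {p_distinct:.4f}")           # ~0.0051
print(f"Pr(simulation | distinct) = {posterior:.2f}")   # ~0.98
```

Note that the posterior is driven almost entirely by the 0.5 likelihood assumption; the answers below mostly dispute that term.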


6 Answers

Ape in the coat


I assume simulated observers are quite likely to be 'special' or 'distinct' with respect to the class of other entities in their simulated world that appear to be observers. (Though perhaps this assumption is precisely my error.

Yes, it is your main error. Think about how justified this assumption is given your knowledge state. How much evidence do you actually have? Have you checked many simulations before generalizing that principle? Or are you just speculating based on total ignorance?

Should I be applying SIA here to argue that this latter probability is much smaller? Because simulated worlds in which the other observers are real and not 'illusory' would have a low probability of distinctiveness but far more observers? I don't know if this is sound. Should I be using SSA instead here to make an entirely separate argument?

For your own sake, please don't. Both SIA and SSA are also unjustified assumptions out of nowhere and lead to more counterintuitive conclusions.

Instead consider these two problems.

Problem 1:

There is a grey bag filled in equal proportion with balls of a hundred distinct colors. And there is a blue bag, half of whose balls are blue. Someone has put their hand in one of the bags, picked a random ball from it and given it to you. The ball happened to be blue. What are the odds that it's from the blue bag?

Problem 2:

There is a grey bag with some balls. And there is a blue bag with some balls. Someone has put their hand in one of the bags, picked a random ball from it and given it to you. The ball happened to be blue. What are the odds that it's from the blue bag?

Are you justified in believing that Problem 2 has the same answer as Problem 1? That you can simply assume that half of the balls in the blue bag are blue? Not after you've gone and checked a hundred random blue bags and found that in all of them half the balls were blue, but just a priori? And likewise with the grey bag. Where would these assumptions be coming from?
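A minimal sketch of the contrast (assuming the bag was chosen by a fair coin flip, which the problems leave unstated):

```python
# Problem 1: both bags' compositions are known, so the posterior follows directly.
# Assumes the bag was chosen by a fair coin flip (not stated in the problem).

prior_blue_bag = 0.5
p_blue_given_blue_bag = 1 / 2    # half the balls in the blue bag are blue
p_blue_given_grey_bag = 1 / 100  # a hundred colors in equal proportion

posterior_blue_bag = (p_blue_given_blue_bag * prior_blue_bag
                      / (p_blue_given_blue_bag * prior_blue_bag
                         + p_blue_given_grey_bag * (1 - prior_blue_bag)))
print(f"Problem 1: Pr(blue bag | blue ball) = {posterior_blue_bag:.3f}")  # ~0.980

# Problem 2: p_blue_given_blue_bag and p_blue_given_grey_bag are simply unknown.
# No posterior can be computed until you assume them, which is the whole point.
```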

You can come up with some plausible-sounding just-so story. That the people who were filling the bags felt the urge to put blue balls in a blue bag. But what about the opposite just-so story, where people were disincentivized to put blue balls in a blue bag? Or where people paid no attention to the color of the bag? Or all the other possible just-so stories? Why do you prioritize this one in particular?

Maybe you imagine yourself tasked with filling two bags with balls of different colors. And when you inspect your thinking process in such a situation, you feel the urge to put a lot of blue balls in the blue bag.

But why would the way you'd fill the bags be entangled with the actual causal process that filled these bags in the general case? You don't know that the bags were filled by people with your sensibilities. You don't know that they were filled by people to begin with.

Or spin it the other way. Suppose you could systematically produce correct reasoning by simply assuming things like that. What would be the point in gathering evidence then? Why spend extra energy on checking the way blue bags and grey bags are organized if you can confidently deduce it a priori?

But, on second thought, why are you confident that the way I'd fill the bags is not "entangled with the actual causal process that filled these bags in the general case"? It seems likely that my sensibilities reflect, at least in some manner, the sensibilities of my creator, if such a creator exists.

Actually, in addition, my argument still works if we only consider simulations in which I'm the only human and I'm distinct (on my aforementioned axis) from other human-seeming entities. So the 0.5 probability becomes identically 1, and I sidestep your argument. So...

Ape in the coat
Most ways of reasoning are not entangled with most causal processes. When we do not have much reason to think that a particular way of reasoning is entangled, we don't expect it to be. It's possible to simply guess correctly, but it's not probable. That's not a way to systematically arrive at truth. Even if it's true, how could you know that it's true? Where does this "seeming" come from? Why do you think it's more likely that a creator would imprint their own sensibilities in you rather than literally any other possibility?

If you are in a simulation, you are trying to speculate about the reality outside the simulation based on information from inside the simulation. None of this information is particularly trustworthy unless you already know for a fact that the properties of the simulation represent the properties of base reality.

Have you heard about the Follow-The-Improbability game? I recommend you read the linked post and think for a couple of minutes about how it applies to your comment before reading further in my answer. Try to track the flow of improbability yourself and understand why the total improbability doesn't decrease when you consider only a specific type of simulation.

So. You can indeed consider only a specific type of simulation. But if you don't have actual evidence that would justify prioritizing this hypothesis over all the others, the overall improbability stays the same; you just pass the buck to other factors.

Consider Problem 2 once again. You can reason conditionally on the assumption that all the balls in the blue bag are blue while the balls in the grey bag have random colors. That would give you a very strong update in favor of the blue bag... conditional on your assumption being true. And the prior probability of that assumption being true is very low: lower in exact proportion to how much you updated in favor of the blue bag conditional on it, so that when you calculate the total probability it stays the same. Only when you hav

Thank you Ape, this sounds right.

[This comment is no longer endorsed by its author]

For what it's worth, I do think observers who observe themselves to be highly unique on important axes should rationally increase their credence in simulation hypotheses.

Dagon


Everyone is unique, given enough dimensions of measurement.  Humans as a species are unique, as far as we can tell.  "unique on a common, easy metric" is ... rare, but there are still lots of metrics to choose from, so there are likely many who can say that.  If you're one in a million, there are 5 of you in Manhattan and 1400 of you in China.  

The problem with anthropic calculations is the same as any singleton observation - your prior is going to be the main determinant of the posterior.  The problem with this specific calculation is why in the simulator's green earth you'd think the chance of uniqueness on this dimension is greater if you're simulated than if you're not.  If they can simulate you, they can simulate billions or trillions, right?

I don't think anything observable is useful evidence for or against simulation.

Good questions. Firstly, let's just take as an assumption that I'm very distinct — not just unique. In my calculation, I set Pr(I'm distinct | I'm not in a simulation) = 0.0001 to account for this (1 in 10,000 people), but honestly I think the real probability is much, much lower than this figure (maybe 1 in a million) — so I was even being generous to your point there.

To your second question, the reason why, in my simulator's earth, I imagine the chance of uniqueness to be larger is that if I'm in a simulation then there could be what I will call "NPCs." Peop...

Dagon
Note that if your prior is "it's much cheaper to simulate one person and have most of the rest of the universe be NPC/rougher-than-reality", then you being unique doesn't change it by much.  This would STILL be true if you were superficially similar to many NPCs.  
AynonymousPrsn123
True, but that wasn't my prior. My assumption was that if I'm in a simulation, there's quite a high likelihood that I would be made to be so 'lucky' as to be the highest on this specific dimension. Like a video game in which the only character has the most HP.

Satron


No, I don't think it is.

1. Imagine a scenario in which the people running the simulation decided to simulate every human on Earth as an actual observer.

In this case, Pr(I'm distinct | I'm not in a simulation) = Pr(I'm distinct | I'm in a simulation) because no special treatment has been shown to you. If you think it is very unlikely that you just happened to be distinct in a real world, then in this scenario, you ought to think that it is very unlikely that you just happened to be distinct in a simulated world.

2. I think what you are actually thinking about is a scenario where only you are an actual observer, whereas everyone else is a p-zombie (or an NPC if you wish).

But this scenario also raises a few questions. Why would the simulators make you a real observer and everyone else a p-zombie? Apparently, p-zombies are able to carry out any tasks that are useful for observation just as well as actual observers.

But even leaving that aside, it is unclear why your Pr(I'm distinct | I'm in a simulation) is so high. Computer-game-style simulations where you are "the main character" are plausible, but so are other types of simulations. For example, imagine a civilization wanting to learn about its past and running a simulation of its history. Or imagine a group of people who want to run an alt-history simulation. Perhaps they want to see what could've happened had Nazi Germany won WW2 (specifics are irrelevant here). Clearly, in the alt-history simulation, there would be no "main characters," so it would be plausible that everyone would be an actual observer (as opposed to there being only one actual observer). And let's also imagine for a second that the alt-history simulation has one hundred billion (10¹¹) observers in total. The chances of you being the only real observer in this world vs. being one of 10¹¹ real observers in the alt-history world are 1:10¹¹.

And from here, we can generalize. If it is plausible that simulators would ever run a simulation with 10¹¹ observers (or any other large number), then it would require 10¹¹ (or any other large number) simulations with only one observer to match the odds of you being in a "one observer" simulation.
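A minimal sketch of that counting argument, assuming (as the scenario above does) that both simulations actually get run and that you reason as a random sample of the actual observers:

```python
# Assumes both simulations above actually exist and that you treat yourself
# as a random sample from the set of actual observers.

observers_lone_sim = 1           # "main character" simulation: a single real observer
observers_history_sim = 10**11   # alt-history simulation: every human is a real observer

total_observers = observers_lone_sim + observers_history_sim
p_lone = observers_lone_sim / total_observers
print(f"Pr(I'm the lone observer) = 1 in {total_observers:,}")  # 1 in 100,000,000,001
```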

Other people here have responded in similar ways to you; but the problem with your argument is that my original argument could also just consider only simulations in which I am the only observer. In which case Pr(I'm distinct | I'm in a simulation)=1, not 0.5. And since there's obviously some prior probability of this simulation being true, my argument still follows.

I now think my actual error is saying Pr(I'm distinct | I'm not in a simulation)=0.0001, when in reality this probability should be 1, since I am not a random sample of all humans (i.e., SSA is...

Satron
But then this turns Pr(I'm in a simulation) into Pr(I'm in a simulation) × Pr(only simulations with one observer exist | simulations exist). It's not enough that a simulation exists with only one observer. It needs to be the case that simulations with multiple observers also don't exist. For example, if there is just one simulation with a billion observers, it heavily skews the odds in favor of you not being in a simulation with just one observer. And I am very much willing to say that Pr(I'm in a simulation) × Pr(only simulations with one observer exist | simulations exist) is going to be lower than Pr(I'm distinct | I'm not in a simulation).

That answer seems reasonable to me. However, I think that there is value in my answer as well: it works even if SSA (the "least favorable" assumption) is true.
AynonymousPrsn123
I think you are overlooking that your explanation requires BOTH SSA and SIA, but yes, I understand where you are coming from.
Satron
Can you please explain why my explanation requires SIA?

From a quick Google search: "The Self Sampling Assumption (SSA) states that we should reason as if we're a random sample from the set of actual existent observers."

The last paragraph of my original answer was talking about a scenario where the simulators have actually simulated a) a world with 1 observer AND b) a world with 10¹¹ observers. So the set of "actual existent observers" includes 1 + 10¹¹ observers. You are randomly selected from that, giving you 1:10¹¹ odds of being in the world where you are the only observer. I don't see where SIA comes into play here.
AynonymousPrsn123
This is what I was thinking: if simulations exist, we are choosing between two potentially existing scenarios; either I'm the only real person in my simulation, or there are other real people in my simulation. Your argument prioritizes the latter scenario because it contains more observers, but these are potentially existing observers, not actual observers. SIA is for potentially existing observers. I have a kind of intuition that something like my argument above is right, but tell me if that is unclear.

And note: one potential problem with your reasoning is that if we take it to its logical extreme, it would be 100% certain that we are living in a simulation with infinite invisible observers, because infinity dominates all the finite possibilities.
Satron
But the thing is that there is a matter of fact about whether there are other observers in our world if it is simulated. Either you are the only observer or there are other observers, but one of them is true. Not just potentially true, but actually true.

The same is true of my last paragraph in the original answer (although perhaps I could've used clearer wording). If, as a matter of fact, there actually exist 10¹¹ + 1 observers, then you are more likely to be in the 10¹¹ group as per SSA. We don't know whether there are actually 10¹¹ + 1 observers, but that is merely an epistemic gap.
AynonymousPrsn123
You are describing the SIA assumption to a T.
Satron
The way I understand it, the main difference between SIA and SSA is the fact that in SIA "I" may fail to exist. To illustrate what I mean, I will have to refer to "souls," just because it's the easiest thing I can come up with.

SSA: There are 10¹¹ + 1 observers and 10¹¹ + 1 souls. Each soul gets randomly assigned to an observer. One of the souls is you. The probability of you existing is 1. You cannot fail to exist.

SIA: There are 10¹¹ + 1 observers and a very large (much larger than 10¹¹ + 1) number of souls. Let's call this number N. Each soul gets assigned to an observer. One of the souls is you. However, in this scenario, you may fail to exist. The probability of you existing is (10¹¹ + 1)/N.
AynonymousPrsn123
This is an interesting observation which may well be true, I'm not sure, but the more intuitive difference is that SSA is about actually existing observers, while SIA is about potentially existing observers. In other words, if you are reasoning about possible realities in the so-called "multiverse of possibilities," then you are using SIA. Whereas if you are only considering a single reality (e.g., the non-simulated world) and you select a reference class from that reality (e.g., humans), you may choose to use SSA to say that you are a random observer from that class (e.g., a random human in human history).
Satron
I guess the word "reality" is kind of ambiguous, and maybe that's why we've been disagreeing for so long. For example, imagine a scenario where we have 1) a non-simulated base world (let's say 10¹² observers in it) AND 2) a simulated world with 10¹¹ observers AND 3) a simulated world with 1 observer. All three worlds actually, concretely exist. People from world #1 just decided to run two simulations (#2 and #3). Surely, in this scenario, as per SSA, I can say that I am a randomly selected observer from the set of all observers. As far as I can see, this "set of all observers" would include 10¹² + 10¹¹ + 1 observers, because all of these observers actually exist, and I could've been born as any one of them.

Edit 1: I noticed that you edited one of your replies to include this:

I don't actually think this is true. My reasoning only really says that we are most likely to exist in the world with the most observers as compared to other actual worlds, not other possible worlds. The most you can get out of this is that, conditional on a simulation with infinite observers existing, we are most likely in that simulation. However, because of the weirdness of actual infinity, because of the abysmal computational costs (it's one thing to simulate billions of observers and another thing to simulate an infinity of observers), and because it is probably physically impossible, I put an incredibly low prior on a simulation with infinite observers actually existing. And if it doesn't exist, then we are not in it.

Edit 2: You don't even need to posit a 10¹¹ simulation for it to be unlikely that you are in an "only one observer" simulation. It is enough that the non-simulated world has multiple observers. To illustrate what I mean, imagine that a society in a non-simulated world with 10¹² observers decides to make a simulation with only 1 observer. The odds are overwhelming that you'd be among the 10¹² mundane, non-distinct observers in the non-simulated

Noosphere89


The answer is yes, trivially, because under a wide enough conception of computation, basically everything is simulatable, so everything is evidence for the simulation hypothesis because it includes effectively everything.

It will not help you infer anything else though.

More below:

http://www.amirrorclear.net/academic/ideas/simulation/index.html

https://arxiv.org/abs/1806.08747

plex


In a large universe, you, and everyone else, exists both in and not in simulations. That is: The pattern you identify with exists in both basement reality (in many places) and also in simulations (in many places).

There is a question of what proportion of the you-patterns exist in basement reality, but it has a slightly different flavour, I think. It seems to trigger some deep evolved patterns (around fakeness?) less than the kind of existential fear that simulations with the naive conception of identity sometimes bring up.

But to answer that question: Maybe simulators tend to prefer "flat" simulations, where the entire system is simulated evenly to avoid divergence from the physical system it's trying to gather information about. Maybe your unique characteristic is the kind of thing that makes you more likely to be simulated in higher fidelity than the average human, and simulators prefer uneven simulations. Or maybe it's unusual but not particularly relevant for tactical simulations of what emerges from the intelligence explosion (which is probably where the majority of the simulation compute goes). 

But, either way, that update is probably pretty small compared to the background high rate of simulations of "humans around at the time of the singularity". Bostrom's paper covers the general argument for simulations generally outnumbering basement reality due to ancestor simulations: https://simulation-argument.com/simulation.pdf

However, even granting all of the background assumptions that go into this: Not all observers who are you live in a simulation. You exist in both types of places. Simulations don't reduce your weight in the basement reality; they can only give you more places in which you exist.

Why are you so sure it's a computer simulation? How do you know it's not a drug trip? A fever dream? An unfathomable organism plugging its senses into some kind of pseudo-random pattern generator from which it (within its particular phenomenology) hallucinates or infers the experience of OP?

How could we falsify the simulation hypothesis?

plex
From the way things sure seem to look, the universe is very big, and has room for lots of computations later on. A bunch of plausible rollouts involve some small fraction of those very large resources going on simulations. You can, if you want, abandon all epistemic hope and have a very very wide prior. Maybe we're totally wrong about everything! Maybe we're Boltzmann brains! But that's not super informative or helpful, so we look around us and extrapolate assuming that's a reasonable thing to do, because we ain't got anything else we can do. Simulations are very compatible with that. The other examples aren't so much, if you look up close and have some model of what those things are like and do.
CstineSublime
I don't understand how the assumption that we are living in a simulation which is so convincing as to be indistinguishable from a non-simulation is any more useful than the Boltzmann brain, or a brain in a vat, or a psychedelic trip, or that we're all just the fantasy of the boy at the end of St. Elsewhere: since, by virtue of being a convincing simulation, it has no characteristic which knowingly distinguishes it from a non-simulation. In fact, some of those others would be more useful if true, because they would point to phenomena which would better explain the world.

How are the other examples not compatible? What fact could only necessarily be true in a simulation but not in a psychedelically induced hallucination? Or a fever dream? What do you mean by "look up close"? Close to what, exactly?
17 comments

You make the assumption that half of all simulated observers are distinctively unique in an objectively measurable property within simulated worlds having on the order of billions of entities in the same class. Presumably you also mean a property that requires very few bits to specify - such that, if you asked a bunch of people for their lists of properties that someone could be "most extreme" in, and entropy-coded the results, the property in question would be in the list and correspond to very few bits (say, 5 or fewer).

That seems like a massive overestimate, and is responsible for essentially all of your posterior probability ratio.

I give this hypothesis very much lower weight.
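As a rough illustration of the bit-counting above: under entropy coding, a property that a fraction p of people would name costs about -log2(p) bits to specify, so "5 or fewer bits" corresponds to properties named by roughly 1 person in 32 or more (the frequencies below are made up for illustration):

```python
import math

# Under entropy coding, a property named by a fraction p of people's lists
# costs about -log2(p) bits to specify. Frequencies here are illustrative only.
for p in (1/2, 1/8, 1/32, 1/1000):
    print(f"named with frequency {p:g} -> about {-math.log2(p):.1f} bits")
# 1/32 -> 5 bits, so "5 or fewer bits" means a fairly commonly named property.
```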

That makes sense. But to be clear, it makes intuitive sense to me that the simulators would want to make their observers as 'lucky' as I am, so I assigned 0.5 probability to this hypothesis. Now I realize this is not the same as Pr(I'm distinct | I'm in a simulation), since there's some weird anthropic reasoning going on: only one side of this probability has billions of observers. But what would be the correct way of approaching this problem? Should I have divided 0.5 by 8 billion? That seems too much. What is the correct mathematical approach?

Think MMORPGs - what are the chances of a simulation being like that vs. a simulation with just a few special beings and the rest NPCs? Even if you say it's 50/50, then given that MMORPG-style simulations have billions of observers and "observers are special" ones only have a few, an overwhelming majority of simulated observers are actually not that special in their simulations.

Thank you Anon User. I thought a little more about the question and I now think it's basically the Presumptuous Philosopher problem in disguise. Consider the following two theories that are equally likely:

T1 : I'm the only real observer

T2: I'm not the only real observer

For SIA, the ratio T1 : T2 is 1 : (8 billion / 10,000) = 1 : 800,000, so indeed, as you said above, most copies of myself are not simulated.

For SSA, the ratio is instead 10,000 : 1, so in most universes in the "multiverse of possibilities", I am the only real observer.
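Here is the arithmetic behind those two ratios as a rough sketch (using the 1-in-10,000 distinctiveness figure and ~8 billion humans from above; this is just the stated calculation, not a settled anthropic treatment):

```python
# The arithmetic behind the two ratios above, using the post's rough figures:
# ~8 billion humans and distinctiveness of ~1 in 10,000.

n_humans = 8_000_000_000
p_distinct_if_everyone_real = 1 / 10_000   # likelihood of "I'm distinct" under T2
p_distinct_if_only_me = 1.0                # under T1 I'm distinct by construction

# SSA: compare likelihoods only
ssa_odds_T1_over_T2 = p_distinct_if_only_me / p_distinct_if_everyone_real

# SIA: additionally weight each theory by its number of observers
sia_odds_T2_over_T1 = (n_humans * p_distinct_if_everyone_real) / (1 * p_distinct_if_only_me)

print(f"SSA: T1 favored {ssa_odds_T1_over_T2:,.0f} : 1")   # 10,000 : 1
print(f"SIA: T2 favored {sia_odds_T2_over_T1:,.0f} : 1")   # 800,000 : 1
```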

So it's just a typical Presumptuous Philosopher problem. Does this sound right to you?

There is no correct mathematical treatment, since this is a disagreement about models of reality. Your prior could be correct if reality is one way, though I think it's very unlikely.

I will point out though that for your reasoning to be correct, you must literally have Main Character Syndrome, believing that the vast majority of other apparently conscious humans in such worlds as ours are actually NPCs with no consciousness.

I'm not sure why you think that simulators will be sparse with conscious entities. If consciousness is possible at all for simulated beings, it seems likely that it's not some "special sauce" that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves. So in my view, an exceptionally tall human won't be given "special sauce" to make them An Observer, but all sufficiently non-brain-damaged simulated humans will be observers (or none of them).

It might be different if the medically and behaviourally similar (within simulation) "extremest" and "other" humans are not actually structurally similar (in the system underlying the simulation), but are actually very different types of entities that are just designed to appear almost identical from examination within the simulation. There may well be such types of simulations, but that seems like a highly complex additional hypothesis, not the default.

I suspect it's quite possible to give a mathematical treatment for this question; I just don't know what that treatment is. I suspect it has to do with anthropics. Can't anthropics deal with different potential models of reality?

The second part of your answer isn't convincing to me, because I feel like it assumes we can understand the simulators and their motivations, when in reality we cannot (these may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn't apply). But anyway, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I'm not particularly concerned with it.

The largest part of my second part is "If consciousness is possible at all for simulated beings, it seems likely that it's not some "special sauce" that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves." This mostly isn't about simulators and their motivations, but about the nature of consciousness in simulated entities in general.

On the other hand your argument is about simulators and their motivations, in that you believe they largely both can and will apply "special sauce" to simulated entities that are the most extreme in some human-obvious way and almost never to the others.

I don't think we have any qualitative disagreements, just about what fraction of classes of simulated entities may or may not have consciousness.

Yes okay fair enough. I'm not certain about your claim in quotes, but neither am I certain about my claim which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.

But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning to deduce which universe I'm likely in / how exactly to update my understanding.

I'm afraid I don't understand a lot of your assumptions. For example, why you think you being an example of any given superlative is somehow a falsifying observation of reality - especially if other people/objects don't exist in uniform distributions. So it's not like a video game where every other NPC has exactly 10 HP, but through the use of a cheat code you've got 1000. And even so, that data from within the 'simulation' as you call it is not proof of something 'without'. I think the only evidence of that would be if you find yourself in a situation like Daffy Duck, the walls of reality closing in on you - meeting your maker directly.

I also wonder: how much Kant or Plato have you read, and did you do any research, even on the SEP, before you asked? I feel like anyone who has questions about the 'simulation' would be best served by reading the philosophers who have written eloquently on how we come to represent the world and who really formed the concepts and language we use.

Or you could read (Tractatus) Wittgenstein and dismiss all metaphysics altogether as nonsense - literally: that which cannot be sensed and therefore mustn't be spoken about.

My argument didn't even make those assumptions. Nothing in my argument "falsified" reality, nor did I "prove" the existence of something outside my immediate senses. It was merely a probabilistic, anthropic argument. Are you familiar with anthropics? I want to hear from someone who knows anthropics well.

Indeed, your video game scenario is not even really qualitatively different from my own situation. Because if I were born with 1000 HP, you could still argue "data from within the 'simulation'...is not proof of something 'without'." And you could update your "scientific" understanding of the distribution of HP to account for the fact that precisely one character has 1000 HP.

The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I'm not in a video game) < Pr(I'm a superlative | I'm not in a simulation), though both probabilities are very low.

I never said "falsified" in that reply - I said fake - a simulation is by definition fake. That is the meaning of the word in the general sense. If I make an indistinguishable replica of the Mona Lisa and pass it off as real, I have made a fake. If some kind of demiurge makes a simulation and passes it off as 'reality' - it is a fake.

I've never heard of "anthropics," but I am familiar with the Anthropic Principle and its antecedents in pre-Socratic philosophers like Heraclitus, who are the first known record of the concept. Have you heard of Kant and German Idealism?

Indeed, your video game scenario is not even really qualitatively different from my own situation.

How? To take your example of being the tallest person: if all human beings were exactly 6 feet tall, and you were 600 feet tall, then you're saying that would be proof that you are in fact in a simulation. That might suggest you are in fact extremely special and unique, if you want to believe in a solipsistic, Truman Show-style world.

if I were born with 1000 HP, you could still argue "data from within the 'simulation'...is not proof of something 'without'."

Yes. Exactly. I could. Although it would intuitively be less persuasive. But there aren't any 600-foot-tall people in a world of otherwise uniform height.

The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I'm not in a video game) < Pr(I'm a superlative | I'm not in a simulation), though both probabilities are very low

I don't understand where you're pulling that quantitative difference. Can you elaborate more?

I don't appreciate your tone sir! Anyway, I've now realized that this is a variant on the standard Presumptuous Philosopher problem, which you can read about here if you are mathematically inclined: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u#1__Proportion_of_potential_observers__SIA

I didn't think there was anything off with my tone. But please don't consider my inquisitiveness and lack of understanding anything other than a genuine desire to fill the gaps in my reasoning.

Again, what is your understanding of Kant and German Idealism and why do you think that the dualism presented in Kantian metaphysics is insufficient to answer your question? What misgivings or where does it leave you unsatisfied and why?

I'm not immediately sure how the Presumptuous Philosopher example applies here: that is saying that there's theory 1, which has x observers, and theory 2, which has x times x observers. However, "the world is a simulation" is but one theory; there are potentially infinite other theories, some as of yet unfathomed, and others still completely unfathomable (hence the project of metaphysics and the very paradox of Idealism).
Are you saying the presumptuous philosopher would say: "there's clearly many more theories that aren't simulation than just simulation, so we can assume it's not a simulation"
I don't think that holds, because that assumes a uniform probability distribution between all theories.
Are you prepared to make that assumption?

You are misinterpreting the PP example. Consider the following two theories:

T1 : I'm the only one that exists, everyone else is an NPC

T2 : Everything is as expected, I'm not simulated. 

Suppose for simplicity that both theories are equally likely. (This assumption really doesn't matter.) If I define Presumptuous Philosopher = distinct human like myself = 1 in 10,000 humans, then I get that in most universes I am indeed the only one, but regardless, most copies of myself are not simulated.

I'm still not sure how it is related.

The implicit fear is that you are in a world which is manufactured because you, the presumed observer, are so unique, right? Because you're freakishly tall or whatever.

However, as per the anthropic principle, any universe that humans exist in, and any universe that an observer exists in, is a universe where it is possible for them to exist. Or to put it another way: the rules of that universe are such that the observer doesn't defy the rules of that universe. Right?

So freakishly tall or average height: by the anthropic principle you are a possibility within that universe. (but, you are not the sole possibility in that universe - other observers are possible, non-human intelligent lifeforms aren't impossible just because humans are)

Why should we entertain the possibility that you are not possible within this universe, and therefore that some sort of demiurge or AGI or whatever watchmaker-stand-in you want for this thought experiment has crafted a simulation just for the observer?

How do we get that to the probability argument?

I don't understand. We should entertain the possibility because it is clearly possible (since it's unfalsifiable), because I care about it, because it can dictate my actions, etc. And the probability argument follows after specifying a reference class, such as "being distinct" or "being a presumptuous philosopher."

We should entertain the possibility because it is clearly possible (since it's unfalsifiable), because I care about it, because it can dictate my actions, etc.


What makes you care about it? What makes it persuasive to you? What decisions would you make differently and what tangible results within this presumed simulation would you expect to see differently pursuant to proving this? (How do you expect your belief in the simulation to pay rent in anticipated experiences?)

Also, the general consensus in rationality, or at least broadly in science, is that if something is unfalsifiable then it must not be entertained.
 

And the probability argument follows after specifying a reference class, such as "being distinct" or "being a presumptuous philosopher."

Say more? I don't see how they are the same reference class.