I assume simulated observers are quite likely to be 'special' or 'distinct' with respect to the class of other entities in their simulated world that appear to be observers. (Though perhaps this assumption is precisely my error.
Yes, it is your main error. Think about how justified this assumption is given your knowledge state. How much evidence do you actually have? Have you checked many simulations before generalizing that principle? Or are you just speculating from total ignorance?
Should I be applying SIA here to argue that this latter probability is much smaller? Because simulated worlds in which the other observers are real and not 'illusory' would have a low probability of distinctiveness and far more observers? I don't know if this is sound. Should I be using SSA instead here to make an entirely separate argument?)
For your own sake, please don't. Both SIA and SSA are also unjustified assumptions out of nowhere and lead to more counterintuitive conclusions.
Instead consider these two problems.
Problem 1:
There is a grey bag filled in equal proportion with balls of a hundred distinct colors. And there is a blue bag, half of whose balls are blue. Someone has put their hand into one of the bags, picked a random ball from it, and given it to you. The ball happened to be blue. What are the odds that it's from the blue bag?
Problem 2:
There is a grey bag with some balls. And there is a blue bag with some balls. Someone has put their hand into one of the bags, picked a random ball from it, and given it to you. The ball happened to be blue. What are the odds that it's from the blue bag?
Are you justified in believing that Problem 2 has the same answer as Problem 1? That you can simply assume that half of the balls in the blue bag are blue? Not after you went and checked a hundred random blue bags and found that in all of them half the balls were blue, but purely a priori? And likewise with the grey bag. Where would these assumptions be coming from?
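For concreteness, here is a minimal sketch of the Bayes calculation for Problem 1, assuming the two bags were a priori equally likely to be the one drawn from (an assumption the problem statement leaves implicit):

```python
# Problem 1: probability the blue ball came from the blue bag.
# Assumes a 50/50 prior over which bag was chosen (not stated in the problem).

prior_blue_bag = 0.5
prior_grey_bag = 0.5

p_blue_given_blue_bag = 0.5       # half the balls in the blue bag are blue
p_blue_given_grey_bag = 1 / 100   # one of a hundred equally common colors

posterior = (prior_blue_bag * p_blue_given_blue_bag) / (
    prior_blue_bag * p_blue_given_blue_bag + prior_grey_bag * p_blue_given_grey_bag
)
print(f"P(blue bag | blue ball) = {posterior:.3f}")  # ≈ 0.980
```

No analogous calculation exists for Problem 2: the two likelihoods are exactly the quantities the problem leaves unspecified.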
You can come up with some plausible-sounding just-so story: that the people who were filling the bags felt the urge to put blue balls in the blue bag. But what about the opposite just-so story, where people were disincentivized from putting blue balls in the blue bag? Or where people paid no attention to the color of the bag? Or all the other possible just-so stories? Why do you prioritize this one in particular?
Maybe you imagine yourself tasked with filling two bags with balls of different colors. And when you inspect your thinking process in such a situation, you feel the urge to put a lot of blue balls in the blue bag.
But why would the way you'd fill the bags be entangled with the actual causal process that filled these bags in the general case? You don't know that the bags were filled by people with your sensibilities. You don't know that they were filled by people to begin with.
Or spin it the other way. Suppose you could systematically produce correct reasoning by simply assuming things like that. What would be the point of gathering evidence then? Why spend extra energy checking the way blue bags and grey bags are organized if you can confidently deduce it a priori?
But, on second thought, why are you confident that the way I'd fill the bags is not "entangled with the actual causal process that filled these bags in the general case"? It seems likely that my sensibilities reflect, at least in some manner, the sensibilities of my creator, if such a creator exists.
Actually, in addition, my argument still works if we only consider simulations in which I'm the only human and I'm distinct (on my aforementioned axis) from other human-seeming entities. So the 0.5 probability becomes identically 1, and I sidestep your argument. So...
For what it's worth, I do think observers who observe themselves to be highly unique on important axes should rationally increase their credence in simulation hypotheses.
Everyone is unique, given enough dimensions of measurement. Humans as a species are unique, as far as we can tell. "unique on a common, easy metric" is ... rare, but there are still lots of metrics to choose from, so there are likely many who can say that. If you're one in a million, there are 5 of you in Manhattan and 1400 of you in China.
The problem with anthropic calculations is the same as any singleton observation - your prior is going to be the main determinant of the posterior. The problem with this specific calculation is why in the simulator's green earth you'd think the chance of uniqueness on this dimension is greater if you're simulated than if you're not. If they can simulate you, they can simulate billions or trillions, right?
I don't think anything observable is useful evidence for or against simulation.
Good questions. Firstly, let's just take as an assumption that I'm very distinct — not just unique. In my calculation, I set Pr(I'm distinct | I'm not in a simulation) = 0.0001 to account for this (1 in 10,000 people), but honestly I think the real probability is much, much lower than this figure (maybe 1 in a million) — so I was even being generous to your point there.
To your second question, the reason why, in my simulator's earth, I imagine the chance of uniqueness to be larger is that if I'm in a simulation then there could be what I will call "NPCs." Peop...
No, I don't think it is.
In this case, Pr(I'm distinct | I'm not in a simulation) = Pr(I'm distinct | I'm in a simulation) because no special treatment has been shown to you. If you think it is very unlikely that you just happened to be distinct in a real world, then in this scenario, you ought to think that it is very unlikely that you just happened to be distinct in a simulated world.
But this scenario also raises a few questions. Why would the simulators make you a real observer and everyone else a p-zombie? Apparently, p-zombies are able to carry out any tasks that are useful for observation just as well as actual observers.
But even leaving that aside, it is unclear why your Pr(I'm distinct | I'm in a simulation) is so high. Computer-game-style simulations where you are "the main character" are plausible, but so are other types of simulations. For example, imagine a civilization wanting to learn about its past and running a simulation of its history. Or imagine a group of people who want to run an alt-history simulation. Perhaps they want to see what could've happened had Nazi Germany won WW2 (the specifics are irrelevant here). Clearly, in the alt-history simulation, there would be no "main characters," so it is plausible that everyone there is an actual observer (as opposed to there being only one actual observer). And let's also imagine for a second that the alt-history simulation has one hundred billion (10¹¹) observers in total. The odds of you being the only real observer in this world vs. being one of the hundred billion real observers in the alt-history world are 1:10¹¹.
And from here, we can generalize. If it is plausible that simulators would ever run a simulation with 10¹¹ observers (or any other large number), then it would require 10¹¹ (or that same large number of) one-observer simulations to match the odds of you being in a "one observer" simulation.
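A rough sketch of that bookkeeping, with made-up counts for each type of simulation (every number here is an illustrative assumption):

```python
# Odds of being the lone observer, assuming n_solo one-observer simulations
# and n_big simulations with 1e11 real observers each (all counts made up).

n_solo = 1
n_big = 1
observers_per_big = 10**11

solo_observers = n_solo * 1
big_observers = n_big * observers_per_big

# Treating yourself as a random draw from all simulated observers:
p_solo = solo_observers / (solo_observers + big_observers)
print(f"P(I'm in a one-observer simulation) ≈ {p_solo:.1e}")  # ~1e-11

# For even odds, the number of one-observer simulations has to grow to
# roughly the observer count of a single big simulation:
print(f"One-observer simulations needed for even odds: {observers_per_big:.0e}")
```

The point is only that the count of one-observer simulations has to scale with the observer count of the big ones, not that any of these counts are actually known.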
Other people here have responded in similar ways to you; but the problem with your argument is that my original argument could also just consider only simulations in which I am the only observer. In which case Pr(I'm distinct | I'm in a simulation)=1, not 0.5. And since there's obviously some prior probability of this simulation being true, my argument still follows.
I now think my actual error is saying Pr(I'm distinct | I'm not in a simulation)=0.0001, when in reality this probability should be 1, since I am not a random sample of all humans (i.e., SSA is...
The answer is yes, trivially, because under a wide enough conception of computation, basically everything is simulatable, so everything is evidence for the simulation hypothesis because it includes effectively everything.
It will not help you infer anything else though.
More below:
http://www.amirrorclear.net/academic/ideas/simulation/index.html
In a large universe, you, and everyone else, exist both in and outside of simulations. That is: the pattern you identify with exists both in basement reality (in many places) and in simulations (in many places).
There is a question of what proportion of the you-patterns exist in basement reality, but it has a slightly different flavour, I think. It seems to trigger some deep evolved patterns (around fakeness?) less than the kind of existential fear that simulations, under the naive conception of identity, sometimes bring up.
But to answer that question: Maybe simulators tend to prefer "flat" simulations, where the entire system is simulated evenly to avoid divergence from the physical system it's trying to gather information about. Maybe your unique characteristic is the kind of thing that makes you more likely to be simulated in higher fidelity than the average human, and simulators prefer uneven simulations. Or maybe it's unusual but not particularly relevant for tactical simulations of what emerges from the intelligence explosion (which is probably where the majority of the simulation compute goes).
But, either way, that update is probably pretty small compared to the background high rate of simulations of "humans around at the time of the singularity". Bostrom's paper covers the general argument for simulations generally outnumbering basement reality due to ancestor simulations: https://simulation-argument.com/simulation.pdf
However, even granting all of the background assumptions that go into this: Not all observers who are you live in a simulation. You exist in both types of places. Simulations don't reduce your weight in the basement reality, they can only give you more places which you exist.
Why are you so sure it's a computer simulation? How do you know it's not a drug trip? A fever dream? An unfathomable organism plugging its senses into some kind of pseudo-random pattern generator, from which it (with its own particular phenomenology) hallucinates or infers the experience of OP?
How could we falsify the simulation hypothesis?
You make the assumption that half of all simulated observers are distinctively unique in an objectively measurable property, within simulated worlds having on the order of billions of entities in the same class. Presumably you also mean a property that requires very few bits to specify - such that, if you asked a bunch of people for their lists of properties someone could be "most extreme" in, and entropy-coded the results, the property in question would appear in the list and correspond to very few bits (say, 5 or fewer).
That seems like a massive overestimate, and is responsible for essentially all of your posterior probability ratio.
I give this hypothesis very much lower weight.
That makes sense. But to be clear, it makes intuitive sense to me that the simulators would want to make their observers as 'lucky' as I am, so I assigned 0.5 probability to this hypothesis. Now I realize this is not the same as Pr(I'm distinct | I'm in a simulation), since there's some weird anthropic reasoning going on given that only one side of this probability has billions of observers. But what would be the correct way of approaching this problem? Should I have divided 0.5 by 8 billion? That seems too much. What is the correct mathematical approach?
Think MMORPGs: what are the chances of the simulation being like that vs. a simulation with just a few special beings and the rest NPCs? Even if you say it's 50/50, then given that MMORPG-style simulations have billions of observers and "observers are special" ones have only a few, an overwhelming majority of simulated observers are actually not that special in their simulations.
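Roughly, the arithmetic behind that claim looks like this (the 50/50 split and the per-simulation observer counts are illustrative assumptions, not estimates):

```python
# Expected share of simulated observers who are "special", assuming a 50/50
# split between MMORPG-style simulations (billions of ordinary observers)
# and "few special beings plus NPCs" simulations (numbers are illustrative).

p_mmorpg, p_special_style = 0.5, 0.5
observers_per_mmorpg = 8 * 10**9   # everyone in the world is a real observer
observers_per_special = 10         # only a handful of special observers

special = p_special_style * observers_per_special
ordinary = p_mmorpg * observers_per_mmorpg

fraction_special = special / (special + ordinary)
print(f"Fraction of simulated observers who are special ≈ {fraction_special:.1e}")
# ≈ 1e-9, i.e. almost all simulated observers are ordinary ones.
```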
Thank you Anon User. I thought a little more about the question and I now think it's basically the Presumptuous Philosopher problem in disguise. Consider the following two theories that are equally likely:
T1 : I'm the only real observer
T2: I'm not the only real observer
For SIA, the ratio is 1:(8 billion / 10,000) = 1:800,000, so indeed, as you said above, most copies of myself are not simulated.
For SSA, the ratio is instead 10,000:1, so in most universes in the "multiverse of possibilities," I am the only real observer.
So it's just a typical Presumptuous Philosopher problem. Does this sound right to you?
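A minimal sketch of where those two ratios come from, taking the 1-in-10,000 distinctness figure and the 8 billion observers under T2 from the earlier comments (equal priors on T1 and T2 are assumed):

```python
# Likelihood ratios for T1 ("I'm the only real observer") vs. T2, given that
# I observe myself to be distinct. Uses 1-in-10,000 distinctness, 8 billion
# observers under T2, and equal priors on the two theories.

P_DISTINCT = 1 / 10_000
OBSERVERS_T1 = 1
OBSERVERS_T2 = 8 * 10**9

# SIA: weight each theory by the number of observers sharing my epistemic
# situation (i.e., distinct observers).
distinct_t1 = OBSERVERS_T1                # I am distinct by assumption
distinct_t2 = OBSERVERS_T2 * P_DISTINCT   # 800,000 distinct observers
print(f"SIA ratio T1:T2 = 1:{distinct_t2 / distinct_t1:,.0f}")  # 1:800,000

# SSA: weight each theory by the chance that a randomly sampled observer
# from its reference class makes my observation (is distinct).
p_obs_t1 = 1.0
p_obs_t2 = P_DISTINCT
print(f"SSA ratio T1:T2 = {p_obs_t1 / p_obs_t2:,.0f}:1")        # 10,000:1
```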
There is no correct mathematical treatment, since this is a disagreement about models of reality. Your prior could be correct if reality is one way, though I think it's very unlikely.
I will point out though that for your reasoning to be correct, you must literally have Main Character Syndrome, believing that the vast majority of other apparently conscious humans in such worlds as ours are actually NPCs with no consciousness.
I'm not sure why you think that simulators will be sparse with conscious entities. If consciousness is possible at all for simulated beings, it seems likely that it's not some "special sauce" that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves. So in my view, an exceptionally tall human won't be given "special sauce" to make them An Observer, but all sufficiently non-brain-damaged simulated humans will be observers (or none of them).
It might be different if the medically and behaviourally similar (within simulation) "extremest" and "other" humans are not actually structurally similar (in the system underlying the simulation), but are actually very different types of entities that are just designed to appear almost identical from examination within the simulation. There may well be such types of simulations, but that seems like a highly complex additional hypothesis, not the default.
I suspect it's quite possible to give a mathematical treatment for this question, I just don't know what that treatment is. I suspect it has to do with anthropics. Can't anthropics deal with different potential models of reality?
The second part of your answer isn't convincing to me, because I feel like it assumes we can understand the simulators and their motivations, when in reality we cannot (these may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn't apply). But anyway, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I'm not particularly concerned with it.
The largest part of my second part is "If consciousness is possible at all for simulated beings, it seems likely that it's not some "special sauce" that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves." This mostly isn't about simulators and their motivations, but about the nature of consciousness in simulated entities in general.
On the other hand your argument is about simulators and their motivations, in that you believe they largely both can and will apply "special sauce" to simulated entities that are the most extreme in some human-obvious way and almost never to the others.
I don't think we have any qualitative disagreements, just about what fraction of classes of simulated entities may or may not have consciousness.
Yes okay fair enough. I'm not certain about your claim in quotes, but neither am I certain about my claim which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.
But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning to deduce which universe I'm likely in / how exactly to update my understanding.
I'm afraid I don't understand a lot of your assumptions. For example, why you think your being an example of any given superlative is somehow a falsifying observation of reality - especially if other people/objects don't exist in uniform distributions. So it's not like a video game where every other NPC has exactly 10 HP, but through use of a cheat code you've got 1000. And even so, that data from within the 'simulation', as you call it, is not proof of something 'without'. I think the only evidence of that would be if you found yourself in a situation like Daffy Duck, the walls of reality closing in on you - meeting your maker directly.
I also wonder: how much Kant or Plato have you read, and did you do any research, even on the SEP, before you asked? I feel like anyone who has questions about the 'simulation' would be best served by reading the philosophers who have written eloquently on how we come to represent the world and who really formed the concepts and language we use.
Or you could read (Tractatus) Wittgenstein and dismiss all metaphysics altogether as nonsense - literally: that which cannot be sensed and therefore mustn't be spoken about.
My argument didn't even make those assumptions. Nothing in my argument "falsified" reality, nor did I "prove" the existence of something outside my immediate senses. It was merely a probabilistic, anthropic argument. Are you familiar with anthropics? I want to hear from someone who knows anthropics well.
Indeed, your video game scenario is not even really qualitatively different from my own situation. Because if I were born with 1000 HP, you could still argue "data from within the 'simulation'...is not proof of something 'without'." And you could update your "scientific" understanding of the distribution of HP to account for the fact that precisely one character has 1000 HP.
The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I'm not in a video game) < Pr(I'm a superlative | I'm not in a simulation), though both probabilities are very low.
I never said "falsified" in that reply - I said fake - a simulation is by definition fake. That is the meaning of the word in the general sense. If I make an indistinguishable replica of the Mona Lisa and pass it off as real, I have made a fake. If some kind of demiurge makes a simulation and passes it off as 'reality' - it is a fake.
I've never heard of "anthropics," but I am familiar with the Anthropic Principle and its antecedents in pre-Socratic philosophers like Heraclitus, who provide the first known record of the concept. Have you heard of Kant and German Idealism?
Indeed, your video game scenario is not even really qualitatively different from my own situation.
How? To take your example of being the tallest person: if all human beings were exactly 6 feet tall and you were 600 feet tall, then you're saying that would be proof that you are in fact in a simulation. That might suggest you are in fact extremely special and unique, if you want to believe in a solipsistic, Truman Show-style world.
if I were born with 1000 HP, you could still argue "data from within the 'simulation'...is not proof of something 'without'."
Yes. Exactly. I could. Although it would intuitively be less persuasive. But there aren't any 600 feet tall people in a world of otherwise uniform height.
The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I'm not in a video game) < Pr(I'm a superlative | I'm not in a simulation), though both probabilities are very low
I don't understand where you're pulling that quantitative difference. Can you elaborate more?
I don't appreciate your tone, sir! Anyway, I've now realized that this is a variant of the standard Presumptuous Philosopher problem, which you can read about here if you are mathematically inclined: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u#1__Proportion_of_potential_observers__SIA
I didn't think there was anything off with my tone. But please don't consider my inquisitiveness and lack of understanding anything other than a genuine desire to fill the gaps in my reasoning.
Again, what is your understanding of Kant and German Idealism and why do you think that the dualism presented in Kantian metaphysics is insufficient to answer your question? What misgivings or where does it leave you unsatisfied and why?
I'm not immediately sure how the Presumptuous Philosopher example applies here: it says there's theory 1, which has x observers, and theory 2, which has x times x observers. However, "the world is a simulation" is but one theory; there are potentially infinite other theories, some as of yet unfathomed, and others still completely unfathomable (hence the project of metaphysics and the very paradox of Idealism).
Are you saying the presumptuous philosopher would say: "there are clearly many more theories that aren't simulation than just simulation, so we can assume it's not a simulation"?
I don't think that holds, because that assumes a uniform probability distribution between all theories.
Are you prepared to make that assumption?
You are misinterpreting the PP example. Consider the following two theories:
T1 : I'm the only one that exists, everyone else is an NPC
T2 : Everything is as expected, I'm not simulated.
Suppose for simplicity that both theories are equally likely. (This assumption really doesn't matter.) If I define Presumptuous Philosopher = distinct human like myself = 1 in 10,000 humans, then I get that in most universes I am indeed the only one (the SSA answer), but regardless, most copies of myself are not simulated (the SIA answer).
I'm still not sure how it is related.
The implicit fear is that you are in a world which is manufactured because you, the presumed observer are so unique, right? Because you're freakishly tall or whatever.
However, as per the anthropic principle, any universe that humans exist in, and any universe that an observer exists in, is a universe where it is possible for them to exist. Or to put it another way: the rules of that universe are such that the observer doesn't defy the rules of that universe. Right?
So, freakishly tall or average height: by the anthropic principle, you are a possibility within that universe. (But you are not the sole possibility in that universe - other observers are possible, and non-human intelligent lifeforms aren't impossible just because humans are possible.)
Why should we entertain the possibility that you are not possible within this universe, and therefore that some sort of demiurge or AGI or whatever watchmaker-stand-in you want for this thought experiment has crafted a simulation just for the observer?
How do we get that to the probability argument?
I don't understand. We should entertain the possibility because it is clearly possible (since it's unfalsifiable), because I care about it, because it can dictate my actions, etc. And the probability argument follows after specifying a reference class, such as "being distinct" or "being a presumptuous philosopher."
We should entertain the possibility because it is clearly possible (since it's unfalsifiable), because I care about it, because it can dictate my actions, etc.
What makes you care about it? What makes it persuasive to you? What decisions would you make differently and what tangible results within this presumed simulation would you expect to see differently pursuant to proving this? (How do you expect your belief in the simulation to pay rent in anticipated experiences?)
Also, the general consensus in rationality, or at least broadly in science, is that if something is unfalsifiable then it must not be entertained.
And the probability argument follows after specifying a reference class, such as "being distinct" or "being a presumptuous philosopher."
Say more? I don't see how they are the same reference class.
Edit 2: I'm now fairly confident that this is just the Presumptuous Philosopher problem in disguise, which is explained clearly in Section 6.1 here: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u
This is my first post ever on LessWrong. Let me explain my problem.
I was born in a unique situation — I shall omit the details of exactly what this situation was, but for my argument's sake, assume I was born as the tallest person in the entire world. Or instead suppose that I was born into the richest family in the world. In other words, take as an assumption that I was born into a situation entirely unique relative to all other humans on an easily measurable dimension such as height or wealth (i.e., not some niche measure like "longest tongue"). And indeed, my unique situation is perhaps more immediate and obvious to myself and others than even height or wealth.
For that reason, I've always had an unconscious fear that I'm living in a fake, or simulated, world. That fear recently entered my awareness. I reasoned a couple of days ago that the fear is motivated by an implicit use of anthropic reasoning. Something along the lines of, "I could have been any human, so the fact that I'm this particular one, this unique human, means there's 'something wrong' with my world. And therefore I'm in a simulation." Something like that. I read through various posts on this site related to anthropic reasoning, including when to use SSA and SIA, but none of them seem to address my concern specifically. Hopefully someone reading this can help me.
To be clear, the question I want answered is the following: "Based on the theory of anthropic reasoning as it is currently understood, from my perspective alone (not your perspective, as the person responding to me, but my own), is my distinctiveness strong evidence for being in a simulation? And if it is, by how much should I 'update' my belief in the simulation given my own observation of my distinctiveness?"
Please let me know if you need any clarifications on this question. The question matters a lot to me, so thank you to anyone who responds.
Edit: In particular, I wonder if the following Bayesian update is sound:
As rough estimates, let Pr(I'm in a simulation) = 0.01, Pr(I'm distinct | I'm not in a simulation) = 0.0001, Pr(I'm distinct | I'm in a simulation) = 0.5 — a high probability, since I assume simulated observers are quite likely to be 'special' or 'distinct' with respect to the class of other entities in their simulated world that appear to be observers. (Though perhaps this assumption is precisely my error. Should I be applying SIA here to argue that this latter probability is much smaller? Because simulated worlds in which the other observers are real and not 'illusory' would have a low probability of distinctiveness and far more observers? I don't know if this is sound. Should I be using SSA instead here to make an entirely separate argument?)
From these estimates, we calculate Pr(I'm distinct) ≈ 0.0051, and then, using Bayes' theorem, we find Pr(I'm in a simulation | I'm distinct) ≈ 0.98. So even with a quite small 0.01 prior on being in a simulation, the fact that I'm distinct gives me a 98% chance that I'm in a simulation.
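A minimal sketch of that update, using the estimates above (all three input probabilities are the rough guesses stated, not measured values):

```python
# Bayes update from the rough estimates above.
p_sim = 0.01                        # prior: I'm in a simulation
p_distinct_given_sim = 0.5
p_distinct_given_not_sim = 0.0001

p_distinct = (p_sim * p_distinct_given_sim
              + (1 - p_sim) * p_distinct_given_not_sim)
p_sim_given_distinct = p_sim * p_distinct_given_sim / p_distinct

print(f"Pr(I'm distinct) ≈ {p_distinct:.4f}")                        # ≈ 0.0051
print(f"Pr(simulation | I'm distinct) ≈ {p_sim_given_distinct:.2f}")  # ≈ 0.98
```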