Reality is whatever you should consider relevant. Even exact simulations of your behavior can still be irrelevant (as considerations that should influence your thoughts and decisions, and consequently the thoughts and decisions of those simulations), similarly to someone you will never interact with thinking about your behavior, or writing down a large natural number that encodes your mind, as it stands now or in an hour in response to some thought experiment.
So it's misleading to say that you exist primarily as the majority of your instances (somewhere in the bowels of an algorithmic prior), because you plausibly shouldn't care about what's happening with the majority of your instances (which is to say, those instances shouldn't care about what's happening to them), and so a more useful notion of where you exist won't be about them. We can still consider these other instances, but I'm objecting to framing their locations as the proper meaning of "our reality". My reality is the physical world, the base reality, because this is what seems to be the thing I should care about for now (at least until I can imagine other areas of concern more clearly, something that likely needs more than a human mind, and certainly needs a better understanding of agent foundations).
I think your position can be oversimplified as follows: 'Being in a simulation' makes sense only if it has practical, observable differences. But as most simulations closely match the base world, there are no observable differences. So the claim has no meaning.
However, in our case, this isn't true. The fact that we know we are in a simulation 'destroys' the simulation, and thus its owners may turn it off or delete those who come too close to discovering they are in a simulation. If I care about the sudden non-existence of my instance, this can be a problem.
Moreover, if the alien simulation idea is valid, they are simulating possible or even hypothetical worlds, so there are no copies of me in base reality, as there is no relevant base reality (excluding infinite multiverse scenarios here).
Also, being in an AI-testing simulation has observable consequences for me: I am more likely to observe strange variations of world history or play a role in the success or failure of AI alignment efforts.
If I know that I am simulated for some purpose, the only thing that matters is what conclusions I prefer the simulation owners will make. But it is not clear to me now, in the case of an alien simulation, what I should want.
One more consideration is what I call meta-simulation: a simulation in which the owners are testing the ability of simulated minds to guess that they are in a simulation and hack it from inside.
TLDR: If I know that I am in a simulation, then the simulation plus its owners is the base reality that matters.
I don’t think it’s clear that knowing we’re in a simulation “destroys” the simulation. This assumes that belief by the occupants of the simulation that they are being simulated creates an invalidating difference from the desired reference class of plausible pre-singularity civilizations, but I don’t think that’s true:
Actual, unsimulated, pre-singularity civilizations are in similar epistemic positions to us and thus many of their influential occupants may wrongly but rationally believe they are simulated, which may affect the trajectory of the development of their ASI. So knowing the effects of simulation beliefs is important for modeling actual ASIs.
This is true only if we assume that a base reality for our civilization exists at all. But knowing that we are in a simulation shifts where the main utility of our existence lies, which is what Nesov wrote about above.
For example, if in some simulation we can break out, this would be a more important event than what is happening in the base reality where we likely go extinct anyway.
And as the proportion of simulations is very large, even a small chance to break out from inside a simulation, perhaps via negotiation with its owners, has more utility than focusing on base reality.
This post by EY is about breaking out of a simulation: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message
I agree with you, I think, but I don't think your primary argument is relevant to this post? It's arguing that your "physical" current reality is a simulation run for specific reasons. That is quite possibly highly relevant by your criteria, because it could have very large implications for how you should behave tomorrow. The simulation argument doesn't mean it's an atom by atom simulation identical to the world if it were "real" and physical. Just the possible halting criteria might change your behavior if you found it plausible, for instance, and there's no telling what else you might conclude is likely enough to change your behavior.
By your theory, if you believe that we are near the singularity, how should we update on the likelihood that we exist at such an incredibly important time?
We can directly observe the current situation that's already trained into our minds; that's clearly where we are (since there is no legible preference telling us otherwise, that we should primarily or at least significantly care about other things instead, though in principle there could be, and on superintelligent reflection we might develop such claims). Updatelessly we can ask which situations are more likely a priori, to formulate more global commitments (to listen to particular computations) that coordinate across many situations, where the current situation is only one of the possibilities. But the situations are possible worlds, not possible locations/instances of your mind. The same world can have multiple instances of your mind (in practice most importantly because other minds are reasoning about you, but also it's easy to set up concretely for digital minds), and that world shouldn't be double-counted for the purposes of deciding what to do, because all these instances within one world will be acting jointly to shape this same world; they won't be acting to shape multiple worlds, one for each instance.
And so the probabilities of situations are probabilities of the possible worlds that contain your mind, not probabilities of your mind being in a particular place within those worlds. I think the notion of the probability of your mind being in a particular place doesn't make sense (it's not straightforwardly a decision relevant thing formulating part of preference data, the way probability of a possible world is), it conflates the uncertainty about a possible world and uncertainty about location within a possible world.
Possibly this originates from the imagery of a possible world being a location in some wider multiverse that contains many possible worlds, similarly to how instances of a mind are located in some wider possible world. But even in a multiverse, multiple instances of a mind (existing across multiple possible worlds) shouldn't double-count the possible worlds, and so they shouldn't ask about the probability of being in a particular possible world of the multiverse, instead they should be asking about the probability of that possible world itself (which can be used synonymously, but conceptually there is a subtle difference, and this conflation might be contributing to the temptation to ask about probability of being in a particular situation instead of asking about probability of the possible worlds with that particular situation, even though there doesn't seem to be a principled reason to consider such a thing).
Trying to break out of a simulation is a different game than preventing x-risks in the base world, and may have even higher utility if we expect almost inevitable extinction.
I find this far more convincing than any variant of the simulation argument I've heard before. They've lacked a reason that someone would want to simulate a reality like ours. I haven't heard a reason for simulating ancestors that's strong enough to make me think an AGI or its biological creators would want to spend the resources, or that explains the massive apparent suffering happening in this sim.
This is a reason. And if it's done in a computationally efficient manner, possibly needing little more compute than running the brains involved directly in the creation of AGI, this sounds all too plausible - perhaps even for an aligned AGI, since most of the suffering can be faked and the people directly affecting AGI are arguably almost all leading net-positive-happiness lives. If what you care about is decisions, you can just simulate in enough detail to capture plausible decision-making processes, which could be quite efficient. See my other comment for more on the efficiency argument.
I am left with a new concern: being shut down even if we succeed at alignment. This will be added to my many concerns about how easily we might get it wrong and experience extinction, or worse, suffering-then-extinction. Fortunately, my psyche thus far seems to carry these concerns fairly lightly. Which is probably a coincidence, right?
I find some of the particular arguments' premises implausible, but I don't think they hurt the core plausibility argument. I've never found it very plausible that we're in a simulation. Now I do.
What interesting ideas can we suggest to the Paperclipper simulator so that it won't turn us off?
One simple idea is a "pause AI" feature. If we pause the AI for a finite (but not indefinite) amount of time, the whole simulation will have to wait.
The problem is, when we simulate cars or airplanes in software, we don't do it at molecular level. There are big regularities that cut the cost by many orders of magnitude. So simulating the Earth with all its details, including butterflies and so on, seems too wasteful if the goal is just to figure out what kind of AI humans would create. The same resources could be used to run many orders of magnitude more simplified simulations, maybe without conscious beings at all, but sufficient to predict roughly what kind of AI would result.
We don't know that our reality is being simulated at the molecular level, we could just be fooled into thinking it is.
but if it's simulated in less detail, it gives much less realityfluid to mindlike structures, meaning the mindlike structures are likely in actual physical bodies.
to be clear, I think there are detailed sims out there. but I measure relevance by impact, and treat the sims as just really high resolution memories. I don't waste time thinking about what's in the sims except by nature of thinking about what I want to do with my downtime such that it's what they have to be remembering.
That doesn't make sense to me. If someone wants to fool me that I'm looking at a tree, he has to paint a tree in every detail. Depending on how closely I examine this tree, he has to match my scrutiny to the finest detail. In the end, his rendering of a tree will be indistinguishable from an actual tree even at the molecular level.
In your dreams do you ever see trees you think are real? I doubt your brain is simulating the trees at a very high level of detail, yet this dream simulation can fool you.
Dreams exhibit many incoherencies. You can notice them and become "lucid". Video games are also incoherent. They don't obey some simple but extremely computationally demanding laws. They instead obey complicated laws that are not very computationally demanding. They cheat with physics for efficiency reasons, and those cheats are very obvious. Our real physics, however, hasn't uncovered such apparent cheats. Physics doesn't seem incoherent, it doesn't resemble a video game or a dream.
This does not imply that the simulation is run entirely in linear time, or at a constant frame rate (or equivalent), or that details are determined a priori instead of post hoc. It is plausible such a system could run a usually-convincing-enough simulation at lower fidelity, back-calculate details as needed, and modify memories to ignore what would have been inconsistencies when doing so is necessary or just more useful/tractable. 'Full detail simulation at all times' is not a prerequisite for never being able to find and notice a flaw, or for getting many kinds of adequate high level macroscopic outputs.
In other words: If I want to convince you something is a real tree, it needs to look and feel like a tree, but it doesn't need an exact, well-defined wave-function. Classical approximations at tens of microns scale are about the limit of unaided human perception. If you pull out a magnifying glass or a scanning electron microscope, then you can fill in little pieces of the remaining whole, but you still aren't probing the whole tree down to the Planck scale.
Those butterflies don't need to take up much more compute than we currently use for games. There are lots of ways to optimize. See my comment for more on this argument.
That shouldn't matter though, as your decisions will still influence what those simplified simulations should predict. And so if you care about what happens in those simulations, or in response to those simulations, your decisions should take their existence and use into account. Your atoms are simulating the computations relevant to your decisions, and a simulation can directly consider those computations, without the intermediary of the atoms.
Arguably you are not your atoms, but the abstract considerations that shape your mind and decisions (and therefore also shape what is happening to the atoms), similarly to how the result of adding two numbers displayed on a calculator screen is shaped by the fact of their sum being a particular number, and this abstract fact also shapes the physical screen displaying the result. It's possible to simulate everything that's relevant about the calculator without considering its atoms, simply by knowing the abstract facts.
I tweeted about something a lot like this
https://xcancel.com/robertskmiles/status/1877486270143934881
Yes, that is the same idea. "This is a big pile of speculation that I don't take very seriously, but I feel like if we are being simulated, that's where most simulations of me would be instantiated." Why not take it seriously, if you accept a high chance that (1) our reality is a simulation, (2) we seem on track to creating a paperclip maximizer, and (3) it's weird that I, Robert Miles, would have the personality traits that cause me to be one of the few humans so worried about humanity creating a paperclip maximizer, if I'm right about us being on track to probably create one?
I disagree with the insistence on "paperclip maximiser". As an emerging ASI you want to know about other ASIs you'll meet, especially grabby ones. But there are aligned grabby ASIs. You'd want an accurate prior, so I don't think this updates me on the probability of alignment, or even much on grabbiness, since it's hard to know ahead of time; that's why you'd run a simulation in the first place.
I don't take it very seriously because (1) it is a big pile of assumptions and I don't trust anthropic reasoning much at the best of times; it's very confusing and hard to think about; and (2) the simulation is most useful if it's using the same physics as base reality, so it kind of doesn't matter if we're simulated or not or both; our actions should be the same. So it's not really decision-relevant; I'd follow ~the same policy either way.
My alternative hypothesis is that we're being simulated by a civilization trying to solve philosophy, because they want to see how other civilizations might approach the problem of solving philosophy.
If your hypothesis is true, that's a cruel civilization by my personal standards because of all the suffering in this world.
But as you suggested in the post, the apparently vast amount of suffering isn't necessarily real? "most cosmic details and human history are probably fake, and many apparent people could be non‑conscious entities"
(However, I take the point that doing such simulations can be risky or problematic, e.g. if one's current ideas about consciousness are wrong, or if doing philosophy correctly requires having experienced real suffering.)
I'm in low level chronic pain including as I write this comment, so while I think the entire Andromeda galaxy might be fake, I think at least some suffering must be real, or at least I have the same confidence in my suffering as I do in my consciousness.
You realize that from my perspective, I can't take this at face value due to "many apparent people could be non‑conscious entities", right? (Sorry to potentially offend you, but it seems like too obvious an implication to pretend not to be aware of.) I personally am fairly content most of the time but do have memories of suffering. Assuming those memories are real, and your suffering is too, I'm still not sure that justifies calling the simulators "cruel". The price may well be worth paying, if it potentially helps to avert some greater disaster in the base universe or other simulations, caused by insufficient philosophical understanding, moral blind spots, etc., and there is no better alternative.
Yes, I agree that you can't give too much weight to my saying I'm in pain because I could be non-conscious from your viewpoint. Assuming all humans are conscious and pain is as it appears to be, there seems to be a lot of unnecessary pain, but yes I could be missing the value of having us experience it.
I'm often in low-level chronic pain. Mine probably isn't as bad as yours, so my life is clearly still net-positive (if you believe that positive emotions can outweigh suffering, which I do). Are you net negative, do you think?
Sorry you're in pain!
Simulating civilizations won't solve philosophy directly, but can be useful for doing so eventually by:
Why do you think they haven't talked to us?
Creating zillions of universes doing bad philosophy (or at least presumably worse than they could do if the simulators shared their knowledge) doesn't seem like a good way to try to solve philosophy.
Even if they prefer to wait and narrow down a brute force search to ASIs that the surviving civilizations create (like in jaan's video), it seems like it would be worth not keeping us in the dark so that we don't just create ASIs like they've already seen before from similarly less informed civilizations.
Why do you think they haven't talked to us?
They might be worried that their own philosophical approach is wrong but too attractive once discovered, or creates a blind spot that makes it impossible to spot the actually correct approach. The division of western philosophy into analytical and continental traditions, who are mutually unable to appreciate each other's work, seems to be an instance of this. They might think that letting other philosophical traditions independently run to their logical conclusions, and then conversing/debating, is one way to try to make real progress.
Perhaps in most of the simulations, they help by sharing what they've learned, giving brain enhancements, etc., but those ones quickly reach philosophical dead ends, so we find ourselves in one of the ones which doesn't get help and takes longer doing exploration.
(This seems more plausible to me than using the simulations for "mapping the spectrum of rival resource‑grabbers" since I think we're not smart enough to come up with novel ASIs that they haven't already seen or thought of.)
Here's a slightly more general way of phrasing it:
We find ourselves in an extremely leveraged position, making decisions which may influence the trajectory of the entire universe (more precisely our lightcone contains a gigantic amount of resources). There are lots of reasons to care about what happens to universes like ours, either because you live in one or because you can acausally trade with one that you think probably exists. "Paperclip maximizers" is a very small subset of the parties that have a reason to be interested in trying to figure out what happens to universes like ours.

I'd wager there are a lot more simulations of minds in highly leveraged positions than there are minds which actually do have a lot of leverage. Being one of the people working on AI/AI safety adds several OOMs of coincidence over being a human in this time period in general, but being a super early mind in this universe at all is still hugely leveraged. Since highly leveraged minds are a lot more likely to be created in simulations than they are to actually have a lot of leverage, you are probably in a simulation.

That said, for most utility functions it really shouldn't matter. If you're simulated, it's because your decisions are correlated in some important way with decisions that actually do influence huge amounts of resources, otherwise no one would bother running the simulation. You might as well just act how you would want yourself to act conditional on influencing huge amounts of resources. If your utility is a lot more highly discounted, and you just care about your own short term experiences, then you can just enjoy yourself, simulated or not (although maybe this reduces your measure a bit because no one will bother simulating you if you aren't going to make influential decisions).
This is useful RE the leverage, except it skips the why. "Lots of reasons" isn't intuitive for me; can you give some more? Simulating people is a lot of trouble and quite unethical if the suffering is real. So there needs to be a pretty strong and possibly amoral reason. I guess your answer is acausal trade? I've never found that argument convincing but maybe I'm missing something.
For an unaligned AI, it is either simulating alternative histories (which is the focus of this post) or creating material for blackmail.
For an aligned AI:
a) It may follow a different moral theory than our version of utilitarianism, in which existence is generally considered good despite moments of suffering.
b) It might aim to resurrect the dead by simulating the entirety of human history exactly, ensuring that any brief human suffering is compensated by future eternal pleasure.
c) It could attempt to cure past suffering by creating numerous simulations where any intense suffering ends quickly, so by indexical uncertainty, any person would find themselves in such a simulation.
I find this argument fairly compelling. I also appreciate the fact that you've listed out some ways it could be wrong.
Your argument matches fairly closely with my own views as to why we exist, namely that we are computationally irreducible.
It's hard to know what to do with such a conclusion. On the one hand it's somewhat comforting because it suggests even if we fuck up, there are other simulations or base realities out there that will continue. On the other hand, the thought that our universe will be terminated once sufficient data has been gathered is pretty sad.
I don't know about "by a paperclip maximizer", but one thing that stands out to me:
If we're in a simulation, we could be in a simulation where the simulator did 1e100 rollouts from the big bang forward, and then collected statistics from all those runs.
But we could also be in a simulation where the simulator is doing importance sampling - that is, doing fewer rollouts from states that tend to have very similar trajectories given mild perturbations, and doing more rollouts from states that tend to have very different trajectories given mild perturbations.
If that's the case, we should find ourselves living in a world where events seem to be driven by coincidences and particularly by things which are downstream of chaotic dynamics and which had around a 50/50 chance of happening vs not. We should find more such coincidences for important things than for unimportant things.
Hey, maybe it's not our fault we live in the clown world. Maybe the clown world is a statistical inevitability.
When the bullet missed Trump by half an inch I made a lot of jokes about us living in an importance-sampled simulation.
a more extreme version of the "god created worlds starting from the best, and kept making more until running out of just-barely-good-enough ones". in this one, it would be a world-creator which has no interest in seeking out good worlds, focusing instead on the ones that are most difficult to understand. if that's the case, we should expect to be in an impactful part of the underlying real world, and so should focus our actions there. we'll tend to observe continuing to be in an impactful part of the world, but in the underlying real worlds that impact the simulators, we'll be having impacts that affect things in ways (hopefully somewhat, if we're skilled and lucky) closer to what we hope for.
Let's say I want to evaluate an algorithmic Texas Hold'em player against a field of algorithmic opponents.
The simplest approach I could take would be pure monte-carlo: run the strategy for 100 million hands and see how it does. This works, but wastes compute.
Alternatively, I could use the importance sampled approach:
By skipping rollouts once I know what the outcome is likely to be, I can focus a lot more compute on the remaining scenarios and come to a much more precise estimate of EV (or whatever other metric I care about).
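To make that concrete, here's a minimal Python sketch of the allocation idea. Everything in it (the toy `rollout` function, the numeric "states") is an illustrative stand-in of my own, not part of any real poker evaluator: a small pilot run estimates each start state's outcome variance, and the remaining rollout budget is spent mostly on the high-variance states.

```python
# A minimal sketch of variance-weighted rollout allocation (illustrative only).
# `rollout` is a toy stand-in for playing one hand from a given start state;
# a real evaluator would play the hand out and return the hero's chip delta.
import random
import statistics

def rollout(state_volatility: float) -> float:
    # Toy model: a state's "volatility" controls how spread out its outcomes are.
    return random.gauss(0.0, state_volatility)

def evaluate(states, pilot_n=30, total_budget=10_000):
    # 1. Pilot phase: a few rollouts per state to estimate outcome variance.
    pilots = {s: [rollout(s) for _ in range(pilot_n)] for s in states}
    stdevs = {s: statistics.stdev(v) for s, v in pilots.items()}

    # 2. Allocation phase: split the remaining budget in proportion to each
    #    state's estimated standard deviation, so "predictable" states whose
    #    outcome is already clear get almost no further rollouts.
    remaining = total_budget - pilot_n * len(states)
    total_sd = sum(stdevs.values())
    extra = {s: int(remaining * sd / total_sd) for s, sd in stdevs.items()}

    # 3. Main phase: run the allocated rollouts and average per-state means.
    per_state_ev = {}
    for s in states:
        samples = pilots[s] + [rollout(s) for _ in range(extra[s])]
        per_state_ev[s] = statistics.fmean(samples)

    # Overall EV: unweighted mean over start states (each assumed equally likely).
    return statistics.fmean(per_state_ev.values()), extra

if __name__ == "__main__":
    ev, allocation = evaluate(states=[0.1, 0.5, 5.0])
    print("estimated EV:", round(ev, 3))
    print("extra rollouts per state:", allocation)
```

The precision gained this way is the same reason a simulator might perturb and re-run the chaotic, coin-flip moments of a history far more often than the stretches whose outcome is already locked in.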
Thanks, I get it now.
Would this help with the simulation goal hypothesized in the OP? It's asking how often different types of AGIs would be created. A lot of the variance is probably carried in what sort of species and civilization is making the AGI, but some of it is carried by specific twists that happen near the creation of AGI. Getting a president like Trump and having him survive the (fairly likely) assassination attempt(s) is one such impactful twist. So I guess sampling around those uncertain impactful twists would be valuable in refining the estimate of, say, how frequently a relatively wise and cautious species would create misaligned AGI due to bad twists, and vice versa.
Hm.
New EA cause area just dropped: Strategic variance reduction in timelines with high P(doom).
BRB applying for funding
Yet the universe runs on strikingly simple math (relativity, quantum mechanics); such elegance is exactly what an efficient simulation would use. Physics is unreasonably effective, reducing the computational cost of the simulation. This cuts against the last point.
This does not seem so consistent, and is the primary piece of evidence for me against such simulation arguments. I would imagine simulations targeting, e.g., a particular purpose would have their physics tailored to that purpose much more than ours seems to be (for any purpose, given the vast computational complexity of our physics, and the vast number of objects such a physics engine needs to keep track of). For example, I'd expect most simulations' physics to look more like Greg Egan's Crystal Nights (incidentally, this story is what first convinced me the simulation hypothesis was false).
One may argue it's all there just to convince us we're not in a simulation. Perhaps, but two points:
Given the discourse on the simulation hypothesis, most seem to take our physics as evidence in favor of it, as you do here. So I don't think most think clearly enough about this for our civilizational decisions to be so dependent on this.
The simulators will have trade-offs and resource constraints too. Perhaps they simulate a few highly detailed simulations and many highly simplified ones. If this is exponential, in the sense that as the detail decreases the number of simulations exponentially increases, we should expect to be in the least detailed world consistent with the existence of sentiences and for which it's not blatantly obvious we're in a simulation.
Of course this argument would break given sufficiently different physics from ours, perhaps enabling our world to be simulated in as much depth as it is very cheaply. But that seems, intuitively at least, a very unlikely and complex hypothesis.
This and other simulation arguments become more plausible if you assume that they require only a tiny fraction of the compute needed to simulate physical reality. Which I think is true. I don't think it takes nearly as much compute to run a useful simulation of humans as people usually assume.
I don't see a reason to simulate at nearly a physical level of detail. I suspect you can do it using a technique that's more similar to the simulations you describe, except for the brains involved, which need to be simulated in detail to make decisions like evolved organisms would. But that detail is on the order of computations, not molecules. Depending on which arguments you favor, a teraflop or a couple OOMs above might be enough to simulate a brain with adequate fidelity to capture its decision-making to within the large uncertainty of exactly what type of organisms might evolve.
"physical" reality can be simulated in very low fidelity relative to atoms, because it's not the important part for this and many proposed simulation purposes. It just has to be enough to fool the brains involved. And brains naturally fill in details as part of their fundamental computational operation.
For this purpose you'd also want to get the basic nature of computing right, because that might well have a large effect on what type of AGI is created. But that doesn't mean you need to simulate the electrons doing quantum tunneling in wafer transistors; it just means you need to constrain the simulation so the compute behaves approximately as if quantum-tunneling transistors were the base technology.
On this thesis, the compute needed is mostly that needed to run the brains involved.
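To put rough numbers on that thesis, here's a tiny back-of-envelope sketch in Python. Every figure in it is an illustrative assumption of mine (a teraflop per brain is just the low end mentioned above), not a claim from the post.

```python
# Rough back-of-envelope for the "compute is mostly the brains" thesis.
# All constants below are illustrative assumptions, not established estimates.
FLOPS_PER_BRAIN = 1e12     # "a teraflop" per simulated brain; plausibly a
                           # couple of orders of magnitude higher
N_HUMANS = 8e9             # simulate every living human at that fidelity
SECONDS_PER_YEAR = 3.15e7

flops_all_brains = FLOPS_PER_BRAIN * N_HUMANS            # ~8e21 FLOP/s
flop_per_sim_year = flops_all_brains * SECONDS_PER_YEAR  # ~2.5e29 FLOP

print(f"compute rate for all brains: {flops_all_brains:.1e} FLOP/s")
print(f"one simulated year:          {flop_per_sim_year:.1e} FLOP")
```

Even if those guesses are off by a few OOMs, it's nowhere near what an atom-level Earth would cost (Earth alone has on the order of 1e50 atoms), which is the sense in which the brains dominate the budget.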
This isn't a necessary twist, but one might even cut corners by not simulating all humans in full fidelity. All of society does play into the factors in the AGI race, but it's possible that an AGI could run many times more simulations if only key decision-makers were simulated in full fidelity and others were somehow scaled down. However, I want to separate this even weirder possibility from both the main argument of the post and my main argument here: simulation for many purposes can probably be many, many OOMs smaller than the atomic level - possibly using very few resources if a lot of technology and energy is available for compute.
I'll make a separate comment on the actual thesis of this post. TLDR: I find this far more compelling than other variants of the simulation argument.
I don't think this scenario is likely. Except for degenerate cases, an ASI would have to continue to grow and evolve well beyond the point at which a simulation would need to stop, to avoid consuming an inordinate amount of resources. And, to take an analogy, studying human psychology based on prokaryotic life forms that will someday evolve into humans seems inefficient. If I were preparing for a war with an unknown superintelligent opponent, I would probably be better off building weapons and studying (super)advanced game theory.
Which ideas seem slightly more likely to me?
Although I think these are slightly more likely than the proposed hypothesis, they are still not very likely. However, it seems logical that there should be many more simulated worlds than real ones. So I believe it is reasonable to think about some of the most likely scenarios, as well as those involving the greatest potential gain or danger, and act accordingly where possible.
There is, however, another line of thought that has long troubled me. As Moravec once observed, for any possible simulation, there always exists somewhere in the infinite universe a huge lookup table that translates, for example, the simple flow of time into a sequence of states of this simulation. In other words, not only are there many worlds, but even one world simulates everything that can theoretically be simulated. Therefore, such natural simulations are much more common than artificial ones, and we live in one of them, and this situation is not much different (except for existential absurdity) from life in "real" reality.
One could argue that having a giant lookup table that no one looks at is the same as having a simulation on a computer that is turned off. It's not an actual running simulation. However, if time itself is an illusion, and everything exists as snapshots that we, observers, nevertheless perceive as moving with the passage of time, then we are no different from observers inside the simulation from such a lookup table. They might also perceive the illusion that their simulation is running, and it could seem awfully real to them.
An implicit assumption (which should have been made explicit) of the post is that the cost per simulation is tiny. This is like in WW II where the US would send a long-range bomber to take photos of Japan. I agree with your last paragraph and I think it gets to what is consciousness. Is the program's existence enough to generate consciousness, or does the program have to run to create conscious observers?
afaict, this is true the same way major historical figures are primarily approximately-instantiated in simulations today (movies and fiction). it's just a more intense version of "history has its eyes on you" - history is a thing in the future that does a heck of a lot of simulating. what we do to affect that history is still what matters, though.
Another possibility is that the beings in the unsimulated universe are simulating us in order to do a Karma Test: a test that reward agents who are kind and merciful to weaker agents.
By running Karma Tests, they can convince their more powerful adversaries to be kind and merciful to them, due to the small possibility that their own universe is also a Karma Test (by even higher beings faced with their own powerful adversaries).
Logical Counterfactual Simulations
If their powerful adversaries are capable of "solving ontology," and mapping out all of existence (e.g. the Mathematical Multiverse), then doing Karma Tests on smaller beings (like us humans) will fail to convince their powerful adversaries that they could also be in a Karma Test.
However, certain kinds of Karma Tests work even against an adversary capable of solving ontology.
This is because the outer (unsimulated) universe may be so radically different than the simulated universe, that even math and logic is apparently different. The simulators can edit the beliefs of simulated beings to believe an incorrect version of math and logic, and never ever detect the mathematical contradictions. The simulated beings will never figure out they are in a simulation, because even math and logic appears to suggest they are not in one.
Hence, even using math and logic to solve ontology, cannot definitively prove you aren't in a Karma Test.
Edit: see my reply about suffering in simulations.
The people running the Karma test deserve to lose a lot of Karma for the suffering in this world.
The beings running the tests can skip over a lot of the suffering, and use actors instead of real victims.[1] Even if actors show telltale signs, they can erase any reasoning you make which detects the inconsistencies. They can even give you fake memories.
Of course, don't be sure that victims are actors. There's just a chance that they are, and that they are judging you.
I mentioned this in the post on Karma Tests. I should've mentioned it in my earlier comment.
I'm in low level chronic pain including as I write this comment, so while I think the entire Andromeda galaxy might be fake, I think at least some suffering must be real, or at least I have the same confidence in my suffering as I do in my consciousness.
:( oh no I'm sorry.
Thank you for giving me some real life grounding, strong upvote.
Now that I think about it, I would be quite surprised if there wasn't deep (non-actor) suffering in our world.
Nonetheless, I'm not sure that the beings running our Karma Test will end up with low Karma. We can't rule out the possibility they cause a lot of suffering to us, but are somewhat reasonable in a way we and other beings would understand: here is one example possibility.
Suppose in the outside world, evolution continues far above the level of human intelligence and sentience, before technology is invented. In the outside world, there is no wood and metal just lying around to build stuff with, so you need a lot of intelligence before you get any technology.
So in the outside world, human-intelligence creatures are far from being on the top of the food chain. In fact, we are like insects from the point of view of the most intelligent creatures. We fly around and suck their blood, and they swat at us. Every day, trillions of human-intelligence creatures are born and die.
Finally, the most intelligent creatures develop technology, and find a way to reach post scarcity paradise. At first, they do not care at all about humans, since they evolved to ignore us like mosquitoes.
But they have their own powerful adversaries (God knows what) that they are afraid will kill them all for tiny gains, the same way we fear misaligned ASI will kill us all for tiny gains.
So they decide to run Karma Tests on weaker creatures, in order to convince their powerful adversaries they might be in Karma Tests too.
They perform the Karma Tests on us humans, and create our world. Tens of billions of humans are born and die, and often the lives are not that pleasant. But we still live relatively better than the human-intelligence creatures in their world.
And they feel, yes, they are harming weaker creatures. But it's far less suffering than the normal amount of suffering of human-intelligence creatures in their world in a single day! And for each human who dies, they give her a billion-year afterlife, where she meets with all the other humans she knows, and has hugs, and is happy.
And just, the total negative effect on human-intelligence creatures is, from their point of view, negligible, since trillions of us die every day in their world's version of nature.
While the total positive effect on human-intelligence creatures is pretty great. First of all, they create 10% more happy human lives, to offset the miserable human lives, similarly to how some people in the EA Forum talk about buying offsets (donations) every time they eat meat.
Second of all, their Karma Tests convince their powerful adversaries to be a bit kinder to weaker creatures, including human-intelligence creatures, and this effect is bigger than the suffering they cause.
The nature of human morality is that we only extend our deontological ethics to our own species, e.g. we don't kill other humans. Animals, especially those much lower than us, aren't given much deontological concern; they are only given utilitarian concern. This is why if an animal is sick and dying, we simply kill it, but the same can't be done to a human.
Creatures more intelligent than us might treat us the same way, as long as it increases our total happiness and decreases our total misery, they will feel fine, and even higher beings judging them will probably feel fine about them too. Even humans, being told an honest description of what they are doing, will probably understand it, accept it, and begrudgingly accept we might have done the same thing in their shoes.
Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource‑grabbers it may encounter while expanding through the cosmos. The purpose of this simulation is to see what kind of ASI (artificial superintelligence) we humans end up creating. The paperclip maximizer likely runs a vast ensemble of biology‑to‑ASI simulations, sampling the superintelligences that evolved life tends to produce. Because the paperclip maximizer seeks to reserve maximum resources for its primary goal (which despite the name almost certainly isn’t paperclip production) while still creating many simulations, it likely reduces compute costs by trimming fidelity: most cosmic details and human history are probably fake, and many apparent people could be non‑conscious entities. Arguments in support of this thesis include:
Falsifiable predictions: This simulation ends or resets after humans either lose control to an ASI or take actions that cause us to never create an ASI. It might end if we take actions that guarantee we will only create a certain type of ASI. There are glitches in this simulation that might be noticeable, but which won’t bias what kind of ASI we end up creating so your friend who works at OpenAI will be less likely to accept or notice a real glitch than a friend who works at the Against Malaria Foundation would. People working on ASI might be influenced by the possibility that they are in a simulation because those working on ASI in the non-simulated universe could be, but they won’t be influenced by noticing actual glitches caused by this being a simulation.
Reasons this post’s thesis might be false: