"Free will" being an illusion fits well with the simulation hypothesis.
As in a game of The Sims, the characters' actions are chosen in advance:
a string of actions where your last action affects the next one, and where actions can be cancelled out and changed.
Your next action is to prepare a meal. You walk to the kitchen to start preparing the meal when you open the fridge and notice you don't have any food. The action is now cancelled and replaced with "Go to the store to buy food".
Many Worlds against Simulation?
Let's assume a few things:
1. Many Worlds is real.
2. All identical consciousnesses measure as 1 in anthropics. So if we have the set of consciousnesses 1×A, 1×B and 1,000,000×C, there is still a 1/3 chance of perceiving being C.
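Assumption #2 can be made concrete with a toy count, using exactly the numbers from the example above:

```python
# Toy count for assumption #2: identical consciousnesses measure as 1.
from fractions import Fraction

# The multiset of observers: 1 x A, 1 x B, 1,000,000 x C.
observers = ["A"] * 1 + ["B"] * 1 + ["C"] * 1_000_000

# Naive copy-counting gives C an overwhelming probability...
naive = Fraction(observers.count("C"), len(observers))  # 500000/500001

# ...but under assumption #2 duplicates collapse to a single consciousness,
# leaving a uniform choice over the distinct states A, B, C.
collapsed = Fraction(1, len(set(observers)))  # 1/3
```

The same collapse is what keeps the discrete chip simulation below at measure 1 across all Everett branches.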
Now say some intelligent being (e.g. a human) starts another human-brain simulation on a silicon chip. The operations it performs are all discrete, so even though the chip splits into many chips across many worlds, the simulated consciousness itself remains just 1 (because of assumption #2).
But that is not true for the human who started the simulation, as he differs somehow in every Everett branch and quickly branches into billions of different consciousnesses.
Is there some mistake in the reasoning that, given these assumptions, real persons should heavily outweigh simulations, no matter how many simulations are running?
Updating towards the simulation hypothesis because you think about AI
(This post is both written up in a rush and very speculative so not as rigorous and full of links as a good post on this site should be but I'd rather get the idea out there than not get around to it.)
Here’s a simple argument that could make us update towards the hypothesis that we live in a simulation. This is the basic structure:
1) P(involved in AI* | ¬sim) = very low
2) P(involved in AI | sim) = high
Ergo, assuming that we fully accept the argument and its premises (ignoring e.g. model uncertainty), we should strongly update in favour of the simulation hypothesis.
Premise 1
Suppose you are a soul who will randomly awaken in one of at least 100 billion beings (the number of Homo sapiens who have lived so far), probably many more. What you know about the world of these beings is that at some point there will be a chain of events that leads to the creation of a superintelligent AI. This AI will then go on to colonize the whole universe, making its creation the most impactful event the world will ever see, by an extremely large margin.
Waking up, you see that you’re in the body of one of the first 1000 beings trying to affect this momentous event. Would you be surprised? Given that you were randomly assigned a body, you probably would be.
(To make the point even stronger and slightly more complicated: Bostrom suggests using observer-moments, e.g. an observer-second, rather than beings as the fundamental unit of anthropics. You should be even more surprised to find yourself as an observer-second thinking about, or even working on, AI, since most of the observer-seconds in people's lives are not such moments. You reading this sentence may be such a second.)
Therefore, P(involved in AI* | ¬sim) = very low.
Premise 2
Given that we’re in a simulation, we’re probably in a simulation created by a powerful AI which wants to investigate something.
Why would a superintelligent AI simulate the people (and even more so, the 'moments’) involved in its creation? I have an intuition that there would be many reasons to do so. If I gave it more thought I could probably name some concrete ones, but for now this part of the argument remains shaky.
Another and probably more important motive would be to learn about (potential) other AIs. It may be trying to find out who its enemies are or to figure out ways of acausal trade. An AI created with the 'Hail Mary’ approach would need information about other AIs very urgently. In any case, there are many possible reasons to want to know who else there is in the universe.
Since you can’t visit them, the best way to find out is by simulating how they may have come into being. And since this process is inherently uncertain you’ll want to run MANY simulations in a Monte Carlo way with slightly changing conditions. Crucially, to run these simulations efficiently, you’ll run observer-moments (read: computations in your brain) more often the more causally important they are for the final outcome.
Therefore, the thoughts of people who are more causally connected to the properties of the final AI will be run many times, and that especially includes the thoughts of those who got involved first, as they may cause path-changes. AI capabilities researchers would not be so interesting to simulate because their work has less effect on the eventual properties of an AI.
If figuring out what other AIs are like is an important convergent instrumental goal for AIs, then a lot of minds created in simulations may be created for this purpose. Under SSA, the assumption that “all other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers [or observer moments] (past, present and future) in their reference class”, it would seem rather plausible that,
P(involved in AI | sim) = high
The closer you sit in the causal chain to the final outcome (capabilities research etc.), the stronger this effect. If you are reading this, you are probably one of the people who could have some influence over the eventual properties of a superintelligent AI, and as a result you should update towards living in a simulation that is meant to figure out the creation of an AI.
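The structure of the update can be sketched numerically. All the numbers below are illustrative placeholders (the post commits to none of them); the point is only that the large likelihood ratio between premises 1 and 2 swamps even a small prior:

```python
# Hedged sketch of the Bayes update; every number here is a placeholder.
def posterior_sim(prior_sim, p_involved_given_sim, p_involved_given_not_sim):
    """P(sim | involved in AI) by Bayes' rule."""
    joint_sim = prior_sim * p_involved_given_sim            # P(sim, involved)
    joint_not = (1 - prior_sim) * p_involved_given_not_sim  # P(~sim, involved)
    return joint_sim / (joint_sim + joint_not)

# Premise 1: ~1000 involved people out of 100 billion ever-lived -> ~1e-8.
# Premise 2: "high", taken here as 0.5. Prior on sim set low, at 1%.
p = posterior_sim(prior_sim=0.01,
                  p_involved_given_sim=0.5,
                  p_involved_given_not_sim=1e-8)
# p comes out above 0.999: the likelihood ratio dominates the prior.
```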
Why could this be wrong?
I could think of four general ways in which this argument could go wrong:
1) Our position in the history of the universe is not that unlikely
2) We would expect to see something else if we were in one of the aforementioned simulations.
3) There are other, more likely, situations we should expect to find ourselves in if we were in a simulation created by an AI
4) My anthropics are flawed
I’m most confused about the first one. Everyone has some things in their life that are very exceptional by pure chance. I’m sure there’s some way to deal with this in statistics, but I don’t know it. In the interest of my own time I’m not going to elaborate further on these failure modes and will leave that to the commenters.
Conclusion
Is this argument flawed? Or has it been discussed elsewhere? Please point me to it. Does it make sense? Then what are the implications for those most intimately involved with the creation of superhuman AI?
Appendix
My friend Matiss Apinis (othercenterism) put the first premise like this:
“[…] it's impossible to grasp that in some corner of the Universe there could be this one tiny planet that just happens to spawn replicators that over billions of painful years of natural selection happen to create vast amounts of both increasingly intelligent and sentient beings, some of which happen to become just intelligent enough to soon have one shot at creating this final invention of god-like machines that could turn the whole Universe into either a likely hell or unlikely utopia. And here we are, a tiny fraction of those almost "just intelligent enough" beings, contemplating this thing that's likely to happen within our lifetimes and realizing that the chance of either scenario coming true may hinge on what we do. What are the odds?!"
Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife
This is a bit rough, but I think that it is an interesting and potentially compelling idea. To keep this short, and accordingly increase the number of eyes over it, I have only sketched the bare bones of the idea.
1) Empirically, people have varying intuitions and beliefs about causality, particularly in Newcomb-like problems (http://wiki.lesswrong.com/wiki/Newcomb's_problem, http://philpapers.org/surveys/results.pl, and https://en.wikipedia.org/wiki/Irresistible_grace).
2) Also, as an empirical matter, some people believe in taking actions after the fact, such as one-boxing, or Calvinist “irresistible grace”, to try to ensure or conform with a seemingly already determined outcome. This might be out of a sense of retrocausality, performance, moral honesty, etc. What matters is that we know that they will act it out, despite it violating common sense causality. There has been some great work on decision theory on LW about trying to thread this needle well.
3) The second disjunct of the simulation argument (http://wiki.lesswrong.com/wiki/Simulation_argument) shows that the decision-making of humanity is evidentially relevant to what our subjective credence should be that we are in a simulation. That is to say, if we are actively headed toward making simulations, we should increase our credence that we are in one; if we are actively headed away from making simulations, through either existential risk or law/policy against them, we should decrease that credence.
4) Many, if not most, people would like for there to be a pleasant afterlife after death, especially if we could be reunited with loved ones.
5) There is no reason to believe that simulations which are otherwise nearly identical copies of our world could not contain, after the simulated bodily death of the participants, an extremely long-duration, though finite, "heaven"-like afterlife shared by simulation participants.
6) Our heading towards creating such simulations, especially if they were capable of nesting simulations, should increase credence that we exist in such a simulation and should perhaps expect a heaven-like afterlife of long, though finite, duration.
7) Those who believe in alternative causality, or retrocausality, in Newcomb-like situations should be especially excited about the opportunity to push the world towards surviving, allowing these types of simulations, and creating them, as it would potentially suggest, analogously, that if they work towards creating simulations with heaven-like afterlives, that they might in some sense be “causing” such a heaven to exist for themselves, and even for friends and family who have already died. Such an idea of life-after-death, and especially for being reunited with loved ones, can be extremely compelling.
8) I believe that people matching the above description, that is, holding both an intuition in alternative causality, and finding such a heaven-like-afterlife compelling, exist. Further, the existence of such people, and their associated motivation to try to create such simulations, should increase the credence even of two-boxing types, that we already live in such a world with a heaven-like afterlife. This is because knowledge of a motivated minority desiring simulations should increase credence in the likely success of simulations. This is essentially showing that “this probably happened before, one level up” from the two-box perspective.
9) As an empirical matter, I also think that there are people who would find the idea of creating simulations with heaven-like afterlives compelling, even if they are not one-boxers, from a simply altruistic perspective, both since it is a nice thing to do for the future sim people, who can, for example, probabilistically have a much better existence than biological children on earth can, and as it is a nice thing to do to increase the credence (and emotional comfort) of both one-boxers and two-boxers in our world thinking that there might be a life after death.
10) This creates the opportunity for a secular movement in which people work towards creating these simulations and use this work, and its potential success, to derive comfort and meaning from their lives. For example, making donations after a loved one’s death, partly symbolically and partly hopefully, to a think-tank that creates or promotes simulations, or that works to avoid existential threats.
11) There is at least some room for Pascalian considerations, even for two-boxers who allow for some humility in their beliefs. Nozick believed one-boxers would become two-boxers if the amount in Box A were raised to $900,000, and two-boxers would become one-boxers if it were lowered to $1. Similarly, working towards these simulations, even if you do not find it altruistically compelling, and even if you think the odds of alternative causality or retrocausality are infinitesimally small, might make sense in that the reward could be extremely large, including potentially trillions of lifetimes’ worth of time spent in an afterlife “heaven” with friends and family.
Finally, this idea might be one worth filling in (I have been, in my private notes, for over a year, but am a bit shy to debut all of that just yet; even working up the courage to post this was difficult), if only because it is interesting and could be used as a hook to get more people interested in existential risk, including the AI control problem. This is because existential catastrophe is probably the greatest enemy of credence in the future of such simulations, and accordingly of our reasonable credence in thinking that we have such a heaven awaiting us after death now. A short hook headline like “avoiding existential risk is key to the afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so creating publicity which would help in finding more like-minded folks to get involved in the work of MIRI, FHI, CEA etc.

There are also some really interesting ideas about acausal trade, and game theory between higher and lower worlds, as a form of “compulsion” in which higher worlds punish lower worlds for not creating heaven-containing simulations (thereby affecting their credence as observers of the simulation), in order to reach an equilibrium in which simulations with heaven-like afterlives are universal, or nearly so. More on that later if this is received well.
Also, if anyone would like to join me in researching, bull-sessioning, or writing about this stuff, please feel free to IM me. And if anyone has a really good, non-obvious pin with which to pop my balloon, preferably in a gentle way, it would be really appreciated: I am spending a lot of energy and time on this, and would like to know if it is fundamentally flawed in some way.
Thank you.
*******************************
November 11 Updates and Edits for Clarification
1) There seems to be confusion about what I mean by self-location and credence. A good way to think of this is the Sleeping Beauty Problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem)
If I imagine myself as Sleeping Beauty (and who doesn’t?), and I am asked on Sunday what my credence is that the coin will be tails, I will say 1/2. If I am awakened during the experiment without being told which day it is and am asked what my credence is that the coin was tails, I will say 2/3. If I am then told it is Monday, I will update my credence to 1/2. If I am told it is Tuesday, I update my credence to 1. If someone asks me two days after the experiment about my credence of it being tails, and I somehow still do not know the day of the week, I will say 1/2. Credence changes with where you are and with what information you have. As we might be in a simulation, we are somewhere in the “experiment days”, and information can help orient our credence. As humanity potentially has some say in whether or not we are in a simulation, information about how humans make decisions about these types of things can and should affect our credence.
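The thirder credences quoted above (2/3 on awakening with no information, 1/2 once told it is Monday) can be checked with a quick Monte Carlo sketch:

```python
# Monte Carlo sketch of the Sleeping Beauty credences (thirder answers).
import random

random.seed(0)
awakenings = []  # one entry per awakening event: (coin, day)
for _ in range(100_000):
    coin = random.choice(["heads", "tails"])
    awakenings.append((coin, "Mon"))      # always awakened on Monday
    if coin == "tails":
        awakenings.append((coin, "Tue"))  # tails adds a Tuesday awakening

# Credence on waking with no information: fraction of awakenings with tails.
p_tails = sum(c == "tails" for c, _ in awakenings) / len(awakenings)  # ~2/3

# Credence after being told "it is Monday": restrict to Monday awakenings.
mondays = [c for c, d in awakenings if d == "Mon"]
p_tails_monday = sum(c == "tails" for c in mondays) / len(mondays)    # ~1/2
```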
Imagine Sleeping Beauty is a lesswrong reader. If Sleeping Beauty is unfamiliar with the simulation argument, and someone asks her about her credence of being in a simulation, she probably answers something like 0.0000000001% (all numbers for illustrative purposes only). If someone shows her the simulation argument, she increases to 1%. If she stumbles across this blog entry, she increases her credence to 2%, and adds some credence to the additional hypothesis that it may be a simulation with an afterlife. If she sees that a ton of people get really interested in this idea, and start raising funds to build simulations in the future and to lobby governments both for great AI safeguards and for regulation of future simulations, she raises her credence to 4%. If she lives through the AI superintelligence explosion and simulations are being built, but not yet turned on, her credence increases to 20%. If humanity turns them on, it increases to 50%. If there are trillions of them, she increases her credence to 60%. If 99% of simulations survive their own run-ins with artificial superintelligence and produce their own simulations, she increases her credence to 95%.
2) This set of simulations does not need to recreate the current world or any specific people in it. That is a different idea that is not necessary to this argument. As written the argument is premised on the idea of creating fully unique people. The point would be to increase our credence that we are functionally identical in type to the unique individuals in the simulation. This is done by creating ignorance or uncertainty in simulations, so that the majority of people similarly situated, in a world which may or may not be in a simulation, are in fact in a simulation. This should, in our ignorance, increase our credence that we are in a simulation. The point is about how we self-locate, as discussed in the original article by Bostrom. It is a short 12-page read, and if you have not read it yet, I would encourage it: http://simulation-argument.com/simulation.html. The point about past loved ones I was making was to bring up the possibility that the simulations could be designed to transfer people to a separate after-life simulation where they could be reunited after dying in the first part of the simulation. This was not about trying to create something for us to upload ourselves into, along with attempted replicas of dead loved ones. This staying-in-one simulation through two phases, a short life, and relatively long afterlife, also has the advantage of circumventing the teletransportation paradox as “all of the person" can be moved into the afterlife part of the simulation.
Simulations Map: what is the most probable type of the simulation in which we live?
There is a chance that we may be living in a computer simulation created by an AI or a future super-civilization. The goal of the simulations map is to give an overview of all possible types of simulation. It will help us to estimate the distribution of the many possible simulations, along with their measure and probability. This in turn will help us to estimate the probability that we are in a simulation and, if we are, what kind of simulation it is and how it could end.
Simulation argument
The simulation map is based on Bostrom’s simulation argument. Bostrom showed that “at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation”. http://www.simulation-argument.com/simulation.html
The third proposition is the strongest one, because (1) requires that not only human civilization but almost all other technological civilizations go extinct before they can begin simulations, since non-human civilizations could model human ones and vice versa. This makes (1) an extremely strong universal conjecture and therefore very unlikely to be true. It requires that all possible civilizations kill themselves before they create AI, but we can hardly even imagine such a universal course of events. If destruction is down to dangerous physical experiments, some civilizations may live in universes with different physics; if it is down to bioweapons, some civilizations would have enough control to prevent them.
In the same way, (2) requires that all super-civilizations with AI will refrain from creating simulations, which is unlikely.
Feasibly there could be some kind of universal physical law against the creation of simulations, but such a law seems impossible, because some kinds of simulation already exist, for example human dreaming. During dreaming, very precise simulations of the real world are created (which can’t be distinguished from the real world from within; that is why lucid dreams are so rare). So we could conclude that, after small genetic manipulations, it would be possible to create a brain 10 times more capable of creating dreams than an ordinary human brain. Such a brain could be used for the creation of simulations, and strong AI will surely find more effective ways of doing it. So simulations are technically possible (and qualia are no problem for them, as we have qualia in dreams).
Any future strong AI (regardless of whether it is FAI or UFAI) should run at least several million simulations in order to solve the Fermi paradox and to calculate the probability of the appearance of other AIs on other planets, and their possible and most typical goal systems. AI needs this in order to calculate the probability of meeting other AIs in the Universe and the possible consequences of such meetings.
As a result, the a priori estimate of me being in a simulation is very high, possibly 1,000,000 to 1. The best chance of lowering this estimate is to find some flaw in the argument; possible flaws are discussed below.
Most abundant classes of simulations
If we live in a simulation, we are going to be interested in knowing the kind of simulation it is. Probably we belong to the most abundant class of simulations, and to find it we need a map of all possible simulations; an attempt to create one is presented here.
There are two main reasons for simulation domination: goal and price. Some goals require the creation of very large number of simulations, so such simulations will dominate. Also cheaper and simpler simulations are more likely to be abundant.
Eitan_Zohar suggested http://lesswrong.com/r/discussion/lw/mh6/you_are_mostly_a_simulation/ that an FAI will deliberately create an almost infinite number of simulations in order to dominate the total landscape and to ensure that most people find themselves inside FAI-controlled simulations, which will be better for them, as in such simulations unbearable suffering can be excluded. (If an almost infinite number of FAIs exist in an infinite world, no single one of them can change the landscape of simulation distribution, because its share of all simulations would be infinitely small. So we would need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can’t say that it is impossible, but it may be difficult.)
Another possible largest subset of simulations is the one created for leisure and for the education of some kind of high level beings.
The cheapest simulations are simple, low-resolution me-simulations (one real actor, with the rest of the world around him like a backdrop), similar to human dreams. I assume here that simulations are distributed according to the same power law as planets, cars and many other things: smaller and cheaper ones are more abundant.
Simulations could also be laid on one another in so-called Matryoshka simulations where one simulated civilization is simulating other civilizations. The lowest level of any Matryoshka system will be the most populated. If it is a Matryoshka simulation, which consists of historical simulations, the simulation levels in it will be in descending time order, for example the 24th century civilization models the 23rd century one, which in turn models the 22nd century one, which itself models the 21st century simulation. A simulation in a Matryoshka will end on the level where creation of the next level is impossible. The beginning of 21st century simulations will be the most abundant class in Matryoshka simulations (similar to our time period.)
Argument against simulation theory
There are several possible objections to the simulation argument, but I find none of them strong enough to refute it.
1. Measure
The idea of measure was introduced to quantify the extent of the existence of something, mainly in quantum universe theories. While we don’t know how to actually measure “the measure”, the idea is based on intuition that different observers have different powers of existence, and as a result I could find myself to be one of them with a different probability. For example, if we have three functional copies of me, one of them is the real person, another is a hi-res simulation and the third one is low-res simulation, are my chances of being each of them equal (1/3)?
The “measure” concept is the most fragile element of all simulation arguments. It is based mostly on the idea that all copies have equal measure. But perhaps measure also depends on the energy of the calculations. If we have a computer which uses 10 watts of energy to calculate an observer, it may be presented as two parallel computers using five watts each. These observers may be divided again until we reach the minimum amount of energy required for calculation, which could be called a “Planck observer”. In this case our initial 10-watt computer would be equal to, for example, one billion Planck observers.
And here we see a great difference in the case of simulations, because simulation creators have to spend less energy on calculations (otherwise it would be easier to run real-world experiments). In that case such simulations will have a lower measure. But if the total number of simulations is large, the total measure of all simulations may still be higher than the measure of real worlds. And if most real worlds end in global catastrophe, the result would be an even higher proportion of real worlds, which could outweigh simulations after all.
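A toy version of this energy-weighted tally, with every number an illustrative placeholder:

```python
# Toy arithmetic for the energy-weighted measure idea; all numbers are
# hypothetical placeholders, not estimates the post commits to.
watts_real_brain = 20.0     # rough power draw of a biological brain
watts_sim_observer = 0.001  # hypothetical cheap simulated observer
n_real_worlds = 1           # real worlds containing "me"
n_simulations = 1_000_000   # simulations containing "me"

measure_real = n_real_worlds * watts_real_brain
measure_sims = n_simulations * watts_sim_observer

# Each simulated copy carries 20,000x less measure, yet a million of them
# still outweigh the single real world by a factor of 50.
ratio = measure_sims / measure_real  # 50.0
```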
2. Universal AI catastrophe
One possible universal global catastrophe could happen where a civilization develops an AI-overlord, but any AI will meet some kind of unresolvable math and philosophical problems which will terminate it at its early stages, before it can create many simulations. See an overview of this type of problem in my map “AI failures level”.
3. Universal ethics
Another idea is that all AIs converge to some kind of ethics and decision theory which prevent them from creating simulations, or they create p-zombie simulations only. I am skeptical about that.
4. Infinity problems
If everything possible exists, or if the universe is infinite (which are equivalent statements), the proportion between two infinite sets is meaningless. We could overcome this objection using the idea of a mathematical limit: if we take a bigger universe and longer periods of time, simulations become more and more abundant within them.
But in all cases, in the infinite universe any world exists an infinite number of times, and this means that my copies exist in real worlds an infinite number of times, regardless of whether I am in a simulation or not.
5. Non-uniform measure over Universe (actuality)
Contemporary physics is based on the idea that everything that exists, exists in equal sense, meaning that the Sun and very remote stars have the same measure of existence, even in casually separated regions of the universe. But if our region of space-time is somehow more real, it may change simulation distribution which will favor real worlds.
6. Flux universe
The same copies of me exist in many different real and simulated worlds. In simple form it means that the notion that “I am in one specific world” is meaningless, but the distribution of different interpretations of the world is reflected in the probabilities of different events.
E.g. the higher the chance that I am in a simulation, the bigger the probability that I will experience some kind of miracles during my lifetime. (Many miracles would almost prove that you are in a simulation, like flying does in dreams.) But here correlation is not causation.
The stronger version of the same principle implies that I exist in many different worlds at once, and that I could manipulate the probability of finding myself in a given set of possible worlds, basically by forgetting who I am and thereby becoming equal to a larger set of observers. This may work without any new physics; it only requires changing the number of similar observers, and if such observers are Turing computer programs, they could manipulate their own numbers quite easily.
Higher levels of flux theory do require new physics or at least quantum mechanics in a many worlds interpretation. In it different interpretations of the world outside of the observer could interact with each other or experience some kind of interference.
See further discussion about a flux universe here: http://lesswrong.com/lw/mgd/the_consequences_of_dust_theory/
7. Boltzmann brains outweigh simulations
It may turn out that BBs outweigh both real worlds and simulations. This may not be a problem from a planning point of view because most BBs correspond to some real copies of me.
But if we take this approach to solving the BB problem, we will have to use it for the simulation problem as well, meaning: "I am not in a simulation, because for any simulation there exists a real world with the same 'me'." This is counterintuitive.
Simulation and global risks
Simulations may be switched off or may simulate worlds which are near global catastrophe. Such worlds may be of special interest for future AI because they help to model the Fermi paradox and they are good for use as games.
Miracles in simulations
The map also has blocks about types of simulation hosts, about many level simulations, plus ethics and miracles in simulations.
The main point about simulation is that it disturbs the random distribution of observers. In the real world I would find myself in mediocre situations, but simulations are more focused on special events and miracles (think about movies, dreams and novels). The more interesting my life is, the less chance that it is real.
If we are in a simulation we should expect more global risks, strange events and miracles, so being in a simulation changes the probabilities we should assign to different occurrences.
This map is parallel to the Doomsday argument map.
The estimates given in the map of the numbers of different types of simulation, or of the required flops, are more like placeholders and may be several orders of magnitude too high or too low.
I think that this map is rather preliminary and its main conclusions may be updated many times.
The pdf of the map is here, and jpg is below.
Previous posts with maps:
A map: AI failures modes and levels
A Roadmap: How to Survive the End of the Universe
A map: Typology of human extinction risks
Roadmap: Plan of Action to Prevent Human Extinction Risks

Simulation argument meets decision theory
Person X stands in front of a sophisticated computer playing the decision game Y which allows for the following options: either press the button "sim" or "not sim". If she presses "sim", the computer will simulate X*_1, X*_2, ..., X*_1000 which are a thousand identical copies of X. All of them will face the game Y* which - from the standpoint of each X* - is indistinguishable from Y. But the simulated computers in the games Y* don't run simulations. Additionally, we know that if X presses "sim" she receives a utility of 1, but "not sim" would only lead to 0.9. If X*_i (for i=1,2,3..1000) presses "sim" she receives 0.2, with "not sim" 0.1. For each agent it is true that she does not gain anything from the utility of another agent despite the fact she and the other agents are identical! Since all the agents are identical egoists facing the apparently same situation, all of them will take the same action.
Now the game starts. We face a computer and know all the above. We don't know whether we are X or any of the X*'s, should we now press "sim" or "not sim"?
EDIT: It seems to me that "identical" agents with "independent" utility functions were a clumsy set-up for the above question, especially since one can interpret it as a contradiction. Hence, it might be better to switch to identical egoists, where each agent cares only about the money she herself receives (linear monetary utility). If X presses "sim" she will be given $10 (else $9) at the end of the game; each X* who presses "sim" receives $2 (else $1), respectively. Each agent in the game wants to maximize the expected monetary value they themselves will hold in their own hand after the game. So, intrinsically, they don't care how much money the other copies make.
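One way to run the numbers for the monetary version, under the contestable assumptions that all 1001 identical agents choose alike and that, when simulations exist, you are equally likely to be any one of the agents:

```python
# Expected dollars in the edited (monetary) game, under a uniform
# self-locating prior over the 1001 possible agents.
from fractions import Fraction

n_sims = 1000

# If everyone presses "sim": one real X gets $10, each of 1000 X*'s gets $2.
ev_sim = Fraction(1, n_sims + 1) * 10 + Fraction(n_sims, n_sims + 1) * 2
# ev_sim = 2010/1001, a little over $2.

# If everyone presses "not sim": the simulations are never run, so the only
# agent who exists is the real X, who takes $9 with certainty.
ev_not_sim = Fraction(9)
```

On this accounting "not sim" wins, which is exactly where the tension with the self-locating setup comes from; a different decision theory may count the payoffs differently.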
To spice things up: What if the simulation will only happen a year later? Are we then able to "choose" which year it is?
How realistic would AI-engineered chatbots be?
I'm interested in how easy it would be to simulate just one present-day person's life rather than an entire planet's worth of people. Currently our chatbots are bad enough that we could not populate the world with NPCs; the lone human would quickly figure out that everyone else was... different, duller, incomprehensibly stupid, etc.
But what if the chatbots were designed by a superintelligent AI?
If a superintelligent AI was simulating my entire life from birth, would it be able to do it (for reasonably low computational resources cost, i.e. less than the cost of simulating another person) without simulating any other people in sufficient detail that they would be people?
I suspect that the answer is yes. If the answer is "maybe" or "no," I would very much like to hear tips on how to tell whether someone is an ideal chatbot.
Thoughts?
EDIT: In the comments most people are asking me to clarify what I mean by various things. By popular demand:
I interact with people in more ways than just textual communication. I also hear them, and see them move about. So when I speak of chatbots I don't mean bots that can do nothing but chat. I mean an algorithm governing the behavior of a simulated entire-human-body, that is nowhere near the complexity of a brain. (Modern chatbots are algorithms governing the behavior of a simulated human-hands-typing-on-keyboard, that are nowhere near the complexity of a brain.)
When I spoke of "simulating any other people in sufficient detail that they would be people" I didn't mean to launch us into a philosophical discussion of consciousness or personhood. I take it to be common ground among all of us here that very simple algorithms, such as modern chatbots, are not people. By contrast, many of us think that a simulated human brain would be a person. Assuming a simulated human brain would be a person, but a simple chatbot-like algorithm would not, my question is: Would any algorithm complex enough to fool me into thinking it was a person over the course of repeated interactions actually be a person? Or could all the bodies around me be governed by algorithms which are too simple to be people?
I realize that we have no consensus on how complex an algorithm needs to be to be a person. That's OK. I'm hoping that this conversation can answer my questions anyhow; I'm expecting answers along the lines of
(A) "For a program only a few orders of magnitude more complicated than current chatbots, you could be reliably fooled your whole life" or
(B) "Any program capable of fooling you would either draw from massive databases of pre-planned responses, which would be impractical, or actually simulate human-like reasoning."
These answers wouldn't settle the question for good without a theory of personhood, but that's OK with me, these answers would be plenty good enough.
Does the universe contain a friendly artificial superintelligence?
First and foremost, let's give a definition of "friendly artificial superintelligence" (from now on, FASI). A FASI is a computer system that:
- is capable of deducing, reasoning and solving problems
- helps human progress, is incapable of harming anybody and does not allow anybody to come to any kind of harm
- is so much more intelligent than any human that it has developed molecular nanotechnology by itself, making it de facto omnipotent
In order to find an answer to this question, we must check whether our observations on the universe match with what we would observe if the universe did, indeed, contain a FASI.
If, somewhere in another solar system, an alien civilization had already developed a FASI, it would be reasonable to presume that, sooner or later, one or more members of that civilization would ask it to make them omnipotent. The FASI, being friendly by definition, would not refuse. [1]
It would also make sure that anybody who becomes omnipotent is rendered incapable of harming anybody and incapable of allowing anybody to come to any kind of harm.
The new omnipotent beings would do the same for anybody who asked them to become omnipotent. Before long, they would use their omnipotence to leave their own solar system, meet other intelligent civilizations and make them omnipotent too.
In short, the ultimate consequence of the appearance of a FASI would be that every intelligent being in the universe would become omnipotent. This does not match with our observations, so we must conclude that a FASI does not exist anywhere in the universe.
[1] We must assume that a FASI would not just reply "You silly creature, becoming omnipotent is not in your best interest so I will not make you omnipotent because I know better" (or an equivalent thereof). If we did, we would implicitly consider the absence of omnipotent beings as evidence for the presence of a FASI. This would force us to consider the eventual presence of omnipotent beings as evidence for the absence of a FASI, which would not make sense.
Based on this conclusion, let's try to answer another question: is our universe a computer simulation?
According to Nick Bostrom, if even just one civilization in the universe
- survives long enough to enter a posthuman stage, and
- is interested in creating "ancestor simulations"
then the probability that we are living in one is extremely high.
However, if a civilization did actually reach a posthuman stage where it can create ancestor simulations, it would also be advanced enough to create a FASI.
If a FASI existed in such a universe, the cheapest way it would have to make anybody else omnipotent would be to create a universe simulation that does not differ substantially from our universe, except for the presence of an omnipotent simulacrum of the individual who asked to be made omnipotent in our universe. Every subsequent request of omnipotence would result in another simulation being created, containing one more omnipotent being. Any eventual simulation where those beings are not omnipotent would be deactivated: keeping it running would lead to the existence of a universe where a request of omnipotence has not been granted, which would go against the modus operandi of the FASI.
Thus, any simulation of a universe containing even just one friendly omnipotent being would always progress to a state where every intelligent being is omnipotent. Again, this does not match with our observations. Since we had already concluded that a FASI does not exist in our universe, we must come to the further conclusion that our universe is not a computer simulation.
Baseline of my opinion on LW topics
To avoid repeatedly saying the same things, I'd like to state my opinions on a few topics I expect to be relevant to my future posts here.
You can take it as a baseline or reference for these topics. I do not plan to go into any detail here. I will not state all my reasons or sources. You may ask for separate posts if you are interested. This is really only to provide a context for my comments and posts elsewhere.
If you google me you may find some of my old (but not that off the mark) posts about these positions, e.g. here:
http://grault.net/adjunct/index.cgi?GunnarZarncke/MyWorldView
Now my position on LW topics.
The Simulation Argument and The Great Filter
On The Simulation Argument I definitely go for
"(1) the human species is very likely to go extinct before reaching a “posthuman” stage"
Correspondingly on The Great Filter I go for failure to reach
"9. Colonization explosion".
This is not because I think that humanity is going to self-annihilate soon (though this is a possibility). Instead I hope that humanity will sooner or later come to terms with its planet. My utopia could be like that of the Pacifists (a short story in Analog 5).
Why? Because of essential complexity limits.
This falls into the same range as "It is too expensive to spread physically throughout the galaxy". I know that negative proofs about engineering are notoriously wrong - but that is currently my best guess. Put simply: the low-hanging fruit has been taken. I have empirical evidence on multiple levels to support this view.
Correspondingly there is no singularity because progress is not limited by raw thinking speed but by effective aggregate thinking speed and physical feedback.
What could prove me wrong?
If a serious discussion tore my well-prepared arguments and evidence to shreds (quite possible).
At the very high end a singularity might be possible if a way could be found to simulate physics faster than physics itself.
AI
Basically I don't have the least problem with artificial intelligence or artificial emotion being possible. Philosophical note: I don't care on what substrate my consciousness runs. Maybe I am simulated.
I think strong AI is quite possible and maybe not that far away.
But I also don't think that this will bring the singularity because of the complexity limits mentioned above. Strong AI will speed up some cognitive tasks with compound interest - but only until the physical feedback level is reached. Or a social feedback level is reached if AI should be designed to be so.
One temporary dystopia that I see is that cognitive tasks are outsourced to AI and a new round of unemployment drives humans into depression.
I have attempted two AI designs of my own:
- A simplified layered model of the brain; deep learning applied to free inputs (I cancelled this when it became clear that it was too simple and low-level and thus computationally inefficient)
- A nested semantic graph approach with propagation of symbol patterns representing thought (only concept; not realized)
I'd really like to try a 'synthesis' of these where microstructure-of-cognition like activation patterns of multiple deep learning networks are combined with a specialized language and pragmatics structure acquisition model a la Unsupervised learning of natural languages. See my opinion on cognition below for more in this line.
What could prove me wrong?
On the low success end if it takes longer than I think it would take me given unlimited funding.
On the high end if I'm wrong with the complexity limits mentioned above.
Conquering space
Humanity might succeed at leaving the planet but at high costs.
With leaving the planet I mean permanently independent of earth, but not necessarily leaving the solar system any time soon (speculating on that is beyond my confidence interval).
I think it more likely that life leaves the planet - that can be
- artificial intelligence with a robotic body - think of a Curiosity rover 2.0 (most likely).
- intelligent life-forms bred for life in space - think of magpies, which are already smart, small, fast-reproducing and capable of 3D navigation.
- actual humans in suitable protective environments with small autonomous biospheres, harvesting asteroids or Mars.
- 'cyborgs' - humans altered or bred to better deal with certain problems in space, like radiation and the lack of gravity.
- other - including misc ideas from science fiction (least likely or latest).
For most of these (esp. those depending on breeding) I'd estimate a time-range of a few thousand years.
What could prove me wrong?
If I'm wrong on the singularity aspect too.
If I'm wrong on the timeline I will likely be long dead in any case, except for the first option (AI with a robotic body), which I expect to see in my lifetime.
Cognitive Base of Rationality, Vagueness, Foundations of Math
How can we as humans create meaning out of noise?
How can we know truth? How does it come about that we know that 'snow is white' when snow is white?
Cognitive neuroscience and artificial learning seem to point toward two aspects:
Fuzzy learning aspect
Correlated patterns of internal and external perception are recognized (detected) via multiple specialized layered neural nets (basically). This yields qualia like 'spoon', 'fear', 'running', 'hot', 'near', 'I'. These are basically symbols, but they are vague with respect to meaning because they result from a recognition process that optimizes for matching, not for correctness or uniqueness.
Semantic learning aspect
The semantic part builds upon the qualia: instead of acting directly on them (as animals normally do), it finds patterns in their activation that are not related to immediate perception or action, but at most to memory. These may form new qualia/symbols.
The use of these patterns is that they allow capturing concepts which are detached from reality (detached insofar as they do not need a stimulus connected in any way to perception).
Concepts like ('cry-sound' 'fear') or ('digitalis' 'time-forward' 'heartache') or ('snow' 'white') or - and that is probably the domain of humans: (('one' 'successor') 'two') or (('I' 'happy') ('I' 'think')).
Concepts
The interesting thing is that learning operates on these concepts just as it does on the ordinary neural nets. Thus concepts that are reinforced by positive feedback will stabilize, and with them the qualia they derive from (if any) will also stabilize.
For certain pure concepts the usability of the concept hinges not on any external factor (like "how does this help me survive") but on social feedback about structure and the process of the formation of the concepts themselves.
And this is where we arrive at such concepts as 'truth' or 'proposition'.
These are no longer vague - but not because they are represented differently in the brain than other concepts but because they stabilize toward maximized validity (that is stability due to absence of external factors possibly with a speed-up due to social pressure to stabilize). I have written elsewhere that everything that derives its utility not from some external use but from internal consistency could be called math.
And that is why math is so hard for some: if you never gained a sufficient core of self-consistent stabilized concepts, and/or the usefulness derives not from internal consistency but from external usefulness (the "teacher's password"), then it will just not scale to more concepts. (The reason science works at all is that science values internal consistency so highly; there is little more dangerous to science than allowing other incentives.)
I really hope that this all makes sense. I haven't summarized this for quite some time.
A few random links that may provide some context:
http://www.blutner.de/NeuralNets/ (this is about the AI context we are talking about)
http://www.blutner.de/NeuralNets/Texts/mod_comp_by_dyn_bin_synf.pdf (research applicable to the above in particular)
http://c2.com/cgi/wiki?LeibnizianDefinitionOfConsciousness (funny description of levels of consciousness)
http://c2.com/cgi/wiki?FuzzyAndSymbolicLearning (old post by me)
http://grault.net/adjunct/index.cgi?VaguesDependingOnVagues (ditto)
Note: Details about the modelling of the semantic part are mostly in my head.
What could prove me wrong?
Well, "wrong" is too hard a word here. This is just my model and it is not really that concrete. Probably a longer discussion with someone more experienced with AI than I am (and there should be many here) might suffice to rip this apart (provided that I'd find time to prepare my model suitably).
God and Religion
I wasn't indoctrinated as a child. My truly loving mother is a baptized Christian who lives her faith without being sanctimonious. She always hoped that I would receive my epiphany. My father has a scientifically influenced personal Christian belief.
I can imagine a God consistent with science on the one hand and on the other hand with free will, soul, afterlife, trinity and the bible (understood as a mix of non-literal word of God and history tale).
I mean, it is not that hard if you can imagine a timeless (simulation of the) universe. If you are God and have whatever plan for earth but empathize with your creations, then it is not hard to add a few more constraints to certain aggregates called existences or 'person-lives'. Constraints that realize free will in the sense of 'not subject to the whole universe-plan satisfaction algorithm'.
Surely not more difficult than consistent time-travel.
And souls and afterlife should be easy to envision for any science fiction reader familiar with super intelligences.
But why? Occam's razor applies.
There could be a God. And his promise could be real. And it could be a story seeded by an empathizing God - but also a 'human' God with his own inconsistencies and moods.
But it also could be that this is all a fairy tale run amok in human brains searching for explanations where there are none. A mass delusion. A fixated meme.
Which is right? It is difficult to put probabilities on stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.
I can't say that I wait for my epiphany. I know too well that my brain will happily find patterns when I let it. But I have encouraged others to pray for me.
My epiphanies - the aha feelings of clarity that I did experience - have all been about deeply connected patterns building on other such patterns building on reliable facts mostly scientific in nature.
But I haven't lost my morality. It has deepened and widened. I have become even more tolerant (I hope).
So if God does, against all odds, exist, I hope he will understand my doubts, weigh my good deeds and forgive me. You could tag me a godless Christian.
What could prove me wrong?
On the atheist side I could be moved a bit further by additional evidence of religion being a human artifact.
On the theist side there are two possible avenues:
- If I had an unsearched-for epiphany - a real one, where I can't say I was hallucinating; e.g. a major consistent insight or a proof of God.
- If I were convinced that the singularity is possible. This is because I'd need to update toward being in a simulation, as per Simulation Argument option 3. In that case, the next most likely explanation for all this god business is actually some imperfect being running the simulation.
Thus I'd like to close with this corollary to the simulation argument:
Arguments for the singularity are also (weak) arguments for theism.
DRAFT:Ethical Zombies - A Post On Reality-Fluid
I came up with this after watching a science fiction film, which shall remain nameless due to spoilers, where the protagonist is briefly in a similar situation to the scenario at the end. I'm not sure how original it is, but I certainly don't recall seeing anything like it before.
Imagine, for simplicity, a purely selfish agent. Call her Alice. Alice is an expected utility maximizer, and she gains utility from eating cakes. Omega appears and offers her a deal - they will flip a fair coin, and give Alice three cakes if it comes up heads. If it comes up tails, they will take one cake away from her stockpile. Alice runs the numbers, determines that the expected utility is positive, and accepts the deal. Just another day in the life of a perfectly truthful superintelligence offering inexplicable choices.
The next day, Omega returns. This time, they offer a slightly different deal - instead of flipping a coin, they will perfectly simulate Alice once. This copy will live out her life just as she would have done in reality - except that she will be given three cakes. The original Alice, however, receives nothing. She reasons that this is equivalent to the last deal, and accepts.
(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should she give for receiving cake?)
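Alice's arithmetic in the two deals can be sketched as follows. The 1/2 subjective odds in the second deal are the thought experiment's assumption (one original, one exact copy, subjectively indistinguishable); note that the two expectations are both positive rather than strictly equal, so "equivalent" here means "accepted by the same kind of reasoning":

```python
# Deal 1: fair coin; +3 cakes on heads, -1 cake on tails.
eu_coin = 0.5 * 3 + 0.5 * (-1)   # expected gain: 1.0 cake

# Deal 2: one exact simulation of Alice is run and given 3 cakes; the
# original gets nothing.  Between simulation start and cake delivery,
# Alice's subjective odds of being the cake-receiving copy are 1/2.
eu_sim = 0.5 * 3 + 0.5 * 0       # subjective expected gain: 1.5 cakes

# Both expectations are positive, so a cake-maximizer accepts either deal.
print(eu_coin, eu_sim)
```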
Imagine a second agent, Bob, who gets utility from Alice getting utility. One day, Omega shows up and offers to flip a fair coin. If it comes up heads, they will give Alice - who knows nothing of this - three cakes. If it comes up tails, they will take one cake from her stockpile. He reasons as Alice did and accepts.
Guess what? The next day, Omega returns, offering to simulate Alice and give her you-know-what (hint: it's cakes.) Bob reasons just as Alice did in the second paragraph there and accepts the bargain.
Humans value each other's utility. Most notably, we value our lives, and we value each other not being tortured. If we simulate someone a billion times, and switch off one simulation, this is equivalent to risking their life at odds of 1:1,000,000,000. If we simulate someone and torture one of the simulations, this is equivalent to risking a one-in-a-billion chance of them being tortured. Such risks are often acceptable, if enough utility is gained by success. We often risk our own lives at worse odds.
If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific - an individual's private harem or torture chamber or hunting ground - then the people in this simulation *are not real*. Their needs and desires are worth, not nothing, but far less than the merest whims of those who are Really Real. They are, in effect, zombies - not quite p-zombies, since they are conscious, but e-zombies - reasoning, intelligent beings that can talk and scream and beg for mercy but *do not matter*.
My mind rebels at the notion that such a thing might exist, even in theory, and yet ... if it were a similarly tiny *chance*, for similar reward, I would shut up and multiply and take it. This could be simply scope insensitivity, or some instinctual dislike of tribe members declaring themselves superior.
Well, there it is! The weirdest of Weirdtopias, I should think. Have I missed some obvious flaw? Have I made some sort of technical error? This is a draft, so criticisms will likely be incorporated into the final product (if indeed someone doesn't disprove it entirely.)
The substrate
If we're part of a simulation, how likely is it that whatever it's running on is using the same sort of atoms we've discovered?
I think the answer is it's very unlikely. The closest resemblance I find plausible is that our atoms are simplified versions of the substrate atoms, and I wouldn't count on even that much.
I'm pretty sure that a simulation has to be smaller in some sense than the universe that's running it, which means that it has fewer things or simpler things (these might be equivalent because more simplicity means fewer sub-components in things) than the home universe.
You might do a meticulous job of simulating your matter in a simulation, but I suggest that you'd only bother in a small and/or specialized simulation, and even if you did, there's a reasonable chance that you don't have a full understanding of your own physics.
When I look at the range of human-created simulations (dreams, daydreams, fiction, games, art, scientific, political, and commercial simulations) and contemplate that we've probably only explored a small part of the possibilities for simulation, it seems vanishingly unlikely that we're in an ancestor simulation.
When I first came up with the question of the nature of our possible substrate, I didn't think there was a way to get a grip on it at all, but at least now I think I've got some clarity about the difficulties.
So onwards to practical questions. Is there any conceivable way of telling whether we're in a simulation and if so, learning something about its nature? Is it worth trying to get out of the Big Box?
Edited to add: I should think that being a simulation is an existential risk.
SIA, conditional probability and Jaan Tallinn's simulation tree
If you're going to use anthropic probability, use the self indication assumption (SIA) - it's by far the most sensible way of doing things.
Now, I am of the strong belief that probabilities in anthropic problems (such as the Sleeping Beauty problem) are not meaningful - only your decisions matter. And you can have different probability theories but still always reach the same decisions, if you have different theories as to who bears the responsibility for the actions of your copies, or how much you value them - see anthropic decision theory (ADT).
But that's a minority position - most people still use anthropic probabilities, so it's worth taking a more thorough look at what SIA does and doesn't tell you about population sizes and conditional probability.
This post will aim to clarify some issues with SIA, especially concerning Jaan Tallinn's simulation-tree model which he presented in exquisite story format at the recent singularity summit. I'll be assuming basic familiarity with SIA, and will run away screaming from any questions concerning infinity. SIA fears infinity (in a shameless self-plug, I'll mention that anthropic decision theory runs into far fewer problems with infinities; for instance a bounded utility function is a sufficient - but not necessary - condition to ensure that ADT gives you sensible answers even with infinitely many copies).
But onwards and upwards with SIA! To not-quite-infinity and below!
SIA does not (directly) predict large populations
One error people often make with SIA is to assume that it predicts a large population. It doesn't - at least not directly. What SIA predicts is that there will be a large number of agents that are subjectively indistinguishable from you. You can call these subjectively indistinguishable agents the "minimal reference class" - it is a great advantage of SIA that it will continue to make sense for any reference class you choose (as long as it contains the minimal reference class).
The SIA's impact on the total population is indirect: if the size of the total population is correlated with that of the minimal reference class, SIA will predict a large population. A correlation is not implausible: for instance, if there are a lot of humans around, then the probability that one of them is you is much larger. If there are a lot of intelligent life forms around, then the chance that humans exist is higher, and so on.
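As a toy illustration of this indirect effect (the hypothesis names, priors, and copy counts below are made up for the example, and all infinity worries are sidestepped), SIA reweights each hypothesis by the number of observers in the minimal reference class it contains:

```python
# Toy SIA update: weight each hypothesis by the number of observers it
# contains who are subjectively indistinguishable from you, then
# renormalize.  All numbers are hypothetical.
priors = {"small world": 0.5, "big world": 0.5}
copies = {"small world": 1, "big world": 100}

weights = {h: priors[h] * copies[h] for h in priors}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

print(posterior)  # big world ends up with ~0.99 of the probability mass
```

The large *total* population only follows to the extent that it is correlated with the copy counts used in the reweighting, which is the indirectness being emphasized above.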
In most cases, we don't run into problems with assuming that SIA predicts large populations. But we have to bear in mind that the effect is indirect, and the effect can and does break down in many cases. For instance imagine that you knew you had evolved on some planet, but for some odd reason, didn't know whether your planet had a ring system or not. You have managed to figure out that the evolution of life on planets with ring systems is independent of the evolution of life on planets without. Since you don't know which situation you're in, SIA instructs you to increase the probability of life on ringed and on non-ringed planets (so far, so good - SIA is predicting generally larger populations).
And then one day you look up at the sky and see:
If we live in a simulation, what does that imply?
If we live in a simulation, what does that imply about the world of our simulators and our relationship to them? [1]
Here are some proposals, often mutually contradictory, none stated with anything near certainty.
1. The simulators are much like us, or at least are our post-human descendants.
Drawing on some of the key points in Bostrom's Simulation Argument:
Today, we often simulate our human ancestors' lives, e.g., in Civilization. Our descendants will likely want to simulate their own ancestors, namely us, and they may have much-improved simulation technology which supports sentience. So, our simulators are likely to be our (post-)human descendants.
2. Our world is smaller than we think.
Robin Hanson has suggested that computational power would be dedicated to running only a small part of the simulation - the part which we are in - in full detail. Other parts of the simulation will be run at a lower resolution. Everything outside our vicinity, e.g., outside our solar system, will be calculated planetarium-style, and not from the level of particle physics.
(I wonder what it would be like if we are in the low-res part of the simulation.)
3. The world is likely to end soon.
There is no a priori reason for a base-level (unsimulated) universe to flicker out of existence. In fact, it would merely add complexity to the laws of physics for time to suddenly end with no particular cause.
But a simulator may decide that they have learned all they wanted to from their simulation; or that acausal trade has been completed; or that they are bored with the game; and that continuing the simulation is not worth the computational cost.
The previous point was that the world is spatially smaller than we think. This point is that the world is temporally smaller than we hope.
4. We are living in a particularly interesting part of our universe.
The small part of the universe which the simulators would choose to focus on is the part which is interesting or entertaining to them. Today's video games are mostly about war, fighting, or various other challenges to be overcome. Some, like the Sims, are about everyday life, but even in those, the players want to see something interesting.
So, you are likely to be playing a pivotal role in our (simulated) world. Moreover, if you want to continue to be simulated, do what you can to make a difference in the world, or at least to do something entertaining.
5. Our simulators want to trade with us.
One reason to simulate another agent is to trade acausally with it.
Alexander Kruel's blog entry and this LW Wiki entry summarize the concept. In brief, agent P simulates or otherwise analyzes agent Q and learns that Q does something that P wants, and also learns that the symmetrical statement is true: Q can simulate or analyze P well enough to know that P likewise does something that Q wants.
This process may involve simulating the other agent for the purpose of learning its expected behavior. Moreover, for P to "pay" Q, it may well run Q -- i.e., simulate it.
So, if we live in a simulation, maybe our simulators are going to get some benefit from us humans, and we from them. (The latter will occur when we simulate these other intelligences).
In Jaan Tallinn's talk at Singularity Summit 2012, he gave an anthropic argument for our apparently unusual position at the cusp of the Singularity. If post-Singularity superintelligences across causally disconnected parts of the multiverse are trying to communicate with each other by mutual simulation, perhaps for the purpose of acausal trade, then they might simulate the entire history of the universe from the Big Bang to find the other superintelligences in mindspace. A depth-first search across all histories would spend most of the time where we are, right before the point at which superintelligences emerge.
6. We are part of a multiverse.
Today, we run many simulations in our world. Similarly, says Bostrom, our descendants are likely to be running many simulations of our universe: a multiverse.
Max Tegmark's Level IV multiverse theory is motivated partly by the idea that, following Occam's Razor, simpler universes are more likely. Treating the multiverse as a computation, among the most likely computations is one that generates all possible strings/programs/universes.
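The Occam-style weighting being appealed to can be sketched as a toy complexity prior. This is an illustration of the idea, not Tegmark's actual formalism, and the bit-lengths assigned to the candidate universe-programs are entirely made up:

```python
# Toy complexity prior: weight each candidate universe-program by
# 2^(-description length in bits), then normalize.  Shorter (simpler)
# programs dominate the measure.  Bit-lengths here are hypothetical.
lengths = {"U1": 10, "U2": 20, "U3": 30}

weights = {u: 2.0 ** -length for u, length in lengths.items()}
Z = sum(weights.values())
prior = {u: w / Z for u, w in weights.items()}

print(prior)  # U1 (the simplest program) gets almost all the mass
```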
The idea of the universe/multiverse as computation is still philosophically controversial. But if we live in a simulation, then our universe is indeed a computation, and Tegmark's Level IV argument applies.
However, this is very different from the ancestor simulation described in points 1-3 above. That argument relies on the lower conditional complexity of the scenario -- we and our descendants are similar enough that if one exists, the other is not too improbable.
A brute-force universal simulation is an abstract possibility that specifies no role for simulators. In addition, if the simulators are anything like us, not enough computational power exists, nor would it be the most interesting possibility.
But we don't know what computational power is available to our simulators, what their goals are, nor even if their universe is constrained by laws of physics remotely similar to ours.
7. [Added] The simulations are stacked.
If we are in a simulation, then (a) at least one universe, ours, is a simulation; and (b) at least one world includes a simulation with sentience. This gives some evidence that being simulated or being a simulator are not too unusual. The stack may lead all the way down to the basement world, the ultimate unsimulated simulator; or else the stack may go down forever; or [H/T Pentashagon], all universes may be considered to be simulating all others.
Are there any other conclusions about our world that we can reach from the idea that we live in a simulation?
[1] If there is a stack of simulators, with one world simulating another, the "basement level" is the world in which the stack bottoms out, the one which is simulating and not simulated. This uses a metaphor in which the simulators are below the simulated. An alternative metaphor, in which the simulators "look down" on the simulated, is also used.
Completeness of simulations
Suppose I have an exact simulation of a human. Feeling ambitious, I decide to print out a GLUT (giant lookup table) of the action this human will take in every circumstance; while the simulation of course works at the level of quarks, I have a different program that takes lists of quark movements and translates them into a suitably high-level language, such as "Confronted with the evidence that his wife is also his mother, the subject will blind himself and abdicate".
Now, one possible situation is "The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X". Is it clear that an accurate X exists? In high-level language, I would say that, whatever the prediction is, the subject may choose to do something different. More formally we can notice that the simulation is now self-referential: Part of the result is to be used as the input to the calculation, and therefore affects the result. It is not obvious to me that a self-consistent solution necessarily exists.
It seems to me that this is somehow reminiscent of the Halting Problem, and can perhaps be reduced to it. That is, it may be possible to show that an algorithm that can produce X for arbitrary Turing machines would also be a Halting Oracle. If so, this seems to say something interesting about limitations on what a simulation can do, but I'm not sure exactly what.
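The self-reference worry can be made concrete with a toy "contrarian" subject: if the subject's policy is to consult the prediction and do otherwise, no consistent GLUT entry X exists, by the same diagonal move used in halting-problem proofs. A minimal sketch, with hypothetical actions "A" and "B":

```python
def contrarian_subject(prediction: str) -> str:
    """A subject who, told the GLUT predicts `prediction`,
    deliberately does the other thing."""
    return "B" if prediction == "A" else "A"

# Search for a self-consistent entry X: one where telling the
# subject "you will do X" actually results in X.
consistent = [x for x in ("A", "B") if contrarian_subject(x) == x]
# consistent == []  -- no fixed point exists for this subject
```

This doesn't settle the general question (most subjects are not perfect contrarians, and a GLUT could in principle decline to include self-referential entries), but it shows that at least some subjects admit no accurate self-revealing prediction.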
[Link] SMBC on choosing your simulations carefully
I'm increasingly impressed by the power of Zach Weiner's comic to demonstrate in a few images why hard problems are hard. It would be a vast task, but perhaps it would be useful to create an index of such problem-demonstrating comics to add to the Wiki, giving us something to point newbies at which would be less intimidating than formal Sequence postings. I get the impression that a common hurdle is just to get people to accept that problems of AI (and simulation, ethics, what have you) are actually difficult.
Evidence For Simulation
The recent article on overcomingbias suggesting the Fermi paradox might be evidence that our universe is indeed a simulation prompted me to wonder how one would go about gathering evidence for or against the hypothesis that we are living in a simulation. The Fermi paradox isn't very good evidence, but there are much more promising places to look. Of course, there is no surefire way to learn that one isn't in a simulation; nothing prevents a simulation from perfectly mimicking a non-simulation universe. But there are certainly features of the universe that seem more likely if the universe were simulated, and their presence or absence thus gives us evidence about whether we are in a simulation.
In particular, the strategy suggested here is to consider the kind of fingerprints we might leave if we were writing a massive simulation. Of course the simulating creatures/processes may not labor under the same kind of restrictions we do in writing simulations (their laws of physics might support fundamentally different computational devices and any intelligence behind such a simulation might be totally alien). However, it's certainly reasonable to think we might be simulated by creatures like us so it's worth checking for the kinds of fingerprints we might leave in a simulation.
Computational Fingerprints
Simulations we write face several limitations on the computational power they can bring to bear, and these limitations give rise to mitigation strategies we might observe in our own universe. The limitations include the following:
- Lack of access to non-computable oracles (except perhaps physical randomness).
While theoretically nothing prevents the laws of physics from providing non-computable oracles (e.g., some experiment one could perform that discerns whether a given Turing machine halts; halting problem = 0'), all indications suggest our universe does not provide such oracles. Thus our simulations are limited to modeling computable behavior, and we would have no way to simulate a universe whose fundamental laws of physics were non-computable (except perhaps via randomness).
It's tempting to conclude that the fact that our universe apparently follows computable laws of physics (modulo randomness) provides evidence that we are a simulation, but this isn't entirely clear. After all, had our laws of physics provided access to non-computable oracles, we would presumably not expect simulations to be so limited either. Still, this is probably weak evidence for simulation, as such non-computable behavior might well exist in the simulating universe yet be practically infeasible to consult from computer hardware. Thus our probability for seeing non-computable behavior should be higher conditional on not being a simulation than conditional on being a simulation.
- Limited ability to access true random sources.
The most compelling evidence of simulation we could discover would be the signature of a pseudo-random number generator in the outcomes of 'random' QM events. Of course, as above, the simulating computers might have easy access to truly random number generators, but it's also reasonable that they lack practical access to true random numbers at a sufficient rate.
- Limited computational resources.
We always want our simulations to run faster and require less resources but we are limited by the power of our hardware. In response we often resort to less accurate approximations when possible or otherwise engineer our simulation to require less computational resources. This might appear in a simulated universe in several ways.
- Computationally easy basic laws of physics. For instance the underlying linearity of QM (absent collapse) is evidence we are living in a simulation as such computations have a low computational complexity. Another interesting piece of evidence would be discovering that an efficient global algorithm could be used that generates/uses collapse to speed computation.
- Limited detail/minimal feature size. An efficient simulation would be as coarse-grained as possible while still yielding the desired behavior. Since we don't know what the desired behavior might be for a universe simulation, it's hard to evaluate this criterion, but the indications that space is fundamentally quantized (rather than allowing structure at arbitrarily small scales) seem to be evidence for simulation.
- Substitution of approximate calculations for expensive calculations in certain circumstances. Weak evidence could be gained here merely by observing that the large-scale behavior of the universe admits efficient, accurate approximations, but the key piece of data supporting a simulated universe would be observations revealing that sometimes the universe behaves as if it were following a less accurate approximation rather than as fundamental physics prescribes. For instance, discovering that distant galaxies behave as a classical approximation rather than as a quantum system would be extremely strong evidence.
- Ability to screen off or delay calculations in regions that aren't of interest. A simulation would be more efficient if it allowed regions of less interest to go unsimulated, or at least delayed their simulation, without impacting the regions of greater interest. While the finite speed of light arguably provides a way to delay simulation of regions of lesser interest, QM's preservation of information and space-like quantum correlations may outweigh the finite speed of light on this point, tipping it towards non-simulation.
- Limitations on precision.
Arguably this is just a variant of the limited-detail point above, but it raises some different considerations. As there, we would expect a simulation to bottom out rather than provide arbitrarily fine-grained structure, but in simulations precision issues also bring with them questions of stability. If the laws of physics turn out to be relatively unaffected by tiny computational errors, that would push in the direction of simulation; if they are chaotic and quickly spiral out of control in response to these errors, it would push against simulation. Since linear systems are virtually always stable, the linearity of QM is yet again evidence for simulation.
- Limitations on sequential processing power.
We find that finite speed limits on communication and other barriers prevent building arbitrarily fast single-core processors. Thus we would expect a simulation to be more likely to admit highly parallel algorithms. While the finite speed of light provides some level of parallelizability (no need to share all information with all processing units immediately), space-like QM correlations push against parallelizability. However, given the linearity of QM, the most efficient parallel algorithms might well be semi-global algorithms like those used for various kinds of matrix manipulation. It would be most interesting if collapse could be shown to be a requirement/byproduct of such efficient algorithms.
- Imperfect hardware.
Finally there is the hope one might discover something like the Pentium division bug in the behavior of the universe. Similarly one might hope to discover unexplained correlations in deviations from normal behavior, e.g., correlations that occur at evenly spaced locations relative to some frame of reference, arising from transient errors in certain pieces of hardware.
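As a toy version of the pseudo-randomness check suggested above, one can run standard statistical tests over a bit sequence; here a NIST-style monobit frequency test, which an obviously patterned source fails while a decent generator passes. Real QM outcome data would of course need far subtler tests, and a competent simulator's PRNG would likely pass all of them:

```python
import math
import random

def monobit_statistic(bits):
    """NIST-style frequency test: for truly random bits, the
    statistic s = |2 * #ones - n| / sqrt(n) should be small
    (roughly below 2-3); large values indicate bias."""
    n = len(bits)
    return abs(2 * sum(bits) - n) / math.sqrt(n)

rng = random.Random(0)                          # a decent PRNG, fixed seed
good = [rng.getrandbits(1) for _ in range(10_000)]
biased = [1] * 6_000 + [0] * 4_000              # an obviously skewed source

# good yields a small statistic; biased yields exactly 20.0
```

Detecting a well-designed PRNG (as opposed to a merely biased source) is far harder: cryptographic generators are explicitly built so that no feasible statistical test distinguishes them from true randomness.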
Software Fingerprints
Another type of fingerprint might be left by the conceptual/organizational difficulties occurring in the software design process. For instance we might find fingerprints by looking for:
- Outright errors, particularly hard to spot/identify errors like race conditions or the like. Such errors might allow spillover information about other parts of the software design that would let us distinguish them from non-simulation physical effects. For instance, if the error occurs in a pattern that is reminiscent of a loop a simulation might execute but doesn't correspond to any plausible physical law it would be good evidence that it was truly an error.
- Conceptual simplicity in design. We might expect (as we apparently see) an easily drawn line between initial conditions and the rules of the simulation, rather than physical laws which can't be so easily divided up, e.g., laws that take the form of global constraint satisfaction. Relatively short laws, rather than a long regress into greater and greater complexity at higher and higher energies, would also be expected in a simulation (though this would be very weak evidence).
- Evidence of concrete representations. Even though, mathematically, relativity favors no reference frame over another, it is often conceptually and computationally desirable to compute in a particular reference frame (just as it's often best to do linear algebra on a computer relative to an explicit basis). One might see evidence of such an effect in differences in the precision of results or in rounding artifacts (like those seen in resized images).
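Precision and stability concerns like these are easy to demonstrate: a contracting linear map damps a tiny perturbation (standing in for a rounding error), while a chaotic map amplifies it by many orders of magnitude. A minimal sketch:

```python
def max_gap(f, x0, eps, steps):
    """Iterate f from x0 and from x0 + eps in parallel,
    tracking the largest divergence between the two orbits."""
    a, b, gap = x0, x0 + eps, 0.0
    for _ in range(steps):
        a, b = f(a), f(b)
        gap = max(gap, abs(a - b))
    return gap

linear = lambda x: 0.9 * x + 0.05         # stable: contracts errors each step
logistic = lambda x: 4.0 * x * (1.0 - x)  # chaotic: roughly doubles errors each step

stable_gap = max_gap(linear, 0.3, 1e-10, 50)     # never exceeds the initial 1e-10
chaotic_gap = max_gap(logistic, 0.3, 1e-10, 50)  # grows by many orders of magnitude
```

In simulation terms: a universe governed by the linear map tolerates finite-precision hardware, while one governed by the logistic map would visibly diverge from its ideal evolution within a few dozen steps.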
Design Fingerprints
This category is so difficult I'm not really going to say much about it, but I'm including it for completeness. If our universe is a simulation created by some intentional creature, we might expect to see certain features receive more attention than others. Maybe we would see some really odd jiggering of initial conditions just to make sure certain events of interest occurred, but without a good idea of what is of interest it is hard to see how this could be detected. Another potential way for design fingerprints to show up is in the ease of data collection from the simulation. One might expect a simulation to make it particularly easy to sift out the interesting information from the rest of the data, but again we don't have any idea what "interesting" might be.
Other Fingerprints
I'm hoping the readers will suggest some interesting new ideas as to what one might look for if one was serious about gathering evidence about whether we are in a simulation or not.
Does quantum mechanics make simulations negligible?
I've written a prior post about how I think that the Everett branching factor of reality dominates that of any plausible simulation, whether the latter is run on a Von Neumann machine, on a quantum machine, or on some hybrid; and thus the probability and utility weight that should be assigned to simulations in general is negligible. I also argued that the fact that we live in an apparently quantum-branching world could be construed as weak anthropic evidence for this idea. My prior post was down-modded into oblivion for reasons that are not relevant here (style, etc.) If I were to replace this text you're reading with a version of that idea which was more fully-argued, but still stylistically-neutral (unlike my prior post), would people be interested?