The mathematical universe: the map that is the territory
This post is for people who are not familiar with the Level IV Multiverse/Ultimate Ensemble/Mathematical Universe Hypothesis, people who are not convinced that there’s any reason to believe it, and people to whom it appears believable or useful but not satisfactory as an actual explanation for anything.
I’ve found that while it’s fairly easy to understand what this idea asserts, it is more difficult to get to the point where it actually seems convincing and intuitively correct, unless you independently invent it for yourself. Doing so can be fun, but for those who want to skip that part, I’ve tried to write this post as a kind of intuition pump (of the variety, I hope, that deserves the non-derogatory use of that term) with the goal of leading you along the same line of thinking that I followed, but in a few minutes rather than a few years.
Once upon a time, I was reading some Wikipedia articles on physics, clicking links aimlessly, when I happened upon a page then titled “Ultimate Ensemble”. It described a multiverse of all internally consistent mathematical structures, thereby allegedly explaining our own universe — it’s mathematically possible, so it exists along with every other possible structure.
Now, I was certainly interested in the question it was attempting to answer. It’s one that most young aspiring deep thinkers (and many very successful deep thinkers) end up at eventually: why is there a universe at all? A friend of mine calls himself an agnostic because, he says, “Who created God?” and “What caused the Big Bang?” are the same question. Of course, they’re not quite the same, but the fundamental point is valid: although nothing happened “before” the Big Bang (as a more naïve version of this query might ask), saying that it caused the universe to exist still requires us to explain what brought about the laws and circumstances allowing the Big Bang to happen. There are some hypotheses that try to explain this universe in terms of a more general multiverse, but all of them seemed to lead to another question: “Okay, fine, then what caused that to be the case?”
The Ultimate Ensemble, although interesting, looked like yet another one of those non-explanations to me. “Alright, so every mathematical structure ‘exists’. Why? Where? If there are all these mathematical structures floating around in some multiverse, what are the laws of this multiverse, and what caused those laws? What’s the evidence for it?” It seemed like every explanation would lead to an infinite regress of multiverses to explain, or a stop-sign like “God did it” or “it just exists because it exists and that’s the end of it” (I’ve seen that from several atheists trying to convince themselves or others that this is a non-issue) or “science can never know what lies beyond this point” or “here be dragons”. This was deeply vexing to my 15-year-old self, and after a completely secular upbringing, I suffered a mild bout of spirituality over the following year or so. Fortunately I made a full recovery, but I gave in and decided that Stephen Hawking was right that “Why does the universe bother to exist?” would remain permanently unanswerable.
Last year, I found myself thinking about this question again — but only after unexpectedly making my way back to it while thinking about the idea of an AI being conscious. And the path I took actually suggested an answer this time. As I worked on writing it up, I noticed that it sounded familiar. After I remembered what that Wikipedia article was called, and after actually looking up Max Tegmark’s papers on it this time, I confirmed that it was indeed the same essential idea. (Don’t you hate/love it when you find out that your big amazing groundbreaking idea has already been advocated by someone smarter and more important than you? It’s so disappointing/validating.) One of the papers briefly explores reasoning similar to that which I had accidentally used to convince myself of it, but it’s an argument that I haven’t seen emphasized in any discussions of it hereabouts, and it’s one which seems inescapable with no assumptions outside of ordinary materialism and reductionism.
I shall now get to the point.
Suppose this universe is a computer simulation.
It isn’t, but we’ll imagine for the next few paragraphs that it is.
Suppose everything we see — and all of the Many Worlds that we don’t see, and everything in this World that is too distant for us to ever see — is the product of a precise simulation being performed by some amazing supercomputer. Let’s call it the Grand Order Deducer, or G.O.D. for short.
Actually, let’s say that G.O.D. is not an amazing supercomputer, but a 386 with an insanely large hard drive. Obviously, we wouldn’t notice the slowness from the inside, any more than the characters in a movie would notice that your DVD player is being choppy.
Clearly, then, if G.O.D. were turned off for a billion years, and then reactivated at the point where it left off, we wouldn’t notice anything either. How about if the state of the simulation were copied to a very different kind of computer (say, a prototypical tape-based universal Turing machine, or an immortal person doing lambda calculus operations by hand) and continued? If our universe’s physics turns out to be fundamentally time-symmetrical, then if G.O.D. started from the end of the universe and simulated backwards, would we experience our lives backwards? If it saved a copy of the universe at the beginning of your life and repeatedly ran the simulation from there until your death (if any), would it mean anything to say that you are experiencing your life multiple times? If the state of the simulation were copied onto a million identical computers, and continued thence on all of them, would we feel a million times as real (or would there be a million “more” of each of us in any meaningful sense), and would the implausibly human-like agent who hypothetically created this simulation feel a million times more culpable for any suffering taking place within it? It would be hard to argue that any of this should be the case without resorting to some truly ridiculous metaphysics. Every computer is calculating the same thing, even the ones that don’t seem plausible as universe-containers under our intuitions about what a simulation would look like.
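The pause-and-resume intuition can be sketched concretely. Here is a toy "universe" (a one-dimensional cellular automaton, which I'm using purely as a stand-in for any deterministic physics): because each state is a pure function of the previous one, pausing, resuming, or copying the computation to a different machine cannot change anything observable from the inside.

```python
# Toy deterministic universe: elementary cellular automaton rule 110 on a ring.
# The trajectory depends only on the initial state, not on when or where
# (or on how many machines) the steps are computed.

def step(state):
    """One tick: each cell's next value is looked up in rule 110's table."""
    n = len(state)
    return tuple(
        (110 >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def run(state, ticks):
    for _ in range(ticks):
        state = step(state)
    return state

initial = (0, 0, 0, 1, 0, 0, 0, 0)

# One machine running 10 ticks straight...
straight = run(initial, 10)

# ...versus "turning G.O.D. off" after 4 ticks and resuming a billion
# years later, possibly on a completely different computer.
paused = run(run(initial, 4), 6)

assert straight == paused  # indistinguishable from the inside
```

Nothing about this depends on the automaton being simple; the same argument goes through for any physics expressible as a deterministic state-transition function.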
But what, then, makes us feel real? What if, after G.O.D. has been turned off for a billion years… it stays off? If we can feel real while being simulated by a hundred computers, and no less real while being simulated by one computer, how about if we’re being simulated by zero computers? More concretely, and perhaps more disturbingly, if torturing a million identical simulations is the same thing as torturing one (I’d argue that it is), is torturing one the same as torturing zero?
2 + 2 will always be 4 whether somebody is computing it or not. (No Platonism is necessary here; only the Simple Truth that taking the string “2 + 2” and applying certain rules of inference to it always results in the string “4”.) Similarly, even if this universe is nothing but a hypothetical, not being computed by anyone, not existing in anything larger, there are certain things that are necessarily true about the hypothetical, including facts about the subjective mental states of us self-aware substructures. Nothing magical happens when a simulation runs. Most of us agree that consciousness is probably purely mechanistic, and that we could therefore create a conscious AI or emulate an uploaded brain, and that it would be just as conscious as we are; that if we could simulate Descartes, we’d hear him make the usual arguments about the duality of the material body and the extra-physical mind, and if we could simulate Chalmers, he’d come to the same familiar nonsensical conclusions about qualia and zombies. But the fact remains that it’s just a computer doing what computers always do, with no special EXIST or FEEL opcodes added to its instruction set. If a mind, from the outside, can be a self-contained and timeless structure, and the full structure can be calculated (within given finite limits) from some initial state by a normal computer, then its consciousness is a property of the structure itself, not of the computer or the program — the program is not causing it, it’s just letting someone notice it. So deep runs the dualist intuition that even when we have reduced spirits and consciousness and free will to normal physical causality, there’s still sometimes a tendency to think as though turning on a sufficiently advanced calculator causes something to mysteriously blink into existence or awareness, when all it is doing is reporting facts about some very large numbers that would be true one way or the other.
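To make the "rules of inference applied to strings" point literal, here is a minimal sketch (my own illustration, not anything from Tegmark): numbers as unary Peano strings, where "SS0" means 2, and addition as nothing but textual substitution. The fact that "2 + 2" rewrites to "4" is a fact about the strings and the rules, regardless of who applies them, or whether anyone does.

```python
def rewrite(expr):
    """Apply two addition rules as pure string substitutions until neither fires:
         x+Sy -> S(x+y)    (move an S from the right addend to the front)
         x+0  -> x         (adding zero changes nothing)
    """
    while True:
        if expr.endswith("+0"):
            expr = expr[:-2]
        elif "+S" in expr:
            i = expr.index("+S")
            expr = "S" + expr[:i] + "+" + expr[i + 2:]
        else:
            return expr

two = "SS0"  # the unary Peano numeral for 2
assert rewrite(two + "+" + two) == "SSSS0"  # i.e. 2 + 2 = 4
```

The computer here is incidental; the rewrite rules determine the answer, and a monk with a quill (or nobody at all) changes nothing about which string the rules lead to.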
G.O.D. is doing the very same thing, just with numbers that are even more unimaginably huge: a universe instead of an individual mind. The distilled and generalized argument is thus: If we can feel real inside a non-magical computer simulation, then our feeling of reality must be due to necessary properties of the information being computed, because such properties do not exist in the abstract process of computing, and those properties will not cease to be true about the underlying information if the simulation is stopped or is never created in the first place. This is identically true about every other possible reality.
By Occam’s Razor, I conclude that if a universe can exist in this way — as one giant subjunctive — then we must accept that that is how and why our universe does exist; even if we are being simulated on a computer in some outer universe, or if we were created by an actual deity (which, from a non-intervening deity’s perspective, would probably look about the same as running a simulation anyway), or if there is some other explanation for this particular universe, we now see that this would not actually be the cause of our existence. Existence is what mathematical possibility feels like from the inside. Turn off G.O.D., and we’ll go on with our lives, not noticing that anything has changed. Because the only thing that has changed is that the people who were running the simulation won’t get to find out what happens next.
Tegmark has described this as a “theory of everything”. I’d discourage that use, merely as a matter of consistency with common usage; conventionally, “theory of everything” refers to the underlying laws that define the regularities of this universe, and whatever heroic physicists eventually discover those laws should retain the honour of having their theory known as such. As a metaphysical theory (less arbitrary than conventional metaphysics, but metaphysical nonetheless), this does not fit that description; it gives us almost no useful information about our own universe. It is a theory of more than everything, and a theory of nothing (in the same way that a program that prints out every possible bit string will eventually print out any given piece of information, while its actual information content is near zero).
That said, this theory and the argument I presented are not entirely free of implications about and practical applications within this particular universe. Here are some of them.

The simulation argument is dissolved. At this point, the idea of “living in a computer simulation” is meaningless. Simulating a universe should properly be viewed as more comparable to looking in a window than to building the house. (Most of Robin Hanson’s thoughts about metaethics and self-preservation within a simulation are similarly dissolved, since a reality doesn’t pop out of existence when people stop simulating it; the only relevant part is the section about “If our descendants sometimes play parts in their simulations”, and this doesn’t seem to be the case anyway.)

As I mentioned, this significantly changes the dynamics of thought experiments like The AI In A Box Boxes You. Torturing a thousand identical simulations is the same as torturing one, and torturing one is the same as torturing zero — if and only if the structure within the simulation(s) is not being causally influenced by any ongoing circumstances in this universe. If it is, then the two realities are entangled to the point where they are essentially different parts of the same structure, and it is worth thinking about how much we should care about each one.

That leads me to a more general point about metaethics: although there are other realities out there where there are very sentient and very intelligent beings experiencing suffering literally 3^^^3 times greater than anything we can imagine, and others where there are beings experiencing bliss in the same proportions, we must resist the urge to feel (respectively) sorry for them or jealous of them. Your intuitive sense of what “really exists” should remain limited to this universe.
Perhaps this caution only applies to me in the first place. I am, admittedly, the only person I know who has to leave the room when people are playing The Sims because I can’t stand to watch those little nowhere-near-sentient structures being put in torturous or even merely uncomfortable situations, so maybe it’s only my own empathy that’s a bit overactive. However, when we’re talking about sentient, sapient structures, we really do need to think about where to draw the line. I’d draw it at the point where a simulation starts to interact with this universe, in both directions — of course it will affect our universe if we are observing the simulation and reacting based on it, but we should only start caring about its feelings if we have designed the software such that it is affected by our actions beyond our choices for its initial conditions. That’s what I referred to as entanglement earlier. Once there’s that bilateral feedback, it’s no longer one structure observing another; they are both part of the same reality. (Take that as a practicality, not as a statement of an alleged metaphysical law. We’re trying to eliminate the need for metaphysical laws here.)

This theory results in a variation on the Boltzmann brain scenario: regardless of this universe’s ability to create Boltzmann brains, there’s also the possibility (and, therefore, necessity) of disembodied mind-structures hallucinating their own realities. My best guess as to the solution to this problem (if we’re to take it as a problem) is that any mind-structure that contains enough information to reliably hallucinate an orderly, mechanistic reality must be isomorphic to that reality.

It raises other strange anthropic questions too. The one that comes most immediately to my mind is this: If every possible mathematical structure is real in the same way that this universe is, then isn’t there only an infinitesimal probability that this universe will turn out to be ruled entirely by simple regularities? Given a universe governed by a small set of uniformly applied laws, there will be an infinity of universes governed by the same laws plus arbitrary variations, possibly affecting the internally observable structure only at very specific points in space and time. This results in a sort of anti–Occam’s Razor (Macco’s Rozar? Occam’s Beard Tonic?), where the larger the irregularity, the more likely it becomes over the space of all possible universes, because there are that many more ways for it to happen. (For example, there is a universe — actually, a huge number (possibly infinity) of barely different universes — identical to this one, except that, for no reason explainable by the usual laws of quantum mechanics, but not ruled out as a logically possible law unto itself, your head will explode as soon as you finish reading this post. I hope that possibility does not dissuade you from doing so, but I accept no responsibility if this does turn out to be one of those universes.)
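The counting behind this anti-Occam worry can be made explicit with a toy model (the specific numbers here are arbitrary illustrations, not anything derived from physics): fix one simple rule set, then count universes that obey it everywhere except at some number of arbitrary exception points.

```python
# Toy count: one fully lawful universe versus its slightly-broken variants.
# POINTS and VARIANTS are made-up stand-ins for "places an exception could
# occur" and "distinct arbitrary exceptions per place".

from math import comb

POINTS = 100    # possible spacetime locations for a lawless exception
VARIANTS = 5    # distinct arbitrary exceptions possible at each location

def universes_with_k_exceptions(k):
    """Number of universes sharing the simple laws plus exactly k exceptions."""
    return comb(POINTS, k) * VARIANTS ** k

assert universes_with_k_exceptions(0) == 1       # the one fully regular universe
assert universes_with_k_exceptions(1) == 500     # already 500 one-glitch variants
assert universes_with_k_exceptions(2) == 123750  # and the count keeps exploding
```

By raw counting, the regular universe is vastly outnumbered, which is exactly why the question has to turn on measure (discussed below) rather than on cardinality.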
From the outside, this would appear to be a non-issue. Consider people in some other reality simulating this one (assuming that this one really is as simple and consistent as it appears). By some extraordinary luck, they’ve zoomed in on this exact planet in this exact Everett branch, and from there, they’ve even zoomed in on me writing this. “What does this guy mean,” they ask themselves, “wondering what the probability is that this particular reality will have the laws that it does? It’s not like anyone had any choice in the matter.” Yes, there will be versions of the universe that really are that orderly, and if this is one of them, then that would be why this universe’s version of me is wondering about the apparent astronomical unlikelihood of being in this universe. But from the inside, this seems terribly unsatisfying — if these slightly-irregular universes are possible, then we don’t know for sure what kind we’re in, so should we expect to find such irregularities? Perhaps such exceptions would constitute such a departure from quantum mechanics that they couldn’t be made consistent with it even as a special case. (Tegmark makes a related point in one paper: the hypothesis “does certainly not imply that all imaginable universes exist. We humans can imagine many things that are mathematically undefined and hence do not correspond to mathematical structures.”) Or perhaps the infinity of universes where such irregularities exist in places we’ll never observe (outside our light cone, in vast areas of empty space, etc.) is a much larger infinity (in probability density, not cardinality) than that of those universes where any of those irregularities will actually affect us.
I’m leaning toward that explanation, but maybe a simpler one is that I’m reasoning about this incorrectly — after a conversation about this with Justin Shovelain, I’m reconsidering whether it’s actually correct to use probabilities to reason about an infinite space of apparently equally likely items — or maybe this reasoning is correct and it observationally refutes the hypothesis. We’ll see.
One last comment: some people I’ve discussed this with have actually taken it as a reductio ad absurdum against the idea that a being within a simulation could feel real. As we say, one person’s modus ponens is another person’s modus tollens. Since the conclusion I’m arguing for is merely unusual, not inconsistent (as far as I can tell), that takes out the absurdum; therefore, in the apparent absence of any specific alternatives at all, you can weigh the probability of this hypothesis against the stand-in alternatives that there is something extra-physical about our own existence, something non-computable about consciousness, or something metaphysically significant about processes equivalent to universal computation (or any other alternatives that I’ve neglected to think of).
Finally, as I mentioned, the main goal of this post was to serve as an intuition pump for the Level IV Multiverse idea (and to point out some of the rationality-related questions it raises, so we’ll have something apropos to discuss here), not to explore it in depth. So if this was your first exposure to it, you should probably read Max Tegmark’s The Mathematical Universe now.
Comments (118)
Upvoted for quality of exposition even though I disagree with the conclusion. :)
Ditto. I'm sure you and others realize that given its length your post is highly scrutinizable, but if you replace "must" with "may", I did like your statement in italics as food for thought:
... though my thoughts will have to eat it awhile longer before I digest and judge it as an actual idea :)
I'm already a bit late to be commenting here, but I would suggest that people who are interested in further thought experiments along these lines read Egan's /Permutation City/. I don't totally agree with how the author answers his own thought experiments, but the experiments themselves are very closely related to the topic of the post.
While I agree this is an excellent and relevant book, I would like to warn that it's a horror novel, and you may want to take that into consideration. As a horror novel it is outstanding, I couldn't put it down. But don't expect a pleasant reading experience.
Horror Novel? I thought it was a lighthearted piece of metaphysics. Surely it's more 'Star Trek' than 'Misery'.
I would have lent this to any child capable of understanding it without the slightest worry until I read this comment. Now I'd worry very slightly. Am I a bad person who shouldn't lend books to children?
Yeah it's a horror novel.
Read Ultimate Meta Mega Crossover for a fix fic that tries to repair the damage.
I dunno, I'm not good at modeling children who can read Egan. Peer's storyline in particular was horrifying, especially his eternal suicide at the end of Part 1.
As a teenager, I don't see how it can be horror. I thought it was inspiring, honestly.
Teenagers are immune to cosmic horror. Well-known fact.
One of the major messages I got from it was thus: even if you never physically die, eventually over eternity one of these two things will happen — 1) your utility function will drift enough, and your memories fade and change enough, that you will be unrecognizable as the person you were. You as you are now will effectively be dead. 2) you will successfully resist change, and will be stuck thinking and doing the same things endlessly in a loop. You might as well be dead.
Even if we defeat death, living long enough is essentially death anyway. You are doomed, there is no escape.
Isn't that just due to the author's inability to imagine/describe a mind capable of becoming increasingly and unboundedly complex without losing its identity? Why take it as an inevitable conclusion?
Well sure, but that wouldn't make for a good horror novel. :)
Hmm, the "map that is the territory"... The essence of ata's post is a mirror of Daniel Dennett's argument against p-zombies.
You take too much confidence in a confusing idea. What does it tell, exactly (in anticipated experience, moral evaluation of decisions, etc.)? What does the original question about the cause for the world mean?
When considering the simulation scenario, you are discussing the world in which a simulation is running. In that world, it does matter how many simulations are running, and what they are running: the stuff of the world is organized in certain patterns, the patterns that implement the simulations. For any nontrivial preference about states of that world, different ways of organizing its matter will be valued differently, and any (hypothetical) change in content of the simulations or their number is a change to the content of that world, change that will be either preferred or not.
Now, with the perfectly autonomous simulations, the inhabitants of those simulations have no knowledge of the content of the outside world, they don't even have any knowledge about whether their own simulation exists (they are not able to pick out that particular hypothesis). But this lack of knowledge is a separate issue from moral weight of possible states of the (outside) world, or their own decisions possibly affecting the state of the outside world by getting observed. High level of uncertainty doesn't rob specific situations of distinctions in value.
The simulation scenario challenges the definition of the "world" you are considering, as it's fair game for decisionmaking. If the computation inside a simulation can affect the outside world, and an agent inside a simulation has preference detailed enough to distinguish between possible states of the outside world, then it'll try to act in such a way as to make the outside worlds better. This is acausal control, the question of which world the agent "really" lives in is meaningless in this context, the agent is controlling all possibilities defined in terms of the agent and relevant to its preference, including the ones containing it inside a simulated "apparent" world.
There are other fundamental problems with this stuff, of course, like inability to say what "all possible mathematical structures" is. You won't find this defined mathematically, it's all confusion and analogy.
Thanks for writing this, I enjoyed learning more about the mathematical universe theory, and it at least seems I've gained a useful intuition. I'm not sure the intuition has practical uses, and I also think I disagree on what can be drawn from this.
When the simulation is stopped, later to be restarted, we experience no difference because nothing is changed in our simulation by the progression of time in the "larger universe". Whether the "next step" of the simulation takes place, in that larger world, a femtosecond later, a year later, or a million years later, the state of our simulation rests in perfect stasis until the next step is taken. And as soon and to the extent the "next steps" are taken, our simulated existence continues. I think we agree on that.
Now what if the simulation is never continued, and the "next steps" are never taken? Just as we don't expect to experience anything while not being simulated in the aforementioned gap in our simulation, we shouldn't expect to experience anything when our simulation is turned off. You use the term "notice", saying that we won't notice either a gap in simulation or not being simulated at all. In the simulation gap, "not noticing" feels like we're talking about some unobservable detail that, assuming no concern about the larger world, can be ignored, which is true. For the case of the simulation being turned off, it's true we won't "notice" the simulation has turned off, but we won't "notice" ANYTHING past that point, ever again.
I've gained the intuition that consciousness or sentience is a property of a mathematical structure, but it still seems like computation is necessary in order to create/fully specify that mathematical structure. At least with my possibly naive view of physics/metaphysics/math, even for us to conceptualize a mathematical structure is to compute it in some way.
While I don't agree with your intuition pump, I'm not convinced that the mathematical universe idea is false, and my point is primarily regarding that pump. Also in reference to a computer simulating us backwards, I can't immediately guess what COMPUTING backwards would do, but I recall the idea that "when" time flows backwards, entropy decreases, light leaves our eyes and returns to light sources, and our neurons remove connections based on evidence as we "unlearn". I.e. it is only possible to experience life in one direction, though it might flow in both.
Indeed! I haven't read Tegmark's paper yet (but I've resolved to do so now), and I sorta came to the same conclusion myself a while back. Here are some more thoughtexperiments that can be shortcuts to the idea you have described:
Zhuangzi didn't know whether he had dreamt he was a butterfly, or whether he was a butterfly dreaming of being Zhuangzi. The two points of view are equally privileged.
Tweedledee and Tweedledum told Alice that she was in the Red King's dream, and if he were to wake up, she would cease to exist. Contrariwise, Alice would instead keep on existing, independent of whether anyone dreamed about her.
It is conceivable that if you converted the digits of pi into a digital video stream, it would show you images from another world. Would our world still be privileged with existence over that other one? (I find that Stephen Wolfram's A New Kind Of Science not only provides a pretty illustration of what simulating a universe could look like, but encourages one to think of the thing being simulated as an abstract mathematical object.)
What if there are two universes, and each one contains a simulation of the other? (Running very slowly, of course, and for an infinite amount of time.) Is one privileged over the other?
It's a nice metaphysical idea, but I don't see how it can tell us about the simulation argument or Boltzmann brains or anthropics if we don't have a prior probability distribution over mathematical structures. I'll see if Tegmark's paper addresses this!
When I was a kid I was led to the Ultimate Ensemble hypothesis from thinking about time travel. I imagined a scenario where I had just witnessed a time traveler disappearing into the past to go kill my parents before I was born. I concluded that nothing would 'happen' to me. After all, my future was determined by the current state of the universe, and that state certainly existed at that moment, since I had just observed it.
From there I hit on the Everything-Is-Math theory, which I lovingly called the Eris-Polygenesis Effect.
I think Eliezer's Anthropic Trilemma is relevant to this discussion.
The fundamental problems don't seem to require an Ultimate Ensemble to appear. They already exist in our current universe, most likely.
Universe as a counterfactual or hypothetical object seems intuitive, but claiming that it doesn't make a difference if you torture two or one identical beings in identical ways seems to be a bit problematic here. I think there are two ways to understand this, given two ways you can interpret the word "existence", as in, location within a given universe, and this metaphysical "does universe exist at all" sense, and I also think that ethical worth of happiness or sadness can add up in both ways without this argument failing.
http://www.nickbostrom.com/papers/experience.pdf
This paper by Nick Bostrom seems a bit relevant, though I'm not implying it supports my claims.
The Tegmark Level IV multiverse idea is quite elegant, and this was a fairly well-thought-out post, although with some problems, most of which Mallah already identified. One more thing though:
This has to decrease the probability of this theory, as stated, being correct, as we appear to live in a fairly regular universe. However, it could be possible that simpler systems do have more measure even in a level IV multiverse.
Also, c
Did this post get cut off?
Probab
I guess it did. Sadly, I don't remember what I intended to say.
I do believe it is a reference to the universal constant c, the speed of light.
Ata, there are many things wrong with your ideas. (Hopefully saying that doesn't put you off  you want to become less wrong, I assume.)
I have indeed independently invented the "all math exists" idea myself, years ago. I used to believe it was almost certainly true. I have since downgraded its likelihood of being true to more like 50% as it has intractable problems.
Of course. (Well, it might be better to say that multiple guys like you are experiencing their own lives.)
Otherwise, it would mean that all types of people have the same measure of consciousness. Thus, for example, the fact that people who seem to be products of Darwinian evolution are more numerous would mean nothing  they are more numerous in terms of copies, not in terms of types, so the typical observer would not be one. So more copies = more measure. A similar argument applies to high measure terms in the quantum wavefunction. None of these considerations change if we assume that all math structures exist.
You assume that this would make no difference to our consciousness, but you don't actually present any argument for that. You just assert it in the post. So I would have to say that your argument  being nonexistent  has zero credibility. That doesn't mean that your conclusion must be false, just that your argument provides no evidence in favor of it. The measure argument shows that your conclusion is false  though with the caveat that Platonic computers might count as real enough to simulate us. So let's continue.
So you are abandoning the question of "Why does anything exist?" in favor of just accepting that it does, which is what you warned against doing in the first place.
If all math must exist in a strong Platonic sense, then obviously, it does. If it merely can so exist as far as we know, or OTOH might not, then we have no answer as to why anything exists. "Nothing exists" would seem to be the simplest thing that might have been true, if we had no evidence otherwise.
That said, "everything exists" is prima facie simpler than "something exists" so, given that at least something exists, Occam's Razor suggests that everything exists. Hence my interest in it.
There's a problem, though.
Good question. There is an argument based on Turing machines that the simplest programs (i.e. laws of physics) have more measure, because a random string is more likely to have a short segment at the beginning that works well and then a random section of 'don't care' bits, as opposed to needing a long string that all works as part of the program. So if we run all TM programs Platonically, simpler "laws of physics" have more measure, possibly resulting in universes like ours being typical. Great, right?
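The counting argument above can be made concrete with a toy sketch (my own illustration, not from the thread): among uniformly random bit strings of a fixed length, a program whose behavior is fixed by its first k bits is hit by every possible suffix, so shorter effective programs soak up exponentially more measure.

```python
from fractions import Fraction

def prefix_measure(effective_len, total_len):
    # Fraction of uniformly random `total_len`-bit strings whose first
    # `effective_len` bits match a given program; the remaining bits are
    # "don't care" padding, so every suffix counts toward the program.
    return Fraction(2 ** (total_len - effective_len), 2 ** total_len)

# A 10-bit "law of physics" vs. a 30-bit one, among random 40-bit strings:
simple = prefix_measure(10, 40)       # 2**-10
complicated = prefix_measure(30, 40)  # 2**-30
print(simple / complicated)  # the simpler law gets 2**20 times the measure
```

The lengths here are arbitrary stand-ins; the point is only that the measure ratio depends on the effective program lengths, not on the total string length.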
But there are problems with this. First, there are many possible TMs that could run such programs. We need to choose one, but such a choice contradicts the "inevitable" nature that Platonism is supposed to have. So why not just use all of them? There are infinitely many, so there is no unique measure to use for them. Any choice we can make of how to run them all is inevitably arbitrary, and thus we are back to "something" rather than "everything". We can have a very "big" something, since all programs do run, but it's still something: some nonzero information that pure math doesn't know anything about.
That's just TMs, but there's no reason other types of math structures such as continuous functions shouldn't exist, and we don't even have the equivalent of a TM to put a measure distribution on them.
I don't know for sure that there isn't some natural measure, but if there is I don't think we can know about it. Maybe I'm overlooking some selection effect that makes things work without arbitrariness.
Ok, so suppose we ignore the arbitrariness problem. The resulting 'everything' might not be Platonism, but at least it would be a high-level and fairly simple theory of physics. Does the TM measure in fact predict a universe like ours?
I don't know. Selecting a fairly simple TM, in practice the differences resulting from the choice of TM are negligible. But we still have the Boltzmann brain question. I don't know if a BB is typical in such an ensemble or not. At least that is a question that can be studied mathematically.
I'm not so sure, Mallah. Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you'd expect to be in universe A. I think your expectation depends entirely on your prior, and I don't see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations.
(I'm assuming the simulation of universe A includes every Everett branch, or else it includes only a single Everett branch and it's the same one in every instance.)
What if you run a simulation of universe A on a computer whose memory is mirrored a thousand times on backup hard disks? What if it only has one hard disk, but it writes each bit a thousand times, just to be safe? Does this count as a thousand copies of you?
As for wavefunction amplitudes, I don't see why that should have anything to do with the number of instantiations of a simulation.
That's right, Nisan (all else being equal, such as A and B having the same # of observers).
In the latter case, at least in a large enough universe (or quantum MWI, or the Everything), the prior probability of being a Boltzmann brain (not a product of Darwinian evolution) would be nearly 1, since most distinct brain types are. We are not BBs (perhaps not prior info, but certainly info we have), so we must reject that method.
No. That is not a case of independent implementations, so it just has the measure of a single A.
A similar argument applies: more amplitude means more measure, or we would probably be BBs. Also, in the Turing machine version of the Tegmarkian everything, that could only be explained by more copies.
For an argument that even in the regular MWI, more amplitude means more implementations (copies), as well as discussion of what exactly counts as an implementation of a computation, see my paper MCI of QM.
For continuous functions, we do. See "abstract stone duality".
Interesting. Do you know of a place on the net where I can see what other (independent, mathematically knowledgeable) people have to say about its implications? It's asking for a lot, maybe, but I think that would be the most efficient way for me to gain info about it, if there is one.
The choice of your Turing machine doesn't much matter, since all Turing machines can simulate each other. If you choose the "wrong" Turing machine, your measures will be off by at most a constant factor (the complexity penalty of an interpreter for the "right" machine language).
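For readers who want the standard statement behind this: if U and V are universal prefix machines and c_{UV} is the length of a U-program that interprets V, then the usual invariance theorem of algorithmic information theory gives

```latex
K_U(x) \le K_V(x) + c_{UV}
\quad\Longrightarrow\quad
m_U(x) \ge 2^{-c_{UV}} \, m_V(x),
```

where K is prefix Kolmogorov complexity and m(x) = 2^{-K(x)} is the corresponding measure, so the two measures agree up to the multiplicative constant 2^{c_{UV}}. (This is a textbook result, not something specific to this thread.)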
I'm super late to the party, but... The argument that convinces me that something like the MUH must be true is:
Suppose you simulate universes until you find one that evolves intelligent life. You could then theoretically synthesize an individual of this population in your own ("real") universe. This individual would have memories of their past life; they would act as if they really existed to experience it. So sentient, experience-having life could in theory exist in a mathematical structure. (I do not believe you created this individual in the process of running the simulation, for the same reason I believe that independent observers will calculate the same value for pi.)
I don't really grasp (read: am still confused about) how it could be possible to "have an experience" while existing in an abstract mathematical structure, but I find this argument really convincing. (I doubt I'm the only person to think of this, as it seems pretty obvious, but I've never seen anyone else use the argument, so there's some chance it's original.)
I don't follow that. Being able to agree on a value of pi (epistemological objectivism) does not imply the immaterial existence of pi (mathematical realism). It is clear enough that the MUH follows from mathematical realism (and a few other assumptions, such as the supervenience of conscious experience on suitable mathematical structures), but mathematical realism is a very nontrivial claim.
What would "immaterial existence" even mean?
I think my claim is that the above argument shows that whatever that might be, it's equivalent to epistemological objectivism.
Specifically, to believe that they're separate, given the scenario where you simulate universes until you find a conscious mind and then construct a replica in your own universe, you have to believe both of the following at the same time:
(1) Mind X didn't have real memories/experiences until you simulated it in the "real" world (i.e., yours), and (2) proof of mind X's running existed previously to you computing it (in the form of an execution history).
To me, accepting both points requires me to believe something like "Proofs that I don't know about aren't true", and I'll be happy if you can show me why that's not true.
I don't know exactly, but if "material existence" means something, so does "immaterial existence".
I think your argument assumes that. You say that the simulated person must have had a pre-existence (ontology) because mathematicians agree about pi (epistemology).
Specifically, to believe that they're separate, given the scenario where you simulate universes until you find a conscious mind and then construct a replica in your own universe, you have to believe both of the following at the same time:
You seem to be assuming that if a mind has "memories", then it must have preexisted, i.e. that the only way a mind can have "memories" at time T is by experiencing things at some previous time and recording them.
Rather than assuming that there are infinite numbers of real but immaterial people floating around somewhere, I prefer to assume that "memories" are just data that don't have any intrinsic connection to prior events. I.e., a memory proper is a record of an event, but neurons can be configured as if there were a trace of an event that never happened.
I don't see your point.
Hm. I don't think "material existence", if it's a thing, has a unique opposite.
I guess I'd define exists-in-the-real-world as equivalent to a theoretical predicate function that takes a model of a thing and a digitized copy of our real world, searches the world for patterns that are functionally isomorphic (given the physics of the world) to the model, and returns true iff it finds one or more.
This model of existence doesn't work if you don't supply the real world (or at least a world) as an argument. I'm interpreting "immaterial existence" as "exists, but not in a world", which seems like a logical impossibility to me. Of course, this is a function of how I've defined "exists", but I don't know of a better way to define it.
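The predicate described here can be written down as a (thoroughly hypothetical) sketch; `isomorphic` stands in for the entirely unspecified "functionally isomorphic given the physics of the world" test:

```python
def exists_in_world(model, world, isomorphic):
    # "Exists" is relative to a world: true iff some pattern in `world`
    # is functionally isomorphic to `model` under the supplied test.
    return any(isomorphic(model, pattern) for pattern in world)

# Trivial stand-in for the isomorphism test: a pattern "is" the model
# if the two are equal.
print(exists_in_world(3, [1, 2, 3], lambda a, b: a == b))  # True
```

The point of the comment is visible in the signature: without a `world` argument, the predicate is simply not defined.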
OK, that's a reasonable position. I'll adjust my argument. My claim now is:
(1) Given that you've found a bitstring in a simulation that represents a mind existing, taking actions, and feeling things,
(2) this bitstring is quite astonishingly long,
(3) most long bitstrings do not similarly describe minds by any reasonable mapping function,
it's therefore (4) vastly more probable that a mind actually ran to produce that bitstring, than it is that you found it randomly.
Basically, I'm treating the outputs of a mind as being something like a proof that there was a mind running, similarly to the idea that publishing a SHA-256 hash of some data is proof that you had that data at the time you published the hash.
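The hash-commitment analogy, as a minimal sketch using only the standard library (the data string is of course made up):

```python
import hashlib

data = b"full execution history of mind M"  # hypothetical record
commitment = hashlib.sha256(data).hexdigest()

# Publishing `commitment` commits you to `data`: later revealing a string
# that hashes to it is strong evidence you held `data` all along, since
# finding a different preimage is computationally infeasible.
print(len(commitment))  # 64 hex characters
```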
Failing to have a unique referent is not meaninglessness.
That is rather beside the point, since none of that is necessarily material.
Most people would interpret it as "exists, but is not made of matter". To cash that out without contradiction, you need a notion of existence that is agnostic about materiality. You have given one above. Tegmarkians can input a maximal mathematical structure as their world, and then say that something exists if it can be pattern-matched within the structure.
So far, none of this tells us what immateriality is. But then it isn't easy to say what matter is either. For immaterialists, anything physics says about matter boils down to structures, behaviour and laws that are all mathematical, and therefore within some regions of Tegmarkia.
There are more than two options. If you had evidence of a bitstring corresponding to billions of years of biological development involving trillions of organisms (a much more complex bitstring than a mind, but not a mind), it might well be most probable to assign the production of the mind to that.
I don't know if you realise it, but your argument was Paleyian.
Yeah. There are supposedly two mysterious substances. My claim is that I can't see a reason to claim they're separate, and this thought experiment is (possibly) a demonstration that they're in fact the same thing. Then we still have one mysterious substance, and I'm not claiming to make that any less mysterious with this argument.
Whoa, I think you understood something pretty different from what I was trying to say. I was definitely not claiming a deity of some sort must have been responsible! Let me repeat with unambiguous labels:
(1) Given that you've found a bitstring in a simulation that represents mind M existing, taking actions, and feeling things, ... it's therefore (4) vastly more probable that mind M actually performed computations in the course of producing that bitstring, than it is that you found it randomly.
Yes, of course mind M will have been the result of some evolutionary process in any universe I can imagine finding via simulation, but that doesn't make mind M less real. I suppose you could probably see this as a special case/expansion of the Anti-Zombie Principle: any system that produces the outputs of a mind (at least) contains that mind.
(If you meant something else by the Paleyian comment, you'll have to spell it out for me. I'm not the one that downvoted you and I appreciate the continued interaction.)
EDITED
No, the claim of Tegmarkian immaterialism is not that there is another substance other than matter.
You were previously saying that a log or record of mental-style activity was probably produced by a mind. This is an explanation of an argument that you said supports "something like the MUH". I still don't see how it does. I am also puzzled that you have been arguing against immaterialism throughout.
Thanks for editing. I'm still puzzled.
I also don't know what "Tegmarkian immaterialism" is, and I'm not arguing for or against it. I do not know what "immaterialism" is, and I'm also not arguing for or against that. (Meta: stop giving either side of my arguments names without first giving the names definitions!)
If anything, let's call my position "non-distinctionalism": I maintain that there are no other coherent models for the word "exist" than the one I mentioned earlier, and people who use the word "exist" without that model are just talking gibberish. There's no distinction between "material existence" and "immaterial existence", in the sense that the first clause of this sentence is meaningless noise. I can be disproved by being informed of another coherent model. I maintain that my thought experiment shows that it's difficult to hold any distinction between exists-in-reality and exists-in-the-mathematical-multiverse.
(If I were king of the world, "immaterial existence" would mean "exists in an inaccessible place of the mathematical universe", but for me to use the term that way would currently be idiosyncratic and just confuse everyone further.)
You have mentioned the Mathematical Universe hypothesis several times, and Tegmark's is a name very much associated with it, as WP states:
"In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the Ultimate Ensemble, is a speculative "theory of everything" (TOE) proposed by the theoretical physicist, Max Tegmark.[1]"
Your second sentence doesn't follow from your first. Someone can define "material existence" as existence in your sense, plus some additional constraint, such as the "world" in which the pattern is found being a material world.
Standard arguments against the MUH (etc.) are that it predicts too much weirdness. But that is an argument against the truth of the MUH, not for the coherence of materialism. However, you have not actually argued against the coherence of materialism. Your definition of existence doesn't require worlds to be material or immaterial, but it also doesn't require them to be neither.
I think we are having very significant communication difficulties. I am very puzzled that you think that I think that I'm arguing against MUH. I think something like MUH is likely true. I do not know what "Tegmarkian materialism" is and I'm not defending or attacking it. I also cannot make sense of some of your sentences, there seems to be some sort of editing problem.
I think you have been arguing against immaterialism, and that Tegmarkian MUH is a form of immaterialism.
I have edited my previous comment.
According to this theory, so far as I can tell, the events of Star Wars literally occurred. Is that correct?
A long time ago, in a galaxy far, far away...
Yes, but perhaps not very much, and definitely not because we wrote a story about them; that's pure coincidence. Any story with finite information content should be coincidentally similar to some actual real universe, but following Solomonoff induction there's a sense in which the more complex ones are not as real as, say, us.
I personally take the so-called Tegmark Hypotheses (levels 1 and 3 being the ramifications of our universe, 2 being a subset of 4) to be de facto true. If I am not mistaken, Tegmarkianism means that quantification over "individuals" is impossible, and thus consistent utilitarianism will most likely be about anthropic probabilities.
How was that fact proved?
It isn't. That would be de jure truth. My assumption is based on the fact that with my internal states of knowledge it is an obvious deduction (not inference, mind; this is mathematics). I also have some nice conversational arguments for it.
Also, the "proof" of the Tegmark 4 hypothesis tastes like it might run into some Gödelian complications.
ETA: Other things I would call de facto true are P ≠ NP, the multiverse/multimind/mangled-worlds interpretation of quantum mechanics, and (before the LHC experiment) the existence of the Higgs field.
So you are personally persuaded of it, and have unstated arguments for it.
Another remark which is much less interesting than production of the proof itself.
These are things I would call "punting", or personally committing to.
Is it your informed opinion that I should edit my comments in order to make my communications clear(er)?
Wrong. The only "truth" is the actual getting of "4" when someone applies those rules, and things like the fact that you think someone will get "4" when they apply the rules to "2 + 2".
The phrase "if A then B" generally just means something like "I tried putting A and not B into my idea of how the universe works, and it didn't make any sense".
I wonder, how many people are there who 'invented' this idea independently? I came up with it on my own when I was 15, and I know at least one other person who did. And now you and Tegmark, so this is not unique at all. Is that sort of thing widespread in rationality/math/INTJ/technophile/whatever circles?
Greg Egan's book "Permutation City" was published in 1994. Max Tegmark's paper "Is "the theory of everything'' merely the ultimate ensemble theory?" was published in 1998. It is ironic that the paper gets more formal credit in a lot of writing than the book, even though the book explores more implications and has priority :)
Based on my reading of Egan's other works, he has a minor sense of rivalry with Tegmark. One possible reading of Egan's Orthogonal Series is that it was an inspired way of pointing out that Tegmark's intuitions about the kinds of math that might contain minds are wrong because Tegmark's intuitions are grounded in thinking that "local physics" is especially privileged, perhaps from too little attention to ideas like "kolmogorov complexity" that are more native to artificial intelligence. The Orthogonal series is set in one of the kinds of physics that Tegmark claimed was probably uninhabited.
[retracted]
Why is that? Your argument is that if there are a large or infinite number of parallel universes being simulated (or just existing), I have to resist the urge to care about the feelings of the creatures being simulated in those parallel universes. My question is: why? You haven't developed this argument at all; you just claimed that it is that way. But why should that be so?
Why is running 1000 parallel simulations in which I am tortured the same as running just 1? Just because our universes are supposedly completely unrelated causality-wise? What a causist argument to make! Yes, I damn well care if you put me inside an atom-scanner, duplicate me, and put both of me in two cages where we are tortured in an identical fashion within this one universe. So why should I feel different about 1000 of me being tortured in causally unrelated universes? If I accepted your argument, then 1000 non-identical people suffering should be the same to me as just 1 person suffering as well. You say that if those 1000 people share the same universe as I do, I should care, because I am in the same "cause-and-effect bubble" as they are, but if I'm not in the same "cause-and-effect bubble" I shouldn't care because... why? Because I can't do anything anyway? This argument doesn't sound right on any level I can think of.
Also, I don't get the leap from 1 to 0. What exactly am I being simulated on once the "simulation" by G.O.D. is shut down? As I understand it, a simulation requires computation, which is only possible by packets of information interacting. If the simulation is shut down, then that should mean the interactions stop and the simulation we supposedly live in should simply freeze in a snapshot or disintegrate altogether. What am I missing here that I didn't understand?
First off, welcome to Less Wrong! You should take a look at some of the sequences, as this exact question has been addressed (see: Disputing Definitions). To be brief, the consensus around here is that the "Tree Falls in the Forest" question is a wrong question, and should be dissolved.
I recommend looking through the Quantum Physics Sequence, or at least the "Quantum Physics Revealed As Non-Mysterious" and/or "And the Winner is... Many-Worlds!" subsequences. Aside from the general matters of map versus territory, our specific knowledge of quantum physics indicates that the observer/collapse effects you may have heard about are not part of what the universe is really doing.
As for selfperception... for all the problems with Descartes's philosophy of mind, "cogito ergo sum" is still a pretty good standard, at least for setting a bare minimum baseline for a definition of existence. (That is, if your definition of existence doesn't allow you to be pretty confident that you yourself exist, it can't be a very good definition.) Further, based on the assumptions of materialism and reductionism (see Zombies? Zombies! and GAZP), I concluded that if a being (whether a normal human you're interacting with, an AI, a person in a simulated universe, etc.) says that they feel conscious, real, etc., and you are confident that they have some mechanism for actually acquiring such a belief that is at least as good as your own (e.g. their program has to actually be mindlike, not just printf("I experience qualia! How mysterious!"); exit(0)), then you should take their word for it.
Be careful about how you interpret news stories about quantum physics: quantum physics is a very confusing subject, and is often distorted severely by the reporting process.
Affect and effect have tremendously different meanings in this context; please don't mix them up.
The Simple Truth is another good place to start when considering these mapterritory questions.
In my opinion, the exploding head objection is fatal to this theory. I'm not morphing into a pheasant right now.
I agreed intuitively, but then as I proceeded to flesh out the logic, I (remembered) that inconsistency of that kind is impossible anyway. (According to the theory, only what is possible exists.)
Suppose that a simulation is run for a certain number of flops under a certain set of rules, and then the rules are abruptly changed. Then this means the simulation wasn't closed: there was something external to the simulation changing the rules. Thus, something simulated within a proper subset of the universe can have the property of being self-inconsistent, but it's impossible for the universe itself to not be running a single set of self-consistent rules.
A common worldview among scientists is that as we evolve a deeper and deeper understanding of the universe, we increasingly find that self-consistency narrows what is possible: the universe had to have these particular rules. So maybe we are residing in the only possible universe.
As a thought experiment, we could instead discover that exactly n universes were possible. Then I guess I agree with the intuition of this post: I would think it is unlikely that someone or something 'chose' this particular universe, I would guess that all exist and I happen to be in this one.
There are quite a lot of coherent simple mathematical entities, so it's really unlikely that the number of those that allow conscious sub-entities is limited in any way.
I'm currently in yellow/red hat mode for the idea that the theory described above would result in a single universe.
(I thought about moving this comment to a more appropriate thread, but it was too much work. Unless someone picks up on the idea, I'll let it drop for a while and imagine there are multiverses in order to stay on the same page. Anyway, what follows below is my argument for why there would be a UNIverse, a continuation of a theme begun here.)
Given a set of observations, there can be a number of coherent mathematical entities that fit those observations. Also, some of those coherent mathematical entities may be independent and/or mutually exclusive, giving rise to our notion of many possible but not-simultaneous universes. However, the production of a universe from a void is a completely different context, in which 'false' and 'independent' may not even exist.
I don't know how it would go, obviously, but suppose it begins something like this: just a void, so you have the empty set. Then you have the sets built from the empty set, so your mathematical structure is a little more complex. Then you have more complex relationships between sets, etc.
You just have an evolution of complexity, each step deriving true relationships from lower-level relationships. Eventually, 'stronger'/'more abundant' relationships yield the particles we know of; they and the relationships between them define our universe and its rules.
There's just one universe. A universe that is the deduction from nothing.
Contrariwise, independent coherent mathematical entities and multiverses would be interesting too.
(We all agree this is metaphysics, right? Which is to say..interesting thought exercises only?)
I'm not sure how to interpret this thought experiment, but it seems that you could still produce indistinguishable-from-independent-fact-in-the-eyes-of-in-world-observers-like things.
Like, if we take the empty set, then the set that contains the empty set, then form a set that contains all earlier sets, and this way construct the natural numbers, and then produce a function that takes one set and gives its "successor", as in, the next natural number: F(1) = 2, F(11) = 12 and so forth.
This could be taken as a "natural, simple world", where there are no surprises, like F(15) giving 1337 as a result, or anything. That F(15) = 1337 would count as an independent fact. But obviously a rule like that could be formulated, and I'm really unsure about what's there to prevent this "add one, except if 15, give 1337" rule from happening, given that the original successor function did 'happen'.
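Both rules in this comment are equally well-defined mathematical objects, which is exactly the worry; a minimal sketch (function names mine):

```python
def successor(n):
    # The "natural, simple world": every number maps to the next one.
    return n + 1

def successor_with_exception(n):
    # The "add one, except if 15, give 1337" rule: just as definable.
    return 1337 if n == 15 else n + 1

print(successor(15))                 # 16
print(successor_with_exception(15))  # 1337
```

Nothing in the formal machinery privileges the first function over the second; any preference for it has to come from somewhere else, such as a simplicity prior.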
But then again, maybe I just misunderstood, I got kind of an impression that you're thinking Universe as some sort of limit as length of deduction chains approach infinity, but this idea seems counterintuitive to me.
No external meddling needed, the simulation just has an extra rule that says "cousin_it morphs into a pheasant on turn N".
The simulation wouldn't know what 'cousin_it' is or what a pheasant is. These are higher level things that evolve from the lower level rules. (They don't exist independently, and don't really exist except as categories in our mind.) All of the rules in the universe have to be in terms of the lowest level things because reductively, these are the things that are real. So the universe wouldn't be able to say "cousin_it morphs into a pheasant on turn N", it would only be able to say that "things like cousin_it morph into pheasants under these conditions", in which case we would discover that the rule happens reliably and predictably, not arbitrarily.
Another way of expressing this is that a universe which has our laws of physics is a shorter encoding than a universe that has our laws of physics plus a detailed exception for "cousin_it turns into a pheasant". This makes it more likely, but the set of all universes with rules much more complicated than ours which would still allow conscious observers makes it unlikely that we made it into such a simple universe. This has been covered on LessWrong and Overcoming Bias before.
I disagree that you're responding to my argument. I'm not making an argument about whether the universe is simple or not: I'm making the argument that if the universe has an encoding, "cousin_it turns into a pheasant", it's not going to be an exception. If the universe has that encoding, we would find that cousin_it turns into a pheasant, and upon further study, would find this was predicted all along by the lower level rules. Simply because nothing exists beyond the lower level rules. We can't expect inconsistencies at higher levels because the higher levels are just derivative of the lower ones.
What do you mean? There's no law of nature saying laws of nature must be low-level in all possible worlds; that's just an observation about our world. I can write a simulator that runs the low-level rules as you'd expect, but is also preprogrammed to search for cousin_it in the simulation at a certain moment, turn him into a pheasant, then resume business as usual. If all simulated worlds "exist", this one "exists" too.
On the one hand, it's a matter of the definition of 'low-level': all laws of nature must be low-level or derived from a low-level law, because if you had a law that wasn't derived from a lower-level law, that would make it low-level.
Yes, I agree and I'm generally interested in this case. But as I explained, this is only possible in a subset of a universe. The whole universe could not be such a simulation, because there would be nothing 'outside it' to swoop down and make the arbitrary change.
The hypothesis says that all universes that can be simulated by computer programs exist. It doesn't restrict those computer programs by saying they must use "only local laws", or "can't swoop down", or whatever. What does this even mean? Moving the universe one step forward in time according to the Schroedinger equation qualifies as "swooping down" just as much as turning me into a pheasant, they're both things that the program just does.
I assume you're thinking of some other hypothesis, like "all universes that exist must start from a simple core and proceed logically from there", but unfortunately no one has a "logicalness predicate" that would say whether a given program simulates a "logical" universe without "swooping down". In fact, looking at a program you may not even tell if it's "simulating" any universe at all, as opposed to moving some bytes around in weird patterns.
Perhaps it is a matter of relying on different analogies for our intuitions. I was thinking of each possible universe as being identified with a single self-consistent mathematical structure. In which case, I expect the universe to be organized and coherent, because any set of mutually consistent initial facts would generate a universe with structure rather than one with haphazard and inexplicable events.
I hadn't thought of all possible computer programs each mapping to a universe ... what is the truth value of a random string of instructions? But since I don't know much about computer science, I don't expect that to work as an intuition pump for me.
Instead I was thinking along the lines of there being a universe, say, for each exotic algebra there might be, and all the facts that are derived from it. An "algebra-generated universe" wouldn't say something random or arbitrary; instead everything about the universe would be derivable from a few facts. (Note you couldn't have too many facts, or they'd self-contradict.)
Higher-level behavior can be explicitly coded into lower-level rules.
The only way "2+2=4" can exist is if there are first two existent objects and then a mind to come up with the construct describing their addition. "2+2=4" doesn't exist on its own.
My own view for why there is "something" rather than "nothing" is:
A. "Something" has always been here. B. "Something" hasn't always been here.
Choice A is possible but doesn't offer much explanatory power, so it won't be pursued here.
Going with choice B, if "something" hasn't always been here, then "nothing" must have been here before it. By "nothing", I mean complete nonexistence, which would be the lack of all volume, matter, energy, ideas/concepts, etc. However, in "nothing", there is no mechanism to change this "nothing" into "something". So, if "something" is here now, the only possible way is if "nothing" and "something" are one and the same thing. I think this is logically required if we go with choice B.
If it's logically required that "nothing" and "something" are the same thing, the next step is to try to figure out how this can be, since they seem different. My view on how this can be is that they only seem different because we're looking at them from two different perspectives. In thinking about "nothingness", we use our mind, which exists. Next to something that exists, "nothing" just looks like nothing. But in true "nothing", there would be no minds there, and only then would "nothing" be completely self-defining (it says exactly what is there) and therefore existent.
An idea that's helpful in thinking about this topic is that the mind's conception of something ("nothing" in this case) and the thing itself are different. Thanks for listening!
Hi! You seem to be asserting antirealism and trying to arrive at a correct ontology using an Aristotelian application of deductive logic. Would you like some help with that?
Max Tegmark, the physicist who proposed the mathematical multiverse theory, was aware of the antirealist position. However, there's good evidence that minds are made out of math, rather than the reverse. It's a fairly mature debate, and it pays to be aware of the strongest arguments on both sides.
This awareness also applies to the universe's beginnings, or lack thereof. Historically, deductive logic has had some problems locating true beliefs.
Also, welcome to Less Wrong! Feel free to post in the introduction thread, and start working your way through the Sequences so you understand where other people here are coming from.
Wow! Are you a clippy too? Want to reconcile knowledge and mutually satisfice values?
Kharfa,
o You seem to be asserting antirealism and trying to arrive at a correct ontology using an Aristotelian application of deductive logic. Would you like some help with that?
I'm not denying the reality of anything that you can show me. Please show me where "2+2=4" is or where it exists. Arguing that things like this exist is like arguing that Santa Claus exists: it's possible, but we can't prove or disprove it, and you can't show him to me. There's no point in discussing it. And, by the way, I don't need any help with that. Patronizing attitudes, especially when not backed up by sound reasoning, are of no interest to me.
o there's good evidence that minds are made out of math, instead of the contrary position.
Is there? Evidence from simulations running on material computers doesn't show you can make minds out of immaterial math.
Brain emulation: how much do I need to understand and know before it makes any kind of sense?
A good argument in this vein, when telling people about this idea, is to talk about fractals. The famous Mandelbrot set has literally uncountably infinite complexity, yet it arises out of very simple rules; ask them to characterize the difference between the abstract set of complex numbers and the rules defining it (barring model theory).
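For readers who want to see just how simple those rules are, here is a minimal Python sketch (the function name and the iteration/escape parameters are my choices; the underlying definition is standard): a point c belongs to the Mandelbrot set iff iterating z -> z*z + c from zero never escapes to infinity.

```python
# Membership test for the Mandelbrot set: iterate z -> z^2 + c from 0.
# If |z| ever exceeds 2, the orbit provably diverges and c is outside.
def in_mandelbrot(c, max_iter=200, escape_radius=2.0):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > escape_radius:
            return False  # escaped: c is outside the set
    return True  # bounded for max_iter steps: treat as inside

print(in_mandelbrot(0 + 0j))   # True  (orbit stays at 0)
print(in_mandelbrot(1 + 0j))   # False (orbit 0, 1, 2, 5, 26, ... diverges)
```

The entire "rule" is one line inside the loop; everything else about the set, including its infinitely intricate boundary, is a consequence.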
I have also (disappointingly/validatingly) thought of this and then read Tegmark. (It's even more disappointing/validating than that, though, since as well as Tegmark, you appear to have invented Syntacticism. You even have all my arguments, like subverting the simulation hypothesis and talking about 'closure'.) However, I have one more thing to add, which may answer the problem of regularity. That one thing is what I call the 'causality manifold':

Obviously, by simulating a universe we have no causal effect upon it (if we are assuming the mathematical universe hypothesis); but it has a causal effect upon us, because it defines the results of our computation. I explore this theme somewhat in The Apparent Reality of Physics, a footnote to which mentions the problem of consistency when you have a closed loop of universes, and its putative solvability by loop unfolding / closure.

Considering the ensemble of mathematical structures with the natural topology, we see that locally it's either a graph or a manifold (almost everywhere), and it has a flow defined by the causal relations (with the flow in the opposite direction to simulation), which we can consider as a flow of subjective probability (with some equilibrium state). Of course it contains both regular and irregular universes (henceforth RUs and IUs), because adding a delta function to a differential equation simply gives you a different DE. (Well, that's 'morally' why it's true; it's more complicated in practice because not all mathematical structures are DEs, but any continuous mathematical structure can be continuously corrupted.) IUs typically cannot simulate RUs, because any simulation is going to keep hitting the delta functions and being corrupted; RUs, on the other hand, can simulate both other RUs and IUs (a cosmic ray can turn your RU simulation into an IU simulation).
Consequently, subjective probability flows from {IUs} to {RUs} much more strongly than the other way, so the equilibrium has most subjective probability on RUs. Thus, anthropics and cake for everyone :)
I should add that I haven't yet been able to mathematically formalise the above argument, because I haven't yet worked out the correct definitions/characterisation of the 'causality manifold' (which is, incidentally, not a manifold), and it's possible that the small probability of an IU simulating a RU screws things up, and that we should (perhaps) expect to find ourselves in a universe with some, say, Poisson-distributed degree of irregularity. Or something like that. But at least it does allow for a mathematical universe in which anthropic experience can actually be given a probability distribution.
I'm late to the thread, but I just had to leave a few thoughts.
Firstly, a well-written post, and I've thought about something similar too, as I suspect many have. I look forward to reading the Tegmark paper, which I have not yet done.
At least a couple of commenters opposed the jump from one to n to zero simulations, but I find it very intuitive. One can think about it like this: suppose there is a non-iterative, non-recursive formula for calculating the state of a deterministic universe at time t. This is analogous to having a digit-extraction formula for pi. The simulation can then be stopped, rewound or fast-forwarded at will, without any change to the actual results of the simulation. Someone inside the simulation would still exist as if the simulation had been played out in order.
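The analogy can be made concrete with a toy deterministic "universe" whose update rule happens to admit a closed form (the specific affine rule and its constants here are arbitrary choices of mine, picked only because the algebra works out): the closed form reads out the state at any time t directly, just as a digit-extraction formula reads out a digit of pi without computing its predecessors.

```python
# Toy deterministic "universe": state evolves by s_{t+1} = (a*s_t + b) mod m.
def step(s, a=5, b=3, m=101):
    return (a * s + b) % m

def simulate(s0, t, a=5, b=3, m=101):
    """Play the universe forward step by step, in order."""
    s = s0
    for _ in range(t):
        s = step(s, a, b, m)
    return s

def closed_form(s0, t, a=5, b=3, m=101):
    """Random access: s_t = a^t*s_0 + b*(a^t - 1)/(a - 1) (mod m),
    computed without iterating through intermediate states."""
    at = pow(a, t, m)
    geom = (at - 1) * pow(a - 1, -1, m) % m  # modular inverse needs Python 3.8+
    return (at * s0 + b * geom) % m

print(simulate(7, 1000) == closed_form(7, 1000))  # True
```

Both functions agree at every t, yet `closed_form` never "runs" the universe; the inhabitants of such a universe could not tell which one was used, or whether either was.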
The idea of universes as purely mathematical objects is also rather natural. If the state of the universe (including the laws governing it) can be encoded in numbers, it can be equivalently encoded in a single set. Then all possible universes are trivially contained in the universal set.
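One concrete trick for the "encoded in numbers" step is the classic Cantor pairing function (a standard construction; this is just an illustrative sketch): it reversibly packs a pair of naturals into a single natural, and iterating it packs whole tuples, so a finite universe-state really can live inside one number.

```python
# Cantor pairing: a bijection between pairs of naturals and naturals.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # Invert by solving the triangular-number equation for the diagonal w.
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

print(unpair(pair(12, 34)) == (12, 34))  # True
```

Since the encoding is a bijection, nothing about the state is lost; "a universe as a single set (or number)" is mathematically unproblematic for finite states.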
Where this whole idea begins to break down (or at least becomes nontrivial) is with nondeterministic universes. The result of such a simulation by definition depends on the machine/RNG running it, so such universes do not similarly exist as a sequence of states that could be navigated at will. Ours seems to be such a universe... Then again, even in a nondeterministic universe each state would exist as a mathematical object, so maybe I just haven't thought about this enough.
I don't think this argument works, because any interesting universe would have physics that allow the implementation of arbitrary Turing machines, and there is no non-iterative, non-recursive formula for calculating the state of an arbitrary Turing machine at time t.
Doesn't this imply that no finite universe is interesting?
It's an interesting idea, with some intuitive appeal. It also reminds me of a science fiction novel I read as a kid, the title of which currently escapes me, so the concept feels a bit mundane to me, in a way. The complexity argument is problematic, though; I guess one could assume some sort of per-universe Kolmogorov weighting of subjective experience, but that seems dubious without any other justification.
Suppose we had a G.O.D. that takes N bits of input and uses the input as a starting point for running a simulation. If the input contains more than one simulation program, then it runs all of them.
Now suppose we had 2^N of these machines, each with a different input. The number of instantiations of any given simulation program will be higher the shorter the program is (not just because a shorter bit-string is by itself more likely, but also because it can fit multiple times on one machine). Finally, if we are willing to let the number of machines shrink to zero, the same probability distribution will still hold. So a shorter program (i.e. a more regular universe) is "more likely" than a longer/irregular one.
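The counting step can be brute-forced for small N (this sketch simplifies the G.O.D. setup by only counting programs at the start of the tape, ignoring the multiple-fit bonus): among all 2^N inputs, the fraction beginning with a given k-bit program is exactly 2^-k, so shorter programs are instantiated on exponentially more machines.

```python
from itertools import product

# Fraction of all n-bit input tapes that begin with a given bit-string
# "program". Expected: 2^-len(program), independent of n.
def fraction_with_prefix(program, n):
    tapes = [''.join(bits) for bits in product('01', repeat=n)]
    hits = sum(1 for t in tapes if t.startswith(program))
    return hits / len(tapes)

print(fraction_with_prefix('01', 10))    # 0.25   (= 2^-2)
print(fraction_with_prefix('0110', 10))  # 0.0625 (= 2^-4)
```

This is essentially the same 2^-length weighting that appears in Solomonoff-style priors over programs.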
(All very speculative of course.)
I think your theory predicts a single selfconsistent universe  the one we're in. (What do you think?)
Instead, first consider only all internally consistent mathematical structures. That is, consider the set of all mathematically true statements. Our intuition is that they 'exist' in some way independently. These exist, and nothing else. (Everything 'else' is higher-order and, via reduction, can be expressed in terms of these.)
Then all the fundamental particles (they exist, since we observe them) are somehow mapped from the initial set of mathematical truths. I would reword this as: "the set of internally consistent mathematical structures maps to a set of things perceived as fundamental particles". This is hand-waving, of course, but it's supported by arguments I've heard that fundamental particles don't need any special properties of material existence; they could simply be interacting mathematical relationships and still result in the universe that we experience. The fundamental particles do interact (it turns out dynamically and causally), resulting in the universe we experience.
So our universe could be the single selfconsistent structure that results from all true mathematical relationships.
Another way of putting it is that suppose there are multiple mathematical structures possible, resulting in the multiverse originally postulated. Don't these mathematical structures have to be selfconsistent? So why not expand the definition of universe to include them? I believe particle physics already postulates extra dimensions, etc., that could be causally independent from the subset of the universe we are causally entangled with.
Are you saying that our universe's laws of physics may turn out to be the only selfconsistent set of laws defining a universelike structure, or that we should expand the definition of "the universe" to encompass all such structures? It looks like you may be saying both, but I'm not sure they're the same thing.
I meant the former: that it would be interesting to consider the possibility that there might be only a single set of self-consistent laws that self-generate from a void, and that these result in our (single) universe.
Alternatively, there might be a few or infinitely many independent sets of selfconsistent laws resulting in a multiverse of entirely independent universes.
My point is that we need to consider the question, because 'how many independent sets of truths self-generate from a void?' and 'how many models satisfy a set of limited observations?' are completely different questions.
When we remark that something different could have happened (I could have chosen a different major in college), we mean this in a limited logical sense. If everything is entangled with everything else, and especially if the universe is deterministic, things possible in this logical sense might not really be possible. That is, that they're not actually the case in any universe.
Didn't Gödel show that the concept of a mathematical theory of everything is inconsistent?
He showed that a structure of axioms will not be able to prove or disprove all possible theorems. You could interpret what you commented that way, but it has nothing to do with the post.
Not quite. Gödel's theorems are a bit more subtle. The first incompleteness theorem says that any consistent axiomatic system which can model the natural numbers is incomplete: it must contain statements which can be neither proved nor disproved in that system. (Even that is a bit imprecise. One needs to be careful about what one means by an axiomatic system; for our purposes, this can be phrased as a set of axioms and rules of inference which can be listed and applied using an effective method.)
The second, closely related theorem is that any axiomatic system obeying the conditions of the first theorem can prove a statement expressing its own consistency iff the system is inconsistent. This also needs a fair bit of unpacking. The rough idea is that any axiomatic system that can model the natural numbers and is powerful enough to talk about an analog of itself cannot prove that analog's consistency.
And anyway, if you can find a statement that can be neither proved nor disproved, then you can create two larger axiomatic systems, one in which it is defined as true and one in which it is defined as false.
There's nothing more mysterious about this than the other axioms; math holds predictive power to the degree the axiomatic system we choose to use has axioms that match our universe.
...no? How do Gödel's incompleteness theorems relate to this at all?
And the territory is not the campaign
"Instrumentalism is an interpretation within philosophy of science holding that a successful scientific theory reveals nothing, true or false, about unobservable aspects of nature.[1] By instrumentalism, then, a scientific theory is a tool whereby humans predict observations in a particular domain of nature by organizing laws, which state regularities; theories do not unveil hidden aspects of nature to explain the laws.[2] Initially a novel perspective introduced by Pierre Duhem in 1906, instrumentalism is largely the prevailing practice of physicists today.[2]

Rejecting the ambitions of scientific realism to attain metaphysical truth about nature,[2] instrumentalism is antirealist, although its mere lack of commitment to a scientific theory's realism can be termed nonrealism. Instrumentalism bypasses such debates as whether the particles in particle physics are actually discrete entities existing independently, whether they are excitation modes of regions of a field, or whether they are something else.[3][4][5] The instrumentalist view maintains that theoretical terms need not refer realistically to nature's realities, but simply must be useful for predicting the phenomena, the observed outcomes.[3]

Instrumentalism is associated with the problem of the underdetermination of theory by data: since any dataset can host more than one explanation, the success of a prediction does not (by affirming the consequent, a deductive fallacy) logically reveal the truth of the theory from which the prediction was derived. Thomas Kuhn's 1962 thesis greatly undermined the conception that science progressively unveils a truer and truer view of nature. Yet even before then, the logical positivists, who launched philosophy of science as a devoted discipline in academia, generally embraced instrumentalism, whereby a scientific theory's theoretical terms were taken as metaphorical or elliptical for observations, but otherwise were accorded no particular meaning. Not to rule science, but to enlighten and structure their own philosophical discourse, the logical positivists presumed and sought to identify a strict gap between theory and observation, whereby a theory's theoretical terms would correspond to observational terms, and posited unobservables would correspond to direct observations. Instrumentalism in scientific practice often does not even make a distinction between unobservable and observable entities.[3] Rejecting all variants of positivism for their focus on sensations rather than realism, Karl Popper asserted a commitment to scientific realism, tempered only by the necessary uncertainty of his own falsificationism. Popper repeatedly rejects and criticizes instrumentalism in Conjectures and Refutations, perhaps regarding it as too mechanical." -- Wikipedia