The simple world hypothesis argues that Conway’s Game of Life (U_c) cannot simulate our universe (U_0).
Conway's Game of Life is Turing complete, so unless our universe is incomputable, it can be simulated by Conway's Game of Life.
Is there good evidence about our universe being or not being computable?
For example, if positions of particles have infinitely many decimal places, then the universe is incomputable, even if its laws are relatively simple. If the positions of particles are computationally finite, that probably requires an explanation of why physical processes seem the same when you e.g. rotate them by an arbitrary angle.
I think the universe could be computable even if positions have infinitely many decimal places, as long as the sequence is computable. But you are right that it would be incomputable if the sequence is basically random, and there is no proof that things are not like this.
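A small Python sketch of this point (the `decimal_digits` generator is my own illustrative construction, not anything from the thread): a number can have infinitely many decimal places and still be computable, because a finite program produces any prefix of the expansion in finite time.

```python
from itertools import islice

def decimal_digits(numerator, denominator):
    """Yield the decimal digits of numerator/denominator (0 < n/d < 1) forever.

    One step of long division per digit: the infinite expansion is
    computable because any prefix is produced in finite time.
    """
    remainder = numerator
    while True:
        remainder *= 10
        yield remainder // denominator
        remainder %= denominator

# 1/7 = 0.142857142857... : infinitely many decimal places, yet computable.
digits = list(islice(decimal_digits(1, 7), 12))
```

A genuinely random sequence of digits, by contrast, admits no such finite generating program.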
If you have a universe of a certain complexity, then another universe of equal complexity could only be fully simulated by being that universe. To simulate a universe, you have to be sufficiently more complex and have sufficiently more expendable energy.
Re: axiom 1
Any possible universe is logically consistent and strictly adheres to well-defined laws.
Logically consistent against what set of logical axioms? There are a bunch of logics out there, and one man's inconsistency is another man's axiom.
Axiom 2 implies that the set of laws is uncomputable, ergo has no finite Kolmogorov complexity, which contradicts Axiom 3.
The weak hypothesis is false.
And so on.
The principles of Aristotelian logic.
I don't understand how Axiom 2 implies uncomputability; please explain.
The other universes have their own laws, which work together with $L_{\alpha}$ to create them. Axiom 2 just implies there are an infinite number of universes, and every universe exists. Axiom 2 does not imply that $L_{\alpha}$ contains all possible laws.
The principles of Aristotelian logic.
Which is not a consistent set of axioms, but let's just pretend you said "classical propositional logic". Then why this and not something else, say intuitionistic, relevant, modal, etc.?
I don't understand how Axiom 2 implies uncomputability; please explain.
Well, as per the First Incompleteness Theorem, there's no recursive set of axioms complete for arithmetic. So if the universe realizes all arithmetic truths, then at least its set of laws is non-recursive, that is, it has no finite Kolmogorov complexity.
...
unless you meant that the Multiverse realizes all possibilities.
On the other hand, the maximum complexity realizable by a simulation is a function not only of its laws but also of its available space. As entirelyuseless already pointed out, U_c can simulate any computable universe, given enough space.
I think also that it would be great to show whether this theory applies to the actual physical universe in which we exist, or whether it just describes a type of mathematical object, like sets or groups.
If it applied to the physical universe, we can't ignore the question of its origin, as that will correlate directly with its observed complexity. If the mathematical universe hypothesis (MUH) is true, and all mathematical objects exist, then more complex objects should dominate. But this contradicts our experience and Occam's razor. There is a need to bring order to all possible universes, and here the logical universe hypothesis (LUH) helps. It claims that existing mathematical objects follow from simple to more complex in logical order, like 1, 2, 3. In that case the simplest mathematical objects appear first, and we get support for your Axiom 3.
I explored different ways of the universe origin and its correlation with observation here: http://lesswrong.com/lw/nw7/the_map_of_ideas_how_the_universe_appeared_from/
I have been thinking about a similar post on Occam's razor, and got stuck on the question: "what is the median complexity of the true hypothesis across the whole field of hypotheses?" I hope my question is clear without a longer explanation of what I mean. Anyway, I will try to explain it a little.
Occam's razor doesn't say that the simplest hypothesis is true. It just says that the probability of truth diminishes as the complexity of the hypothesis grows. It is clear that most of the time the true hypothesis will lie somewhere after p(N_1) + p(N_2) + … + p(N_n) = 0.5, where n is the number of hypotheses ranked by their complexity, and p(N) is the probability that a given hypothesis is true according to the Occam's razor principle.
I also have a feeling that in EY's writing it is always assumed that the Occam's razor weighting diminishes very quickly, so that the simplest hypothesis has an overwhelming probability of being true. However, I also have a feeling that in real life medium-complexity hypotheses dominate, somewhere around the 100th from the beginning. That results in a much more complex and unpredictable world.
It looks like you have been thinking in a similar direction: do you have any ideas about the median complexity of true hypotheses?
"Simple" in Occam's razor means "simplest that explains the facts at all". So you delete a bunch of hypotheses that are too simple to be explanatorily adequate, and then you delete the ones that are unnecessarily complex. That gives you some sort of medium complexity.
My question was more about the median length of an algorithm predicted by Solomonoff induction. Update: according to https://wiki.lesswrong.com/wiki/Solomonoff_induction the weight of a hypothesis diminishes very quickly, like 2^(-n), where n is the program length. In that case, the median will fall somewhere between the first and second hypotheses.
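This update can be checked with a toy calculation. A minimal sketch, assuming a normalized 2^(-n) prior over program lengths n = 1, 2, 3, …; the `heavy_tail` prior is a hypothetical slower-decaying alternative I introduce purely for contrast:

```python
from fractions import Fraction

def median_index(prior, n_max=10000):
    """Smallest n whose cumulative prior probability reaches 1/2."""
    total = Fraction(0)
    for n in range(1, n_max + 1):
        total += prior(n)
        if total >= Fraction(1, 2):
            return n
    return None

# Solomonoff-style weight 2^-n over program lengths n = 1, 2, 3, ...
# (already normalized: the weights sum to 1).
def geometric(n):
    return Fraction(1, 2 ** n)

# A hypothetical slower-decaying prior, also summing to 1, for contrast.
def heavy_tail(n):
    return Fraction(2, (n + 1) * (n + 2))

# Under 2^-n the cumulative weight reaches 1/2 at the very first
# hypothesis; under the heavier tail the median moves further out.
```

Exact rational arithmetic (`Fraction`) is used so that the cumulative sums hit 1/2 exactly rather than drifting with floating-point error.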
I can say for sure that I do not have an "informed opinion" about physics, and I also know that I do not have such knowledge about the Universe. But after reading your blog, I agree with the above. All these hypotheses also have an opposite side, which does not prove the simple world hypothesis. There are many resources for studying this. This, at least, does not require physical evidence; only prior study and logical skills are required.
Part of a Series in the Making: "If I Were God".
Introduction
This hypothesis posits that the current universe is the simplest universe possible which can do all that our universe can do. Here, simplicity refers to the laws which make up the universe. It may be apt to mention the Multiverse Axioms at this juncture:
Axiom 1 (axiom of consistency):
Any possible universe is logically consistent and strictly adheres to well-defined laws.
Axiom 2 (axiom of inclusivity):
Axiom 3 (axiom of simplicity):
The underlying laws governing the Multiverse are as simple as possible (while permitting 1 and 2).
The simple world hypothesis posits that our universe has the fewest laws which can enable the same degree of functionality that it currently possesses. I’ll explain the concept of “degree of functionality”. Take two universes: U_i and U_j with degrees of functionality d_i and d_j. Then the below three statements are true:
1. d_i > d_j implies that U_i can simulate U_j.
2. d_j < d_i implies that U_j cannot simulate U_i.
3. d_i = d_j implies that U_i can simulate U_j, and U_j can in turn simulate U_i.
Let’s consider a universe like Conway’s Game of Life. It is far simpler than our universe and possesses only four laws. The simple world hypothesis argues that Conway’s Game of Life (U_c) cannot simulate our universe (U_0): the degree of functionality of Conway’s Game of Life (d_c) < the degree of functionality of our universe (d_0). An advance prediction of the simple world hypothesis regarding U_c is the below: no configuration of U_c will ever give rise to human-level intelligence.
The above implicitly assumes that Conway’s Game of Life is simpler than our universe—is that really true?
Simplicity
It is only prudent that I clarify what it is I mean by simplicity. For any two Universes U_i and U_j, let their simplicity be denoted S_i and S_j respectively. The simplicity of a universe is determined by the Kolmogorov complexity of the set of laws which make up that universe: the higher the complexity, the lower the simplicity.
For U_c, those laws are:
1. Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
2. Any live cell with two or three live neighbours lives on to the next generation.
3. Any live cell with more than three live neighbours dies, as if by overpopulation.
4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
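A minimal Python sketch of these four laws (the set-of-live-cells representation and the `step` function are my own illustrative choices, not part of the original post):

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life by one generation.

    `live` is the set of (x, y) coordinates of live cells. The four rules
    above collapse to: a cell is live in the next generation iff it has
    exactly three live neighbours, or it is live and has exactly two.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three live cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
after_one = step(blinker)    # the vertical row {(1, 0), (1, 1), (1, 2)}
after_two = step(after_one)  # back to the original horizontal row
```

That the whole rule set fits in a dozen lines is itself a rough indication of how low the Kolmogorov complexity of U_c's laws is.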
At this point, I find it prudent to mention the topic of Kolmogorov complexity. The Kolmogorov complexity of an object is the length (in bits) of the shortest computer program (in a predetermined language) that produces that object as output. Let’s pick any (sensible) Turing-complete language T_x. We’re concerned with the binary length of the shortest T_x program that produces the laws that describe U_i. When discussing the simplicity of a universe, we refrain from mentioning its initial state; the degrees of functionality are qualitative and not quantitative. For example, a universe U_1 which contains only the Milky Way will have d_1 = d_0. As such, we take only the Kolmogorov complexity of the laws describing the Universe, and not the Kolmogorov complexity of the universe itself. For any U_i and U_j, let the Kolmogorov complexity of the laws describing U_i and U_j be K_i and K_j respectively.
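Since Kolmogorov complexity is uncomputable, any concrete calculation can only bound it from above. A minimal sketch using an off-the-shelf compressor (zlib): the compressed bytes plus a constant-size decompressor form one particular program that reproduces the text, hence an upper bound on its Kolmogorov complexity. The function name and example strings are my own.

```python
import zlib

def complexity_upper_bound(description):
    """Compressed length (in bytes) of a description.

    Kolmogorov complexity itself is uncomputable, but any fixed
    compressor yields an upper bound (up to the constant-size
    decompressor): the compressed bytes plus a decompressor form one
    particular program that reproduces the description.
    """
    return len(zlib.compress(description.encode("utf-8"), 9))

repetitive = "any live cell " * 100   # 1400 chars, highly regular
random_ish = "q7f!kz0@mwpL"           # short but patternless

# The repetitive string compresses far below its raw length; the
# patternless one gains nothing from compression.
```

A law-like, regular description (such as the four rules of U_c) compresses well, which is the intuition behind measuring a universe's simplicity by the complexity of its laws rather than of its contents.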
S_i = K_i^-1
S_j = K_j^-1
Interlude
Let the set of universes which conform to the multiverse axioms be denoted M.
Weak Hypothesis
According to the simple world hypothesis, no U_z with K_z < K_0 has d_z >= d_0.
To be mathematically precise:
There does not exist U_z in M such that K_z < K_0 and d_z >= d_0.
Strong Hypothesis
The strong hypothesis generalises the weak form of the simple world hypothesis to all universes.
The degree of functionality of a universe is directly proportional to its Kolmogorov complexity.
To be mathematically precise:
For all U_y in M, there does not exist U_z in M such that K_z < K_y and d_z >= d_y.
Rules That Govern Universes.
When I refer to the “rules that govern a universe”, or “rules upon which a universe is constructed”, I refer to a set of axioms. The principles of formal logic are part of the Multiverse axioms, and no possible Universe can violate them. As such, the principles of formal logic are a priori part of any possible Universe U_z in M.
The rules that govern the Universe are only that set of axioms upon which the Universe is constructed, in tandem with the principles of formal logic. For example, in our Universe the laws that govern it would not include Newtonian mechanics (as it is merely a special case of Einstein’s underlying theories of relativity). I suspect (with P > 0.67) that the law(s) that govern our Universe would be the Theory of Everything (TOE) and/or Grand Unified Theory (GUT). All other laws can be derived from them in combination with the underlying laws of formal logic.
Degree of Functionality
The degree of functionality of a Universe U_z (d_z) refers to the maximum complexity (qualitatively not quantitatively; e.g. a human brain is more complicated than a supercluster absent of life) that can potentially emerge from that universe from any potential initial state. Taking U_c to illustrate my point, the maximum complexity that any valid configuration of U_c can produce is d_c. I suspect that human level intelligence (qualitatively and not quantitatively; i.e. artificial super intelligence is included in this category. I refer merely to the potential to conceive the thought “dubito, ergo cogito, ergo sum—res cogitans”) is d_0.
Simulating a Universe.
When I mention a Universe, I do not refer specifically to that Universe itself—and all it contains—but to the set of laws (axioms) upon which that Universe is constructed. Any Universe that has the same base laws as ours—or mathematically/logically equivalent base laws—is isomorphic to our universe. I shall define a set of Universes A_i. A_i is the set of universes that possess the same set of base laws L_i or a mathematical/logical equivalent. The set of laws that govern our Universe is L_0. In my example above, U_1 is a member of A_0.
Initially, I ignored the initial state/conditions of the Universe, stating them irrelevant with respect to describing the universe. For any universe U_i, let the initial state of the universe be F_{i0}. Let the set of all possible initial states (for all universes) be B: B = {F_{ij} : i indexes a universe, j indexes a possible initial state of U_i}. Let the current/final (whichever one we are concerned with) state of any U_i be G_{ij}.
I shall now explain what it means for a Universe U_i to simulate another Universe U_j.
In concise English: U_i can simulate U_j if some configuration of U_i can produce the state of U_j.
When I refer to producing the state of another Universe, I refer to expressing all the information that the other universe does. The rules for transformation and extraction of information conform to the third axiom.
Expressive Power.
Earlier, I introduced the concept of A_i for any given set of laws L_i that governs a universe. When we mention U_i, we are in fact talking of U_i in conjunction with some initial state. If we ignore the initial state, we are left with only L_i. I mentioned earlier that the degree of functionality of a Universe is the maximum complexity that can emerge from some valid configuration of that Universe. The expressive power of a universe is the expressive power of its L_i.
The set of L_j that can be concisely and coherently represented in L_i is the expressive power of L_i. I shall once again rely on Conway’s Game of Life (U_c). The four laws governing U_c can be represented in L_0, and as such L_0 has an expressive power E_0 >= E_c, the expressive power of U_c. As such, E_c is a subset of E_0. If a Universe U_i can simulate another Universe U_j, then it follows that whatever U_j can simulate, U_i can too. Thus, if U_i can simulate U_j, then E_j is a subset of E_i.
To conceive of a Universe U_i, we need merely conceive L_i. If we can conceive L_i and concisely define it, then it follows that U_0 can simulate U_i. I argue that this is so because if we could conceive and concisely define L_i, then U_i can be simulated as a computer program. Any simulation that a subset/member of a universe can perform is a simulation that the universe itself can perform.
An important argument derives from the above: any universe that we can conceive of and concisely define, U_0 can simulate.
The above is true, because if we could conceive it, we could define its laws, and if we could define its laws, we could simulate it with a computer program.
Concluding from the above, the below is self-evident:
The maximum complexity a universe can lead to is itself. Simulating a Universe involves simulating all that that universe simulates. Let the maximum complexity that U_i can lead to be denoted C_i. Simulating U_i involves simulating C_i, and as such the complexity of simulating U_i >= the complexity of simulating C_i. Therefore, the greatest complexity U_i can lead to is a simulation of U_i.
However, can a universe simulate itself? I accepted that as true on principle, but is it really? For a universe to simulate itself, it must also simulate the simulation, which would in turn simulate the simulation, beginning an infinite regress. If any universe has finite information content, then it seems it cannot simulate itself.
As such, the universe itself serves as a strict upper boundary for the complexity that a universe can lead to.
If a Universe attempted to simulate itself, how many simulations would there be? Would the cardinality of simulations be countably infinite, or uncountably infinite? The answer to that question determines how plausible a universe simulating itself would be.
What about U_i being able to simulate U_j, and U_j in turn being able to simulate U_i? This implies U_i can simulate U_j simulating U_i simulating U_j … beginning another infinite regress. How many simulations are needed? Do the universes involved need to be able to hold aleph_k different simulations? Do the laws constructing those universes permit that?
Criticism of the Simple world hypothesis
While sounding nice in theory, the simple world hypothesis—in both of its forms—offers no insight into the origin of the Universe. One may ask: “Why simplicity?” “What would cause the simple world hypothesis to be true?” “What is necessary for all universes to behave as the simple world hypothesis predicts?” Indeed, might the simple world hypothesis not violate Occam’s razor by positing that all universes conform to it?
I suggest that the simple world hypothesis does not describe the origin of the universe—that was never its aim to begin with. It merely seeks to describe how universes are, and not how they came to be.
Trivia
I conceived the simple world hypothesis while thinking up a blog post titled “Why Occam’s Razor?”. I had intended to make an argument along the lines of: “Even if the simple world hypothesis is false, Occam’s razor is still valuable because…”. Following that train of thought, I realised that I would have to define the simple world hypothesis.
Conclusion
I do not endorse this hypothesis. I believe in something called “informed opinion”, and due to my abject lack of knowledge regarding physics I do not consider myself as having an informed opinion on the Universe. Indeed, the simple world hypothesis was conceived to aid an argument I was thinking up to support Occam’s razor. I admit that if I were the one to design the base laws used in addition to the laws constructing each possible universe, then the simple world hypothesis would be true. However, I am not the one (if there is anyone) who designed those base laws, and as such I do not support the simple world hypothesis. Indeed, there is not even enough evidence to locate the simple world hypothesis within the space of possible hypotheses, and as such, as long as I profess rationality, I cannot accept it yet.
Indeed, as Aristotle said:
I shall describe the “multiverse axiom” and “base laws” in more detail in a subsequent blog post.