See here https://conwaylife.com/forums/viewtopic.php?f=7&t=1234&sid=90a05fcce0f1573af805ab90e7aebdf1 and here https://discord.com/channels/357922255553953794/370570978188591105/834767056883941406 for discussion of this topic by Life hobbyists who have a good knowledge of what's possible and not in Life.
What we agree on is that the large random region will quickly settle down into a field of 'ash': small stable or oscillating patterns arranged at random. We wouldn't expect any competitor AIs to form in this region, since an area of 10^120 is only likely to contain arbitrary patterns of sizes up to log_2(10^120) ≈ 400 cells, which almost certainly isn't enough area to do anything smart.
So the question is whether our AI will be able to cut into this ash and clear it up, leaving a blank canvas for it to create the target pattern. Nobody knows a way to do this, but it's also not known to be impossible.
Recently I tried an experiment where I slowly fired gliders at a field of ash, along twenty adjacent lanes. My hope had been that each collision of a glider with the ash would on average destroy more ash than it created, thus carving a diagonal path of width 20 into the ash. Instead I found...
This is very much a heuristic, but good enough in this case.
Suppose we want to know how many times we expect to see a pattern with n cells in a random field of area A. Ignoring edge effects, there are A different offsets at which the pattern could appear. Each of these has a 1/2^n chance of being the pattern. So we expect at least one copy of the pattern if n < log_2(A).
In this case the area is (10^60)^2, so we expect patterns of size up to 398.631. In other words, we expect the ash to contain any pattern you can fit in a 20 by 20 box.
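The arithmetic above is easy to check directly; a minimal sketch in Python (the function name is mine, not standard terminology):

```python
import math

def max_expected_pattern_size(area):
    """Largest pattern size n for which a random field of `area` cells is
    expected to contain at least one copy of a given n-cell pattern.
    Heuristic: expected copies ~ area / 2**n >= 1, i.e. n <= log2(area)."""
    return math.log2(area)

# A 10^60-by-10^60 grid has area 10^120.
print(max_expected_pattern_size((10**60) ** 2))  # ~398.63
```

So any pattern of up to 398 cells, which comfortably fits in a 20 by 20 box, should appear somewhere in the ash.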
There has been a really significant amount of progress on this problem in the last year, since this article was posted. The latest experiments can be found here, from October 2021:
https://conwaylife.com/forums/viewtopic.php?p=136948#p136948
The technology for clearing random ash out of a region of space isn't entirely proven yet, but it's looking a lot more likely than it was a year ago that a workable "space-cleaning" mechanism could exist in Conway's Life.
As previous comments have pointed out, it certainly wouldn't be absolutely foolproof. But it might be surprisingly reliable at clearing out large volumes of settled random ash -- which could very well enable a 99+% success rate for a Very Very Slow Huge-Smiley-Face Constructor.
It seems like our physics has a few fundamental characteristics that change the flavor of the question:
I think this is an interesting question, but for poking around it would probably be nicer to work with simple rules that share (at least) these features of our physics.
Have you heard of Von Neumann's universal constructor? This seems relevant. He invented the concept of "cellular automaton" and engineered a programmable replicator in one. His original formulation was a lot more complex than Conway's Life though, with 29 different states per cell. Edgar Codd later demonstrated an 8-state automaton powerful enough to build a universal constructor with.
Have you seen “Growing Neural Cellular Automata?” It seems like the authors there are trying to do something pretty similar to what you have in mind here.
I would say it all depends on whether there is a wall gadget which protects everything on one side from anything on the other side. (And don't forget the corner gadget.)
If so, cover the edges of the controlled portion in it, except for a "gate" gadget which is supposed to be a wall except openable and closable. (This is relatively easier since a width of 100 ought to be enough, and since we can stack 10000 of these in case one is broken through - rarely should chaos be able to reach through a 100x10000 rectangle.)
Wait 10^40 steps for the chaos to lose entropy. The structures that remain should be highly compressible in a CS sense, and made of a small number of natural gadget types. Send out ships that cover everything in gliders. This should resurrect the chaos temporarily, but decrease entropy further in the long run. Repeat 10^10 times, waiting 10^40 steps in between.
The rest should be a simple matter of carefully distinguishing the remaining natural gadgets with ship sensors to dismantle each. Programming a smiley face deployer that starts from an empty slate is a trivial matter.
If walls are constructible, there's no need for gates, and also one could allow a margin for error in the sensory ships: One could advance the walls after claiming some area, in case a rare encounter summons another era of chaos.
All this is less AGI than ordinary game AI - a bundle of programmed responses.
The truly arbitrary version seems provably impossible. For example, what if you're trying to make a smiley face, but some other part of the world contains an agent just like you, except they're trying to make a frowny face - you obviously can't both succeed. Instead you need some special environment with low entropy, just like humans do in real life.
I recall once seeing someone say with 99.9% probability that the sun would still rise 100 million years from now, citing information about the life-cycle of stars like our sun. Someone else pointed out that this was clearly wrong, that by default that sun would be taken apart for fuel on that time scale, by us or some AI, and that this was a lesson in people's predictions about the future being highly inaccurate.
But also, "the thing that means there won't be a sun sometime soon" is one of the things I'm pointing to when talking about "general intelligence". This post reminded me of that.
While I appreciate the analogy between our real universe and simpler physics-like mathematical models like the Game of Life, assuming intelligence doesn't arise elsewhere in your configuration, this control problem does not seem substantially different from, or more AI-like than, other engineering problems. After all, there are plenty of other problems that involve leveraging a narrow form of control over a predictable physical system to achieve a more refined control, e.g. building a rocket that hits a specific target. The structure that arises from a randomly initialized pattern in Life should be homogeneous in a statistical sense and so highly predictable. I expect almost all of it to stabilize into debris of stable and periodic patterns. It's not clear whether it's possible to manipulate or clear the debris in controlled ways, but if it is possible, then a single strategy will work for the entire grid. It may take a great deal of intelligence to come up with such a strategy, but once such a strategy is found it can be hard-coded into the initial Life pattern, without any need for an "inner optimizer". The easiest-to-design solution may involve computer-like patterns, with the pattern keep...
This post very cleverly uses Conway's Game of Life as an intuition pump for reasoning about agency in general. I found it to be both compelling and a natural extension of the other work on LW relating to agency & optimization. The post also spurred some object-level engineering work in Life, trying to find a pattern that clears away ash. It also spurred people in the comments to think more deeply about the implications of the reversibility of the laws of physics. It's also reasonably self-contained, making it a good candidate for inclusion in the Review books.
My immediate impulse is to say that it ought to be possible to create the smiley face, and that it wouldn't be that hard for a good Life hacker to devise it.
I'd imagine it to go something like this. Starting from a Turing machine or simpler, you could program it to place arbitrary 'pixels': either by finding a glider-like construct which terminates at a specific distance into a still life, so the constructor can crawl along an x/y axis, shooting off the terminating glider to create stable pixels in a pre-programmed pattern. (If that doesn't exist, then one could use two constructors crawling along the x/y axes, shooting off gliders intended to collide, with the delays properly pre-programmed.) The constructor then terminates in a stable still life; this guarantees perpetual stability of the finished smiley face. If one wants to specify a more dynamic environment for realism, then the constructor can also 'wall off' the face using still blocks. Once that's done, nothing from the outside can possibly affect it, and it's internally stable, so the pattern is then eternal.
This sounds like you're treating the area as empty space, whereas the OP specifies that it's filled randomly outside the area where our AI starts.
Curated.
I think this post strikes a really cool balance between discussing some foundational questions about the notion of agency and its importance, as well as posing a concrete puzzle that caused some interesting comments.
For me, Life is a domain that makes it natural to have reductionist intuitions. Compared to say neural networks, I find there are fewer biological metaphors or higher-level abstractions where you might sneak in mysterious answers that purport to solve the deeper questions. I'll consider this post next time I want to introduce some...
Related to the sensitivity of instrumental convergence, i.e. the question of whether we live in a universe of strong or weak instrumental convergence. In a strong instrumental convergence universe, most possible optimizers wind up in a relatively small space of configurations regardless of starting conditions, while in a weak one they may diverge arbitrarily in design space. This can be thought of as one way of crisping up concepts around orthogonality, e.g. in some universes orthogonality would be locally true but globally false, or vice versa, or true (or false) at both scales.
nitpick : the appendix says possible configurations of the whole grid, while it should say possible configurations. (Similarly for what it says about the number of possible configurations in the region that can be specified.)
It feels like this post pulls a sleight of hand. You suggest that it's hard to solve the control problem because of the randomness of the starting conditions. But this is exactly the reason why it's also difficult to construct an AI with a stable implementation. If you can do the latter, then you can probably also create a much simpler system which creates the smiley face.
Similarly, in the real world, there's a lot of randomness which makes it hard to carry out tasks. But there are a huge number of strategies for achieving things in the world which don't r...
Well yes, I do think that trees and bacteria exhibit this phenomenon of starting out small and growing in impact. The scope of their impact is limited in our universe by the spatial separation between planets, and by the presence of even more powerful world-reshapers in their vicinity, such as humans. But on this view of "which entities are reshaping the whole cosmos around here?", I don't think there is a fundamental difference in kind between trees, bacteria, humans, and hypothetical future AIs. I do think there is a fundamental difference in kind between those entities and rocks, armchairs, microwave ovens, the Opportunity Mars rover, and current Waymo autonomous cars, since these objects just don't have this property of starting out small and eventually reshaping the matter and energy in large regions.
(Surely it's not that it's difficult to build an AI inside Life because of the randomness of the starting conditions -- it's difficult to build an AI inside Life because writing full-AGI software is a difficult design problem, right?)
I don't think there is a fundamental difference in kind between trees, bacteria, humans, and hypothetical future AIs
There's at least one important difference: some of these are intelligent, and some of these aren't.
It does seem plausible that the category boundary you're describing is an interesting one. But when you indicate in your comment below that you see the "AI hypothesis" and the "life hypothesis" as very similar, then that mainly seems to indicate that you're using a highly nonstandard definition of AI, which I expect will lead to confusion.
I think a problem you would have is that the speed at which information can propagate in the game is comparable to the speed of, say, a glider. So an AI that is computing within Life would not be able to sense and react to a glider quickly enough to build a control structure in front of it.
This is an interesting question, but I think your hypothesis is wrong.
Any pattern of physics that eventually exerts control over a region much larger than its initial configuration does so by means of perception, cognition, and action that are recognizably AI-like.
In order to not include things like an exploding supernova as "controlling a region much larger than its initial configuration" we would want to require that such patterns be capable of arranging matter and energy into an arbitrary but low-complexity shape, such as a giant smiley face in Life.
If ...
Planned summary for the Alignment Newsletter:
...Conway’s Game of Life (GoL) is a simple cellular automaton which is Turing-complete. As a result, it should be possible to build an “artificial intelligence” system in GoL. One way that we could phrase this is: if we imagine a GoL board with 10^30 rows and 10^30 columns, and we are able to set the initial state of the top-left 10^20 by 10^20 square, can we set that initial state appropriately such that after a suitable amount of time, the full board evolves to a desired state (perhaps a giant smiley face), fo
Here is a simple disproof of the control question.
Among the possible ways the rest of the grid could be filled, one is that it is empty except for the diagonally opposite corner, where there is a duplicate of the top-left corner, rotated 180 degrees. Since this makes the whole grid symmetrical under that rotation, every future state must also be symmetrical. The smiley does not have that symmetry, therefore it cannot be achieved.
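The symmetry-preservation claim is easy to verify empirically at small scale; a minimal sketch in Python (set-of-live-cells representation; the helper names are mine):

```python
import random
from collections import Counter

def step(live):
    """One Game of Life step; `live` is a set of (x, y) live-cell coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def rot180(live, cx, cy):
    """Rotate a pattern 180 degrees about the point (cx, cy)."""
    return {(2 * cx - x, 2 * cy - y) for (x, y) in live}

# A random 'corner' pattern plus its rotated duplicate in the opposite corner.
random.seed(0)
corner = {(random.randrange(20), random.randrange(20)) for _ in range(120)}
grid = corner | rot180(corner, 30, 30)  # symmetric about (30, 30)

# Every future state keeps the 180-degree symmetry, because the rules
# are invariant under that rotation.
for _ in range(50):
    grid = step(grid)
    assert grid == rot180(grid, 30, 30)
```

Since no rotationally symmetric state can equal a non-symmetric smiley face, the target is unreachable under this particular initialization.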
Also, what Charlie Steiner said.
My understanding was that we just want to succeed with high probability. The vast majority of configurations will not contain enemy AIs.
My strong expectation is that in Life, there is no configuration that you can put in the starting area that is robust against randomly initializing the larger area.
There are very likely other cellular automata which do support arbitrary computation, but which are much less fragile versus evolution of randomly initialized spaces nearby.
an AI requires apparatus to perceive and act within the world, as well as the ability to move and grow if we want it to eventually exert influence over the entire grid. Most constructions within Life are extremely sensitive to perturbations.
This may be more tractable in Lenia, because Lenia's smoothness means that cell-level changes take many simulation steps and can occur over multiple spatial frequencies. That might also make it harder to study, but I think more people should know about SmoothLife variants. SmoothLife gliders have:
This is a post about the mystery of agency. It sets up a thought experiment in which we consider a completely deterministic environment that operates according to very simple rules, and ask what it would be for an agentic entity to exist within that.
People in the game of life community actually spent some time investigating the empirical questions that were raised in this post. Dave Greene notes:
...The technology for clearing random ash out of a region of space isn't entirely proven yet, but it's looking a lot more likely than it was a year ago, that a work
I think the GoL is not the best example for this sort of question. See this post by Scott Aaronson discussing the notion of "physical universality", which seems relevant here.
Also, like other commenters pointed out, I don't think the object you get here is necessarily AI. That's because the "laws of physics" and the distribution of initial conditions are assumed to be simple and known. An AI would be something that can accomplish an objective of this sort while also having to learn the rules of the automaton or detect patterns in the initial conditions. Fo...
I really enjoyed this read, thanks. I'm an enjoyer of Life from afar so there may be a trivial answer to this question.
Is it possible to reverse engineer a state in Life? E.g., for time state X, can you easily determine a possible time state X-1? I know that multiple X-1 time states can lead to the same X time state, but is it possible to generate one? Can you reverse engineer any possible X-100 time state for a given time state X? I ask because I wonder if you could generate an X-(10^60) time state on a 10^30 by 10^30 grid where time state X is a large sm...
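For what it's worth, finding a one-step predecessor is computable but expensive; in practice such searches are done with SAT solvers, and some states ("Gardens of Eden") have no predecessor at all. For a tiny window you can even brute-force it; a sketch in Python, assuming everything outside the search window is 'off':

```python
from collections import Counter
from itertools import product

def step(live):
    """One Game of Life step; `live` is a set of (x, y) live-cell coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return frozenset(c for c, n in counts.items() if n == 3 or (n == 2 and c in live))

# Target: the 2x2 "block" still life.
block = frozenset({(2, 2), (2, 3), (3, 2), (3, 3)})

# Enumerate all 2^16 patterns inside a 4x4 window; keep those whose
# successor is exactly the block (cells outside the window are 'off').
window = list(product(range(1, 5), range(1, 5)))
predecessors = []
for bits in product((0, 1), repeat=16):
    candidate = frozenset(c for c, bit in zip(window, bits) if bit)
    if step(candidate) == block:
        predecessors.append(candidate)

assert block in predecessors                                # a block is its own parent
assert frozenset({(2, 2), (2, 3), (3, 2)}) in predecessors  # so is an L-tromino
```

Going back 100 steps would just iterate this, but the search space explodes with window size, which is why anything beyond toy examples needs a SAT-style encoding.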
I think this is possible and it doesn’t require AI. It only requires a certain kind of "infectious Turing machine" described below.
Following Gwern’s comment, let’s consider first the easier problem of writing a program on a small portion of a Turing machine’s tape which draws a large smiley face on the rest of the tape. This is easy even with the *worst-case* initialization of the rest of the tape. Whereas our problem is not solvable in the worst case, as pointed out by Richard_Kennaway.
What makes our problem harder is errors caused by the r...
Random Notes:
Firstly, why is the rest of the starting state random? In a universe where info can't be destroyed, like this one, random=max entropy. AI is only possible in this universe because the starting state is low entropy.
Secondly, reaching an arbitrary state can be impossible for reasons like conservation of mass-energy, momentum, and charge. Any state close to an arbitrary state might be unreachable due to these conservation laws, e.g. a state containing lots of negative electric charges and no positive charges is unreachable in our universe.
Well, q...
Seems like there's a difference between viability of AI, and ability of AI to shape a randomized environment. To have AI, you just need stable circuits, but to have an AI that can shape, you need a physics that allows observation and manipulation... It's remarkable that googling "thermodynamics of the game of life" turns up zero results.
Financial status: This is independent research. I welcome financial support to make further posts like this possible.
Epistemic status: I have been thinking about these ideas for years but still have not clarified them to my satisfaction.
Outline
This post asks whether it is possible, in Conway’s Game of Life, to arrange for a certain game state to arise after a certain number of steps given control only of a small region of the initial game state.
This question is then connected to questions of agency and AI, since one way to answer this question in the positive is by constructing an AI within Conway’s Game of Life.
I argue that the permissibility or impermissibility of AI is a deep property of our physics.
I propose the AI hypothesis, which is that any pattern that solves the control question does so, essentially, by being an AI.
Introduction
In this post I am going to discuss a cellular automaton known as Conway’s Game of Life:
In Conway’s Game of Life, which I will now refer to as just "Life", there is a two-dimensional grid of cells where each cell is either on or off. Over time, the cells switch between on and off according to a simple set of rules:
A cell that is "on" and has fewer than two neighbors that are "on" switches to "off" at the next time step
A cell that is "on" and has greater than three neighbors that are "on" switches to "off" at the next time step
A cell that is "off" and has exactly three neighbors that are "on" switches to "on" at the next time step
Otherwise, the cell doesn’t change
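The four rules above can be written down in a few lines; a minimal sketch in Python, representing the grid as the set of coordinates of "on" cells:

```python
from collections import Counter

def step(live):
    """Advance one time step; `live` is the set of (x, y) 'on' cells."""
    # Count the 'on' neighbors of every cell adjacent to at least one 'on' cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is 'on' next step iff it has exactly three 'on' neighbors,
    # or it is currently 'on' and has exactly two.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: the classic pattern that translates itself by (1, 1) every 4 steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

This set-based formulation works on an unbounded grid, which is convenient for the thought experiments below, though serious Life tools use far more sophisticated algorithms.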
It turns out that these simple rules are rich enough to permit patterns that perform arbitrary computation. It is possible to build logic gates and combine them together into a computer that can simulate any Turing machine, all by setting up a particular elaborate pattern of "on" and "off" cells that evolve over time according to the simple rules above. Take a look at this awesome video of a Universal Turing Machine operating within Life.
The control question
Suppose that we are working with an instance of Life with a very large grid, say 10^30 rows by 10^30 columns. Now suppose that I give you control of the initial on/off configuration of a region of size 10^20 by 10^20 in the top-left corner of this grid, and set you the goal of configuring things in that region so that after, say, 10^60 time steps the state of the whole grid will resemble, as closely as possible, a giant smiley face.
The cells outside the top-left corner will be initialized at random, and you do not get to see what their initial configuration is when you decide on the initial configuration for the top-left corner.
The control question is: Can this goal be accomplished?
To repeat that: we have a large grid of cells that will evolve over time according to the laws of Life. We are given power to control the initial on/off configuration of the cells in a square region that is a tiny fraction of the whole grid. The initial on/off configuration of the remaining cells will be chosen randomly. Our goal is to pick an initial configuration for the controllable region in such a way that, after a large number of steps, the on/off configuration of the whole grid resembles a smiley face.
The control question is: Can we use this small initial region to set up a pattern that will eventually determine the configuration of the whole system, to any reasonable degree of accuracy?
[Updated 5/13 following feedback in the comments] Now there are actually some ways that we could get trivial negative answers to this question, so we need to refine things a bit to make sure that our phrasing points squarely at the spirit of the control question. Richard Kennaway points out that for any pattern that attempts to solve the control question, we could consider the possibility that the randomly initialized region contains the same pattern rotated 180 degrees in the diagonally opposite corner, and is otherwise empty. Since the initial state is symmetric, all future states will be symmetric, which rules out creating a non-rotationally-symmetric smiley face. More generally, as Charlie Steiner points out, what happens if there are patterns in the randomly initialized region that are trying to control the eventual configuration of the whole universe just as we are? To deal with this, we might amend the control question to require a pattern that "works" for at least 99% of configurations of the randomly initialized area, since most configurations of that area will not be adversarial. See further discussion in the brief appendix below.
Connection to agency
On the surface of it, I think that constructing a pattern within Life that solves the control question looks very difficult. Try playing with a Life simulator set to max speed to get a feel for how remarkably intricate the evolution of even simple initial states can be. And when an evolving pattern comes into contact with even a small amount of random noise — say a single stray cell set to "on" — the evolution of the pattern changes shape quickly and dramatically. So designing a pattern that unfolds across the entire universe and produces a goal state no matter what random noise is encountered seems very challenging. It’s remarkable, then, that the following strategy actually seems like a plausible solution:
One way that we might answer the control question is by building an AI. That is, we might find a 10^20 by 10^20 array of on/off values that evolves under the laws of Life in a way that collects information using sensors, forms hypotheses about the world, and takes actions in service of a goal. The goal we would give to our AI would be arranging for the configuration of the grid to resemble a smiley face after 10^60 game steps.
What does it mean to build an AI in the region whose initial state is under our control? Well it turns out that it’s possible to assemble little patterns in Life that act like logic gates, and out of those patterns one can build whole computers. For example, here is what one construction of an AND gate looks like:
And here is a zoomed-out view of a computer within Life that adds integers together:
It has been proven that computers within Life can compute anything that can be computed under our own laws of physics[1], so perhaps it is possible to construct an AI within Life. Building an AI within Life is much more involved than building a computer, not only because we don’t yet know how to construct AGI software, but also because an AI requires apparatus to perceive and act within the world, as well as the ability to move and grow if we want it to eventually exert influence over the entire grid. Most constructions within Life are extremely sensitive to perturbations. The computer construction shown above, for example, will stop working if almost any "on" cell is flipped to "off" at any time during its evolution. In order to solve the control question, we would need to build a machine that is not only able to perceive and react to the random noise in the non-user-controlled region, but is also robust to glider impacts from that region.
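This sensitivity to perturbation is easy to see concretely even at tiny scales; a minimal sketch in Python, comparing the R-pentomino (a famously long-lived 5-cell pattern) with and without a single stray "on" cell:

```python
from collections import Counter

def step(live):
    """One Game of Life step; `live` is a set of (x, y) 'on' cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The R-pentomino, and the same pattern with one extra 'on' cell nearby.
r_pentomino = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
perturbed = r_pentomino | {(3, 1)}

# The evolutions diverge after a single step: the stray cell
# causes an extra birth at (2, 2).
a, b = step(r_pentomino), step(perturbed)
assert a != b and (2, 2) in b - a
```

A single cell's difference compounds step after step, which is why robustly operating amid random ash is so much harder than operating in empty space.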
Moreover, building large machines that move around or grow over time is highly non-trivial in Life since movement requires a machine that can reproduce itself in different spatial positions over time. If we want such a machine to also perceive, think, and act then these activities would need to be taking place simultaneously with self-reproducing movement.
So it’s not clear that a positive answer to the control question can be given in terms of an AI construction, but neither is it clear that such an answer cannot be given. The real point of the control question is to highlight the way that AI can be seen as not just a particularly powerful conglomeration of parts but as a demonstration of the permissibility of patterns that start out small but eventually determine the large-scale configuration of the whole universe. The reason to construct such thought experiments in Life rather than in our native physics is that the physics of Life is very simple and we are not as used to seeing resource-collecting, action-taking entities in Life as we are in our native physics, so the fundamental significance of these patterns is not as easy to overlook in Life as it is in our native physics.
Implications
If it is possible to build an AI inside Life, and if the answer to the control question is thus positive, then we have discovered a remarkable fact about the basic dynamics of Life. Specifically, we have learned that there are certain patterns within Life that can determine the fate of the entire grid, even when those patterns start out confined to a small spatial region. In the setup described above, the region that we get to control is much less than a trillionth of the area of the whole grid. There are a lot of ways that the remaining grid could be initialized, but the information in these cells seems destined to have little impact on the eventual configuration of the grid compared to the information within at least some parts of the user-controlled region[2].
We are used to thinking about AIs as entities that might start out physically small and grow over time in the scope of their influence. It seems natural to us that such entities are permitted by the laws of physics, because we see that humans are permitted by the laws of physics, and humans have the same general capacity to grow in influence over time. But it seems to me that the permissibility of such entities is actually a deep property of the governing dynamics of any world that permits their construction. The permissibility (or not) of AI is a deep property of physics.
Most patterns that we might construct inside Life do not have this tendency to expand and determine the fate of the whole grid. A glider gun does not have this property. A solitary logic gate does not have this property. And most patterns that we might construct in the real world do not have this property either. A chair does not have the tendency to reshape the whole of the cosmos in its image. It is just a chair. But it seems there might be patterns that do have the tendency to reshape the whole of the cosmos over time. We can call these patterns "AIs" or "agents" or "optimizers", or describe them as "intelligent" or "goal-directed" but these are all just frames for understanding the nature of these profound patterns that exert influence over the future.
It is very important that we study these patterns, because if such patterns do turn out to be permitted by the laws of physics and we do construct one then it might determine the long-run configuration of the whole of our region of the cosmos. Compared to the importance of understanding these patterns, it is relatively unimportant to understand agency for its own sake or intelligence for its own sake or optimization for its own sake. Instead we should remember that these are frames for understanding these patterns that exert influence over the future.
But even more important than this, we should remember that when we study AI, we are studying a profound and basic property of physics. It is not like constructing a toaster oven. A toaster oven is an unwieldy amalgamation of parts that do things. If we construct a powerful AI then we will be touching a profound and basic property of physics, analogous to the way fission reactors touch a profound and basic property of nuclear physics, namely the permissibility of nuclear chain reactions. A nuclear reactor is itself an unwieldy amalgamation of parts, but in order to understand it and engineer it correctly, the most important thing to understand is not the details of the bits and pieces out of which it is constructed but the basic property of physics that it touches. It is the same situation with AI. We should focus on the nature of these profound patterns themselves, not on the bits and pieces out of which AI might be constructed.
The AI hypothesis
The above thought experiment suggests the following hypothesis:
Any pattern of physics that eventually exerts control over a region much larger than its initial configuration does so by means of perception, cognition, and action that are recognizably AI-like.
In order to not include things like an exploding supernova as "controlling a region much larger than its initial configuration" we would want to require that such patterns be capable of arranging matter and energy into an arbitrary but low-complexity shape, such as a giant smiley face in Life.
Influence as a definition of AI
If the AI hypothesis is true then we might choose to define AI as a pattern within physics that starts out small but whose initial configuration significantly influences the eventual shape of a much larger region. This would provide an alternative to intelligence as a definition of AI. The problem with intelligence as a definition of AI is that it is typically measured as a function of discrete observations received by some agent, and the actions produced in response. But an unfolding pattern within Life need not interact with the world through any such well-defined input/output channels, and constructions in our native physics will not in general do so either. It seems that AI requires some form of intelligence in order to produce its outsized impact on the world, but it also seems non-trivial to define the intelligence of general patterns of physics. In contrast, influence as defined by the control question is well-defined for arbitrary patterns of physics, although it might be difficult to efficiently predict whether a certain pattern of physics will eventually have a large impact or not.
Conclusion
This post has described the control question, which asks whether, under a given physics, it is possible to set up small patterns that eventually exert significant influence over the configuration of large regions of space. We examined this question in the context of Conway’s Game of Life in order to highlight the significance of either a positive or negative answer to this question. Finally, we proposed the AI hypothesis, which is that any such spatially influential pattern must operate by means of being, in some sense, an AI.
Appendix: Technicalities with the control question
The following are some refinements to the control question that may be needed.
There are some patterns that can never be produced in Conway’s Game of Life, since they have no possible predecessor configuration. To deal with this, we should phrase the control question in terms of producing a configuration that is close to rather than exactly matching a single target configuration.
There are 2^(10^60) possible configurations of the whole grid, but only 2^(10^40) possible configurations of the user-controlled section of the universe. Each configuration of the user-controlled section of the universe will give rise to exactly one final configuration, meaning that the majority of possible final configurations are unreachable. To deal with this we can again phrase things in terms of closeness to a target configuration, and also make sure that our target configuration has reasonably low Kolmogorov complexity.
Say we were to find some pattern A that unfolds to final state X and some other pattern B that unfolds to a different final state Y. What happens, then, if we put A and B together in the same initial state — say, starting in opposite corners of the universe? The result cannot be both X and Y. In this case we might have two AIs with different goals competing for control. Some tiny fraction of random initializations will contain AIs, so it is probably not possible for the control question to have an unqualified positive answer. We could refine the question so that our initial pattern has to produce the desired goal state for at least 99% of the possible random initializations of the surrounding universe.
A region of 10^20 by 10^20 cells may not be large enough. Engineering in Life tends to take up a lot of space. It might be necessary to scale up all my numbers.
Rendell, P., 2011, July. A universal Turing machine in Conway's game of life. In 2011 International Conference on High Performance Computing & Simulation (pp. 764-772). IEEE. ↩︎
There are some configurations of the randomly initialized region that affect the final configuration, such as configurations that contain AIs with different goals. This is addressed in the appendix. ↩︎