http://blog.regehr.org/archives/546
John Regehr, an associate professor of computer science at the University of Utah, writes about two algorithmic speedups for simulations, Hashlife for Conway's Game of Life and Time Warp for parallel discrete-event simulation, and speculates on the implications for self-aware entities living inside simulations.
Hashlife is a clever optimization that treats the Life grid as a hierarchy of quadtrees. Because the maximum speed of signal propagation in a Life configuration is one cell per step, the future of the centre of a square is fully determined by the square itself, so squares can be memoized by hash code and evolved many steps into the future at once. Hashlife is amazing to watch: it starts out slow, but as the hashtable fills up it suddenly “explodes” into exponential progress. To see it in action, I recommend Golly. Hashlife is one of my ten all-time favorite algorithms.
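To make the trick concrete, here is a minimal sketch of the core recursion in Python. The names (Node, join, successor) and the structure are mine, not Regehr's or Golly's; a real implementation adds arbitrary step sizes, stronger hashing, and garbage collection of its tables, while this toy version only knows how to advance a square's centre by exactly 2**(k-2) generations.

```python
class Node:
    """Canonical quadtree node covering a 2**k x 2**k block of Life cells."""
    __slots__ = ("k", "a", "b", "c", "d", "n", "result")

    def __init__(self, k, a, b, c, d, n):
        self.k = k                                   # level: side length is 2**k
        self.a, self.b, self.c, self.d = a, b, c, d  # NW, NE, SW, SE quadrants
        self.n = n                                   # live-cell count
        self.result = None                           # memoised centre, 2**(k-2) steps ahead

ALIVE = Node(0, None, None, None, None, 1)
DEAD = Node(0, None, None, None, None, 0)

_canon = {}  # hash-consing table: child identities -> the unique parent node

def join(a, b, c, d):
    """Return the unique node one level up with these quadrants, creating it at most once."""
    key = (id(a), id(b), id(c), id(d))
    node = _canon.get(key)
    if node is None:
        node = _canon[key] = Node(a.k + 1, a, b, c, d, a.n + b.n + c.n + d.n)
    return node

def empty(k):
    """The all-dead node of level k."""
    node = DEAD
    for _ in range(k):
        node = join(node, node, node, node)
    return node

def life(centre, *neighbours):
    """Plain B3/S23 rule for one cell."""
    live = sum(x.n for x in neighbours)
    return ALIVE if live == 3 or (live == 2 and centre.n) else DEAD

def life_4x4(m):
    """Base case: one generation of the centre 2x2 of a level-2 (4x4) node."""
    a, b, c, d = m.a, m.b, m.c, m.d
    nw = life(a.d, a.a, a.b, b.a, a.c, b.c, c.a, c.b, d.a)
    ne = life(b.c, a.b, b.a, b.b, a.d, b.d, c.b, d.a, d.b)
    sw = life(c.b, a.c, a.d, b.c, c.a, d.a, c.c, c.d, d.c)
    se = life(d.a, a.d, b.c, b.d, c.b, d.b, c.d, d.c, d.d)
    return join(nw, ne, sw, se)

def successor(m):
    """Centre of m advanced 2**(m.k - 2) generations: the memoised Hashlife result."""
    if m.result is not None:
        return m.result
    if m.n == 0:
        res = empty(m.k - 1)
    elif m.k == 2:
        res = life_4x4(m)
    else:
        a, b, c, d = m.a, m.b, m.c, m.d
        # Nine overlapping level-(k-1) sub-squares, each advanced 2**(k-3) steps.
        c1, c3, c7, c9 = successor(a), successor(b), successor(c), successor(d)
        c2 = successor(join(a.b, b.a, a.d, b.c))
        c4 = successor(join(a.c, a.d, c.a, c.b))
        c5 = successor(join(a.d, b.c, c.b, d.a))
        c6 = successor(join(b.c, b.d, d.a, d.b))
        c8 = successor(join(c.b, d.a, c.d, d.c))
        # A second round of recursion advances another 2**(k-3) steps; repeated
        # sub-squares hit the cache, which is where the speedup comes from.
        res = join(
            successor(join(c1, c2, c4, c5)),
            successor(join(c2, c3, c5, c6)),
            successor(join(c4, c5, c7, c8)),
            successor(join(c5, c6, c8, c9)),
        )
    m.result = res
    return res

if __name__ == "__main__":
    # A 4x4 block with a horizontal blinker in its second row; one step later
    # the centre 2x2 holds two cells of the now-vertical blinker.
    nw = join(DEAD, DEAD, ALIVE, ALIVE)
    ne = join(DEAD, DEAD, ALIVE, DEAD)
    m = join(nw, ne, empty(1), empty(1))
    print(successor(m).n)  # -> 2
```

The payoff is that join and successor canonicalize and memoize every sub-square they ever see, so empty or repetitive regions, and any pattern the simulation has met before, cost almost nothing the second time; that is the “explosion” once the tables warm up.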
Those who have read Greg Egan's Permutation City will find the concept of Hashlife familiar.
Another ingenious simulation speedup (developed, as it happens, around the same time as Hashlife) is Time Warp, which relaxes the synchronization requirements, permitting a processor to run well ahead of its neighbors. This opens up the possibility that a processor will at some point receive a message that violates causality: it needs to be executed in the past. Clearly this is a problem. The solution is to roll back the simulation state to the time of the message and resume execution from there. If rollbacks are infrequent, overall performance may increase due to improved asynchrony. This is a form of “optimistic concurrency” and it can be shown to preserve the meaning of a simulation in the sense that the Time Warp implementation must always return the same final answer as the non-optimistic implementation.
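The rollback machinery is easy to sketch for a single logical process; the class and method names below are made up for illustration, and a real Time Warp system also distributes processes across machines, cancels already-sent messages with anti-messages, and computes Global Virtual Time so that old checkpoints can be reclaimed.

```python
import copy

class LogicalProcess:
    """One optimistic simulation process with state checkpointing and rollback."""

    def __init__(self, initial_state, handler):
        self.state = initial_state      # current simulation state
        self.handler = handler          # handler(state, event) -> new state
        self.lvt = 0                    # local virtual time
        self.processed = []             # (timestamp, event) pairs already executed
        self.checkpoints = [(0, copy.deepcopy(initial_state))]  # saved states

    def receive(self, timestamp, event):
        """Accept a message; if it arrives in this process's past, roll back first."""
        redo = []
        if timestamp < self.lvt:
            redo = self._rollback(timestamp)  # straggler: undo the optimistic future
        for t, e in sorted(redo + [(timestamp, event)], key=lambda p: p[0]):
            self._execute(t, e)

    def _execute(self, t, e):
        self.state = self.handler(self.state, e)
        self.lvt = t
        self.processed.append((t, e))
        self.checkpoints.append((t, copy.deepcopy(self.state)))

    def _rollback(self, t):
        """Restore the newest checkpoint strictly earlier than t; return the undone events."""
        while self.checkpoints[-1][0] >= t:
            self.checkpoints.pop()
        self.lvt, saved = self.checkpoints[-1]
        self.state = copy.deepcopy(saved)
        undone = [p for p in self.processed if p[0] >= t]
        self.processed = [p for p in self.processed if p[0] < t]
        return undone

if __name__ == "__main__":
    # State is just the list of events seen so far; timestamps must be > 0 here.
    lp = LogicalProcess([], lambda state, e: state + [e])
    lp.receive(10, "a")
    lp.receive(30, "c")   # runs ahead optimistically
    lp.receive(20, "b")   # straggler: forces a rollback to t = 10, then a replay
    print(lp.state)       # -> ['a', 'b', 'c'], same as strictly in-order execution
```

Because every straggler forces a rollback and an in-order replay, the final state is identical to what strict timestamp-order execution would have produced, which is the sense in which the optimism preserves the meaning of the simulation.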
Careful not to privilege hypotheses: the Game of Life's fame is not correlated with its suitability as a theory of everything, at least not when you compare it to the zillions of other systems that have very similar properties. So all those non-famous systems are probably just as good.
Philosophical speculation should still be grounded, since we don't have time to consider every possibility ever.
Careful not to fall prey to the Sorites Paradox: where exactly is the line between insufficient and sufficient evidence to form a hypothesis?
I should clarify that by a CGoL-type substrate I meant some kind of 3-D cellular automaton, clearly not the 2-D CGoL itself. Other people, e.g. Stephen Wolfram and Konrad Zuse, have seen fit to speculate at length on this subject.