Thomas comments on Brief question about Conway's Game of Life and AI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (47)
OTOH ... whatever the initial condition is, sooner or later we reach a cycle. In the GoL, every evolution eventually becomes periodic. The cycle might be very trivial, such as an empty table, or it might be a long, complex cycle, but one way or the other it always happens, sooner or later, in the Game of Life.
We may stare at the empty plane and ask ourselves whether this is the graveyard of a superintelligence that once lived here, conquered the plane for a brief time, and then vanished in a collapse. A few gliders and roses could be all that remains, like dry fossils.
Everything is uniquely defined by the initial pattern.
Or, we could find that the playing field stabilizes to something that can easily be interpreted as a superintelligence's preferred state - perhaps with the field divided into subsections in which interesting things happen in repeated cycles, or whatever.
Interesting question: why does this (intuitively and irrationally) seem to me like a sadder fate than something like heat death?
Because it takes the meaning out of the accomplishment? In this scenario, there might be something interpretable as a superintelligence that exists at some point before the scenario settles into repeating, but the end state still seems to be caused more by the initial state than by the superintelligence.
Alternately, it could be because you value novelty, and the repeating nature of the stabilized field precludes that in a way that's more emotionally salient than heat death.
But this is true of the heat death of the universe, too, eventually...
The only long-term scenario I know that avoids both this outcome and heat death involves launching colonies into the "dust" as Greg Egan described. Unfortunately the assumptions required for that to work may well turn out to be false. There's no known law of nature saying we can't be trapped in the long term. And if we are trapped, I think I prefer a repeating paradise to heat death.
I don't know. Heat death seems a lot sadder to me, in part because I know that there's at least one universe where it will probably happen. Maybe you are just more used to the notion of heat death and so have digested that sour grape but not this one?
I wonder why a rational consequentialist agent should do anything but channel all available resources into the instrumental goal of finding a way to circumvent heat death. Mixed strategies are obviously suboptimal, since the expected utility of circumventing heat death is infinite.
A glider isn't a cycle: it translates itself. A glider gun isn't a cycle either, since it creates arbitrarily many gliders. So it's possible not to end in a cycle in interesting ways as well.
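The glider's non-periodicity is easy to check by direct simulation. Below is a minimal sketch (the helper name `step` and coordinate convention are my own, not from the thread) that runs a glider on an unbounded plane and confirms that after 4 generations the pattern is its own shape translated by (1, 1) — so the global state never literally repeats:

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; cells is a set of live (x, y)
    coordinates on an unbounded plane."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and it is currently live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = set(glider)
for _ in range(4):
    state = step(state)

# Same five cells, shifted one step down-right: a translation, not a cycle.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```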
On a higher level, since Life is Turing-complete, it's perfectly possible that the game state ends in an infinite computation of pi to higher and higher precision, and as a result never repeats (or, you know, anything else could happen).
But GoL on a finite board has only finitely many possible states, and since the update rule is deterministic, it must therefore eventually enter a cycle, by the pigeonhole principle.
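The pigeonhole argument can be seen directly by hashing each state until one repeats. A minimal sketch, assuming a toroidal (wrap-around) w-by-h board — one common way to make the board finite, though the thread doesn't specify boundary conditions:

```python
from collections import Counter

def step(cells, w, h):
    """One GoL generation on a w-by-h torus; cells is a frozenset of (x, y)."""
    counts = Counter(
        ((x + dx) % w, (y + dy) % h)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return frozenset(
        c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)
    )

def find_cycle(cells, w, h):
    """Return (preperiod, period). Guaranteed to terminate: there are only
    2**(w*h) possible states, so some state must eventually recur."""
    seen = {}
    gen = 0
    while cells not in seen:
        seen[cells] = gen
        cells = step(cells, w, h)
        gen += 1
    return seen[cells], gen - seen[cells]

# A blinker oscillates with period 2 from the very first generation.
blinker = frozenset([(1, 2), (2, 2), (3, 2)])
print(find_cycle(blinker, 8, 8))  # (0, 2)
```

Even the glider becomes periodic here: on an 8x8 torus each 4-generation translation by (1, 1) wraps around after 8 repetitions, so the glider returns to its exact starting cells after 32 generations.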