
There's a lot of discussion of evolution as an example of inner and outer alignment, usually framed as an alignment failure.

However, we could instead view the universe as the outer optimizer, one that maximizes entropy, or power, or intelligence. From this view, both evolution and humans are inner optimizers, and the difference between evolution's optimization target and ours is more of an alignment success than a failure.

Before evolution, the universe increased entropy by having rocks in space crash into each other. When life and evolution finally came around, life was far more effective than rock collisions at increasing entropy, even though entropy isn't in evolution's optimization target. If there were an SGD loop around the universe maximizing entropy, it would choose the evolution mesa-optimizer over the crashing rocks.

Compared to evolution, humans optimizing to win wars, make money, and become famous were once again far better at increasing entropy. Entropy is still not in the optimization target, but an entropy-maximizing SGD around the universe would choose the human mesa-optimizer over evolution.
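To make the selection argument concrete, here is a minimal toy sketch (my own illustration, not anything from the post): an outer loop that picks among candidate inner optimizers purely by the entropy they produce as a side effect. The candidate functions and their numbers are made up; the point is only that the selection criterion never looks at the inner objectives themselves.

```python
import random

# Toy sketch: an outer selection process that scores candidate inner optimizers
# only by how much entropy they produce as a side effect, even though none of
# them has entropy in its own objective. All values below are made up.

def rocks(step):
    # Inert matter: collisions release a small, fixed amount of entropy.
    return 1.0

def evolution(step):
    # Replicators optimizing genetic fitness; entropy is only a side effect.
    return 10.0 + random.random()

def humans(step):
    # Agents optimizing wealth, status, and winning wars; even more entropy
    # as a side effect.
    return 100.0 + random.random()

def entropy_produced(inner_optimizer, steps=1000):
    # The outer objective: total entropy produced over some horizon.
    return sum(inner_optimizer(t) for t in range(steps))

candidates = {"rocks": rocks, "evolution": evolution, "humans": humans}

# The outer optimizer keeps whichever mesa-optimizer scores highest on the
# outer objective, regardless of what that mesa-optimizer itself cares about.
selected = max(candidates, key=lambda name: entropy_produced(candidates[name]))
print(selected)  # -> "humans"
```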

Importantly, from this view, humans not caring about genetic fitness is no longer an alignment failure. The mesa-optimizer running on our values is more aligned with the outer objective than evolution was, so it's good that we spread our ideas and influence rather than our genes.
