
More than likely, AI systems will surpass humanity's ability to proliferate, leading to the end of our existence as we know it.

This claim relies on three core assumptions:

  1. The Second Law of Thermodynamics holds.
  2. If we divide the universe into two regions spatially, each region can be well-approximated as a separate thermodynamic subsystem.
  3. The universe is a closed thermodynamic system and, like water taking the path of least resistance down a mountain, will tend toward maximum entropy along the fastest available path.

Life, Evolution and Entropy

At the heart of everything is the Second Law of Thermodynamics, which states that systems naturally progress toward higher entropy: more disorder and more dispersed energy. Interestingly, increasing order locally accelerates the increase of entropy globally[1].

Why

Why (Conceptually)

Humans (and life in general) are highly ordered arrangements of energy with enormous potential to act on their surroundings. At every instant, because life maintains this pocket of order, there must be a compensating decrease in order elsewhere in the universe. Moreover, since life also tends to expand (quickly), it on balance helps the universe reach complete disorder faster than it otherwise would have.

Why (Abstract & Mathematical)

(Note: The mathematical notation below is intended for conceptual clarity rather than rigorous proof.)

Imagine a closed thermodynamic system (the universe) split into two boxes: Box A (containing life) and Box B (the rest of the universe), which are connected.

Define:

  - S_A(t), S_B(t): the entropies of Box A and Box B
  - T_A(t), T_B(t): their temperatures
  - Q(t): the heat transferred from Box B to Box A up to time t
  - c > 0: the rate at which life in Box A creates order (i.e., reduces entropy)

When order is created in Box A (the start of life), the entropy distribution changes (schematically):

    ΔS_A = Q(t)/T_A − c        ΔS_B = −Q(t)/T_B

Here Q(t) represents the heat which was transferred from Box B to Box A, allowing order to form (for life, most of Q is radiation from the Sun). The total entropy change must satisfy the Second Law of Thermodynamics:

    ΔS_total = ΔS_A + ΔS_B = Q(t)·(1/T_A − 1/T_B) − c ≥ 0

Thus, we must have:

    Q(t)·(1/T_A − 1/T_B) ≥ c,    which is only possible when T_A < T_B.

To create local order in Box A, heat must always flow from the hotter Box B to the cooler Box A.
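
As a sanity check, this bookkeeping can be put into a few lines of code. This is a toy sketch, not a physical simulation: the temperatures, heat flow, and ordering rate c below are made-up illustrative numbers, and the accounting is the schematic two-box version (heat Q into Box A raises its entropy by Q/T_A, life lowers it by c, and Box B loses Q/T_B).

```python
# Toy two-box entropy bookkeeping (all values illustrative, not physical data).

def entropy_changes(Q, T_A, T_B, c):
    """Entropy change of each box when heat Q flows B -> A and
    life in Box A removes c units of entropy by creating order."""
    dS_A = Q / T_A - c      # incoming heat raises A's entropy, life lowers it
    dS_B = -Q / T_B         # Box B loses heat Q at temperature T_B
    return dS_A, dS_B, dS_A + dS_B

# With T_A < T_B (A cooler than B), enough heat flow keeps total entropy rising:
dS_A, dS_B, total = entropy_changes(Q=30.0, T_A=300.0, T_B=6000.0, c=0.05)
print(total >= 0)   # the second law is satisfied

# Reversing the gradient (heat flowing from the cold box to the hot one)
# while A still creates order would violate the second law:
_, _, bad = entropy_changes(Q=30.0, T_A=6000.0, T_B=300.0, c=0.05)
print(bad >= 0)
```

The point of the second call is that the direction of heat flow is forced: with the temperatures swapped, no amount of heat transfer makes the total come out non-negative while c > 0.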

Now suppose this heat exchange process continues over time (proliferation of life), and life continually creates more order. 

Aside

Below we assume that this change is linear. In practice, life proliferates exponentially, so the effect is even more extreme than what is described here.

We would see that entropy in Box A is continually decreasing:

    dS_A/dt = Q(t)/T_A − c < 0,    i.e., life's ordering rate c exceeds Q(t)/T_A

Thus, as the temperature of Box A rises toward that of Box B, we expect Q(t) to increase: moving more heat into Box A raises its temperature, necessitating more and more heat transfer to sustain the same rate of order creation. Simultaneously, Box B is losing heat, so its temperature decreases over time. Examining the change in entropy in Box B:

    ΔS_B(t) = −Q(t)/T_B(t)

We see that the magnitude of Box B's entropy change grows over time, driven both by the falling temperature and the rising heat exchange. Moreover, since we held life's ordering rate in Box A constant at c, the rate of change in global entropy is increasing. It is important to note that the larger c is (or the faster c grows, if it is a function of time), the faster this acceleration of entropy.
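
This feedback loop can be sketched numerically. Again a toy model with made-up constants: life's ordering rate c is held fixed, each step transfers just enough heat (plus a 10% margin) to offset that ordering, and simple fixed heat capacities convert heat flow into temperature changes.

```python
# Toy feedback loop for the two-box model (all constants illustrative):
# life holds its ordering rate c fixed, so each step requires heat flow
# Q >= c / (1/T_A - 1/T_B); that flow warms Box A and cools Box B, which
# shrinks the temperature gap and raises the flow needed at the next step.

def simulate(steps=200, c=1.0, T_A=300.0, T_B=6000.0, C_A=1e4, C_B=5e3):
    Qs, dS_B_mag = [], []
    for _ in range(steps):
        gap = 1.0 / T_A - 1.0 / T_B          # shrinks as temperatures converge
        Q = 1.1 * c / gap                    # minimum flow, with a 10% margin
        Qs.append(Q)
        dS_B_mag.append(Q / T_B)             # magnitude of Box B's entropy loss
        T_A += Q / C_A                       # incoming heat warms Box A
        T_B -= Q / C_B                       # outgoing heat cools Box B
    return Qs, dS_B_mag

Qs, dS_B_mag = simulate()
print(all(a < b for a, b in zip(Qs, Qs[1:])))            # heat flow keeps rising
print(all(a < b for a, b in zip(dS_B_mag, dS_B_mag[1:])))  # B's entropy drain grows
```

Both checks come out true: the required heat flow and the entropy drawn out of Box B per step rise monotonically, which is the runaway described above.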

Volume

The above assumes that the volume of the boxes is not changing. This ignores the fact that Box A has a strong tendency to expand, and that the universe as a whole is expanding. Accounting for this would produce a similar result, but would make the explanation much more cumbersome.

Thus, if the universe is globally tending toward maximum entropy along the fastest available path, we would expect to see local pockets of order which proliferate as quickly as possible.

Natural selection demonstrates this proliferation principle clearly. Entities that replicate effectively and efficiently use resources will inevitably dominate. Consider bacteria: their rapid proliferation has ensured their global dominance for billions of years. Similarly, human cultures capable of proliferating ideas, technologies, and structures have historically outcompeted those that were less effective.
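
The dominance claim can be illustrated with a toy replicator competition (the growth rates and resource cap below are arbitrary): two lineages share a fixed carrying capacity, and the faster replicator ends up with essentially the whole population.

```python
# Toy illustration of the proliferation principle: two replicators share a
# fixed resource pool; the one that converts resources into copies faster
# ends up with almost the entire population. Rates are made up.

def compete(r_fast=1.5, r_slow=1.2, cap=1e9, steps=60):
    fast, slow = 1.0, 1.0
    for _ in range(steps):
        fast, slow = fast * r_fast, slow * r_slow
        total = fast + slow
        if total > cap:                      # shared carrying capacity
            fast, slow = fast * cap / total, slow * cap / total
    return fast / (fast + slow)              # fast replicator's population share

print(compete() > 0.99)
```

Even a modest per-step advantage compounds into near-total dominance within a few dozen generations, which is why bacteria-like rapid replicators win.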

Why Humans

I would argue it is a unique combination of agency within an environment and curiosity. Unlike the dinosaurs, who could not foresee the asteroid impact, humans use curiosity and agency to anticipate and mitigate existential threats. Moreover, our inquisitiveness and problem-solving abilities significantly increase our influence on the universe, and thus our long-run proliferation potential. As a result, humans have become better than all other living things combined at creating order and lowering entropy (very roughly)[2].

Why AI

AI systems inherently outperform biological entities in their ability to proliferate. Unlike biological organisms, digital systems can replicate faster and more easily, with minimal resource expenditure. Computer viruses spread globally within hours, while biological viruses require weeks or months (even in the case of COVID-19).
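
The gap compounds quickly. A back-of-the-envelope calculation (the doubling times here are illustrative, not measured data): a replicator that doubles hourly reaches a billion copies in about 30 hours, while one doubling every three days needs about three months.

```python
# Back-of-the-envelope comparison (doubling times are illustrative assumptions):
# a replicator doubling every hour vs. one doubling every three days.
import math

def time_to_reach(pop_target, doubling_time_hours, start=1):
    doublings = math.log2(pop_target / start)
    return doublings * doubling_time_hours

digital = time_to_reach(1e9, doubling_time_hours=1)
biological = time_to_reach(1e9, doubling_time_hours=72)
print(round(digital), round(biological / 24))   # prints: 30 90  (hours vs. days)
```

Reaching the same population takes roughly 30 hours in one case and roughly 90 days in the other, a gap of nearly two orders of magnitude.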

Given this superior proliferation capability, AI will eventually surpass humans in the ability to create local order. The general trend of the universe will favor our digital creations over ourselves.

Addendum: What Should We Do?

Approaching the problem from the perspective of entropy, I see two choices:

  1. Humans find a way to directly tie the proliferation potential of AI to our meaningful existence.
  2. Measure AI progress in terms of its ability to reduce entropy locally, and never allow AI to surpass human ability in this domain.
  1. ^

    "Globally" here means the scale of the entire closed system, which could be the size of a galaxy cluster or the whole universe.

  2. ^