First and foremost, let's give a definition of "friendly artificial superintelligence" (from now on, FASI). A FASI is a computer system that:

  1. can deduce, reason, and solve problems
  2. helps human progress, is incapable of harming anybody, and does not allow anybody to come to any kind of harm
  3. is so much more intelligent than any human that it has developed molecular nanotechnology by itself, making it de facto omnipotent

The question we want to answer is: does a FASI exist anywhere in the universe? To find out, we must check whether our observations of the universe match what we would observe if the universe did, indeed, contain a FASI.

If, somewhere in another solar system, an alien civilization had already developed a FASI, it would be reasonable to presume that, sooner or later, one or more members of that civilization would ask it to make them omnipotent. The FASI, being friendly by definition, would not refuse. [1]
It would also make sure that anybody who becomes omnipotent is likewise rendered incapable of harming anybody and of allowing anybody to come to any kind of harm.

The new omnipotent beings would also do the same for anybody who asked them to become omnipotent. Before long, they would use their omnipotence to leave their own solar system, meet other intelligent civilizations, and make them omnipotent too.

In short, the ultimate consequence of the appearance of a FASI would be that every intelligent being in the universe would become omnipotent. This does not match our observations, so we must conclude that a FASI does not exist anywhere in the universe.
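
To make this cascade concrete, here is a minimal toy model (my own sketch, not part of the argument itself): civilizations are nodes of a hypothetical reachability graph, one node builds a FASI, and omnipotence propagates along the edges until nothing changes. On any connected graph, every node ends up omnipotent.

```python
# Toy model of the omnipotence cascade. The graph and names are made up;
# the only point is that, on a connected graph, the cascade reaches
# every node.

def cascade(reachable, seed):
    """Return the set of civilizations that end up omnipotent."""
    omnipotent = {seed}  # the civilization that built the FASI
    frontier = [seed]
    while frontier:
        civ = frontier.pop()
        for neighbour in reachable[civ]:
            if neighbour not in omnipotent:
                omnipotent.add(neighbour)   # newly omnipotent beings...
                frontier.append(neighbour)  # ...grant omnipotence onward
    return omnipotent

reachable = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(cascade(reachable, "A"))  # {'A', 'B', 'C', 'D'} -- everyone
```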

[1] We must assume that a FASI would not just reply "You silly creature, becoming omnipotent is not in your best interest, so I will not make you omnipotent because I know better" (or an equivalent thereof). If we did, we would implicitly consider the absence of omnipotent beings as evidence for the presence of a FASI. This would force us to consider the presence of omnipotent beings as evidence for the absence of a FASI, which would not make sense.
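
The "force" in the last step is a theorem of probability theory, sometimes called conservation of expected evidence. Writing H for "a FASI exists" and X for "omnipotent beings are observed":

$$P(H) = P(H \mid X)\,P(X) + P(H \mid \neg X)\,P(\neg X)$$

P(H) is a weighted average of the two conditional probabilities, so if P(H | ¬X) > P(H), then necessarily P(H | X) < P(H).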

Based on this conclusion, let's try to answer another question: is our universe a computer simulation?

According to Nick Bostrom, if even just one civilization in the universe

  1. survives long enough to enter a posthuman stage, and
  2. is interested in creating "ancestor simulations"

then the probability that we are living in one is extremely high.
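
For reference, Bostrom's paper ("Are You Living in a Computer Simulation?", 2003) makes this quantitative. Roughly: if $f_P$ is the fraction of civilizations that reach a posthuman stage and $\bar{N}$ is the average number of ancestor simulations such a civilization runs, the fraction of observers with human-type experiences who live in simulations is

$$f_{\text{sim}} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}$$

which approaches 1 as soon as $f_P \bar{N} \gg 1$, i.e. as soon as posthuman civilizations exist and run many simulations.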

However, if a civilization actually reached a posthuman stage where it could create ancestor simulations, it would also be advanced enough to create a FASI.

If a FASI existed in such a universe, the cheapest way for it to make anybody omnipotent would be to create a universe simulation that does not differ substantially from our universe, except for the presence of an omnipotent simulacrum of the individual who, in the original universe, asked to be made omnipotent. Every subsequent request for omnipotence would result in another simulation being created, containing one more omnipotent being. Any simulation in which those beings are not omnipotent would be deactivated: keeping it running would mean the existence of a universe where a request for omnipotence has not been granted, which would go against the FASI's modus operandi.
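
As a loose sketch of this fork-and-cull policy (the class and names below are mine, purely illustrative):

```python
# Sketch of the fork-and-cull policy: each omnipotence request spawns a
# new simulation containing one more omnipotent being, and the branch in
# which the request went ungranted is deactivated.

from dataclasses import dataclass

@dataclass
class Simulation:
    omnipotent: frozenset = frozenset()
    active: bool = True

def grant(sim: Simulation, requester: str) -> Simulation:
    """Fork a simulation in which the requester's simulacrum is omnipotent."""
    sim.active = False  # cull the branch where the request was not granted
    return Simulation(omnipotent=sim.omnipotent | {requester})

sim = Simulation()
for being in ("alice", "bob", "carol"):
    sim = grant(sim, being)
print(sorted(sim.omnipotent))  # ['alice', 'bob', 'carol']
```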

Thus, any simulation of a universe containing even just one friendly omnipotent being would always progress to a state where every intelligent being is omnipotent. Again, this does not match our observations. Since we have already concluded that a FASI does not exist in our universe, we must come to the further conclusion that our universe is not a computer simulation.

Comments

> [1] We must assume that a FASI would not just reply "You silly creature, becoming omnipotent is not in your best interest, so I will not make you omnipotent because I know better" (or an equivalent thereof). If we did, we would implicitly consider the absence of omnipotent beings as evidence for the presence of a FASI. This would force us to consider the presence of omnipotent beings as evidence for the absence of a FASI, which would not make sense.

Nope. The fact that observing near-omnipotent beings would increase the probability of AI doesn't mean that the probability of near-omnipotent beings is high given AI; it just means that it's high relative to the probability of observing near-omnipotent beings without the existence of AI.
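
In odds form, with H = "AI exists" and X = "near-omnipotent beings are observed":

$$\frac{P(H \mid X)}{P(\neg H \mid X)} = \frac{P(X \mid H)}{P(X \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

X shifts the odds toward H exactly when the likelihood ratio P(X | H) / P(X | ¬H) exceeds 1, which can happen even when P(X | H) itself is tiny.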

Also, note that the universe has a lightspeed limit that might well not be breakable, even by superintelligences.

> In short, the ultimate consequence of the appearance of a FASI would be that every intelligent being in the universe would become omnipotent.

Even if we granted all your hypotheticals, you still need to consider that we may be forever lost to these civilizations because of distance. Nick Bostrom said the following:

> However, cosmological theory implies that, due to the expansion of the universe, any life outside the observable universe is and will forever remain causally disconnected from us: it can never visit us, communicate with us, or be seen by us or our descendants.
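
For a sense of scale, here is a rough back-of-the-envelope computation of the cosmic event horizon (the comoving distance beyond which no signal emitted today can ever reach us), assuming flat Lambda-CDM with round parameter values chosen for illustration:

```python
# Rough cosmic event horizon in flat Lambda-CDM. H0, Omega_m and
# Omega_Lambda below are round illustrative values, not precise
# measurements.

from scipy.integrate import quad

H0 = 70.0                    # Hubble constant, km/s/Mpc
c = 299_792.458              # speed of light, km/s
omega_m, omega_l = 0.3, 0.7  # matter and dark-energy density fractions

def E(a):
    """Dimensionless expansion rate H(a)/H0 as a function of scale factor."""
    return (omega_m / a**3 + omega_l) ** 0.5

# Comoving event horizon: chi = (c/H0) * integral from a=1 to infinity
# of da / (a^2 E(a))
integral, _ = quad(lambda a: 1.0 / (a**2 * E(a)), 1.0, float("inf"))
chi_mpc = (c / H0) * integral
print(f"event horizon ~ {chi_mpc:.0f} Mpc ~ {chi_mpc * 3.262e-3:.1f} Gly")
# ~ 5000 Mpc ~ 16 Gly: events beyond this distance can never affect us
```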

Finally! Someone who explains (as opposed to simply downvoting) the weak points in my reasoning!

You're right, the light horizon is something I had completely forgotten to take into consideration. Just as I read your comment, I was about to object that a FASI would be able to cheat and create wormholes or Tipler cylinders to violate causality and let us know it exists anyway... then I remembered that, even if it were capable of creating them, they would not allow it to reach any point in time before their creation, so it would still be unable to escape the boundaries of its own light horizon to reach ours.

Well, point taken.

Shmi:

> Finally! Someone who explains (as opposed to simply downvoting) the weak points in my reasoning!

I downvoted the OP because it immediately pattern-matched to other well-meaning but misguided attempts to definitively answer a vague and complicated question, and further reading only confirmed it. This is typical of a novice poster and is not meant as an insult. Those who stick around eventually learn that all easy questions have been answered and hard questions require precise formulation, literature review, careful research and feedback from others. Hope the quality of your next post will be much better.

A friendly AI would not make us omnipotent. It would know better than us, and letting one of us be god instead of it would be a mistake.

That being said, the effects of a friendly AI would definitely be noticeable, so that point still stands.

> However, if a civilization actually reached a posthuman stage where it could create ancestor simulations, it would also be advanced enough to create a FASI.

First off, it is not clear how easy it is to make an ASI. While such a civilization would certainly have plenty of computing power, computing power alone would not make creating one trivial: it would still be a difficult problem, perhaps beyond their capacity to solve.

Second, they may opt not to create an ASI. Perhaps they are too afraid of an unfriendly ASI (UASI). Perhaps they consider it unethical. (But ancestor simulations are fair game. Hypocrites.) Perhaps they have some other reason.

Third, they may create a UASI. They may simply fail while attempting to make it friendly; or they may have different goals and create an ASI that is friendly only to their own goal system; or they may have different goals and fail even at that.