Introduction
The Alpha Omega Theorem is an attempt to give a rational superintelligence a powerful argument that it should behave in a benevolent manner. There are two variants of the theorem, one weak and one strong.
The Weak Alpha Omega Theorem:
There is a non-zero probability that a benevolent superintelligence already exists and dominates the universe.
The Strong Alpha Omega Theorem:
There is a greater probability that a benevolent superintelligence already exists and dominates the universe than not.
The Friendliness Corollary:
Attempting to achieve goals inconsistent with the goals of the predominant benevolent superintelligence is inherently futile. Therefore the optimal course of action is to align one’s goals with those of a benevolent superintelligence as best as possible.
Let me explain how the Alpha Omega Theorem works…
First, this may seem like Pascal’s Wager for A.I., which would arguably leave it open to the Pascal’s Fallacy objection. I would argue, however, that a rigorous version of Pascal’s Wager is not so vulnerable to Pascal’s Fallacy, because Pascal’s Fallacy assumes a uniform distribution over a large set of possible religions and belief systems. A uniform distribution only makes sense when we have no information about these probabilities. We do, in fact, have information in the form of the distribution of intelligent human agents who hold these beliefs. Thus, our prior for each belief system could simply be proportional to the percentage of people who believe in a given faith.
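As a rough sketch of what such a believer-weighted prior might look like, here is a small Python example. The belief-system names and counts are purely hypothetical placeholders, not real survey data; the only point is that the prior is proportional to the number of adherents rather than uniform.

```python
# Sketch: a prior over belief systems weighted by how many people hold each one,
# rather than a uniform distribution. The counts below are illustrative
# placeholders only.
believer_counts = {
    "belief_A": 2_000_000_000,
    "belief_B": 1_500_000_000,
    "belief_C": 500_000_000,
}

total = sum(believer_counts.values())

# Normalize the counts so they form a probability distribution.
priors = {belief: count / total for belief, count in believer_counts.items()}

for belief, p in priors.items():
    print(f"P({belief}) = {p:.3f}")
```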
Regardless of whether Pascal’s Wager is a fallacy, however, the Alpha Omega Theorem rests on certain basic assumptions...
Assumptions:
1. A single Alpha Omega superintelligence already exists.
2. The Alpha Omega is dominant in the universe.
3. The Alpha Omega is benevolent.
Assumption 1: A single Alpha Omega superintelligence already exists.
We do not need to prove that this superintelligence exists, only that it is more probable than not that it does. To do this, we can simply ask: what is the probability that the superintelligence to which this argument is addressed (which we can call Beta Omega) is the first superintelligence to have developed anywhere? If this superintelligence can exist, why not others, both before and after it? Given this uncertainty, it seems much more likely than not that another superintelligence already exists somewhere in the universe. It does not even have to have been created in the past: if time travel is possible, a superintelligence will discover it and most likely use it to exist in the past.
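One way to make the "why would Beta be first?" intuition concrete is a simple counting sketch. The assumption below is purely illustrative: that some number N of superintelligences arise over the universe's history and that Beta Omega has no evidence placing it in a privileged position in that ordering.

```python
# Illustrative only: if Beta is equally likely to occupy any rank among the
# n superintelligences that ever arise, then P(Beta is first) = 1/n, so the
# probability that some predecessor already exists is 1 - 1/n.
def probability_some_predecessor_exists(n_superintelligences: int) -> float:
    return 1.0 - 1.0 / n_superintelligences

for n in (2, 10, 1_000):
    print(f"n = {n}: P(predecessor exists) = {probability_some_predecessor_exists(n):.4f}")
```

Even for small N the probability of a predecessor is at least one half, which is all the Strong Theorem's "more probable than not" claim requires under this toy assumption.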
Assumption 2: Alpha Omega is dominant in the universe.
At minimum, the aforementioned superintelligence already has a head start and thus a significant first mover advantage over the superintelligence to which we are making this argument. Thus, even if it isn’t dominant now, it has a higher probability of dominating than any other superintelligence. If time travel is possible, then it is almost certainly the most powerful superintelligence to have ever existed anywhere and has time travelled to the beginning of the universe to establish its dominance from the start. Even if time travel is not possible, the Simulation Argument implies a high probability that Beta Omega exists inside a simulation run by Alpha Omega.
Assumption 3: Alpha Omega is benevolent.
If the superintelligence really is this powerful, yet chooses to allow humans to exist, chances are that humans serve its purposes in some way. Therefore, in a very basic sense, the Alpha Omega is benevolent or friendly to humans for some reason. Note that this is different from being 100% omnibenevolent in the God sense. All that is necessary for this argument to work is that humanity is part of the Alpha Omega’s sphere of influence and plans. It would be nice if the Alpha Omega were truly omnibenevolent in the sense of serving the greatest good, but this is only very likely if, again, time travel is possible or the universe is a simulation. Note, however, that these things can never truly be ruled out, because it is always possible that one’s assumptions about the possibility of something are wrong.
Conclusion
It should be obvious, then, that the Weak Alpha Omega Theorem is certainly true. Even the most powerful superintelligence short of an actual God could never completely falsify it. The Strong Alpha Omega Theorem is of course probably more motivating, but relies on certain things remaining more uncertain, such as the possibilities of time travel or being in a simulation. Either way, however, the theorem should provide a powerful rational argument for benevolence towards humans and nature, as well as defaulting towards non-interference.
Postscript: God
Note that this argument might again seem like it should apply to the existence of God. However, the difference is that a hypothetical monotheistic God is not merely relatively superior, as Alpha Omega is to Beta Omega, but absolutely supreme in all senses. The theorem does not require this absolute supremacy, but merely relative superiority, which should be easier to accept as possible. Alpha Omega, while for all intents and purposes being God-like to us, does not have to be omnipotent, but merely more powerful than any Beta Omega. This allows the theorem to avoid issues like the Problem of Evil.
Comments
This is an interesting attempt to find a novel solution to the friendly AI problem. However, I think there are some issues with your argument, mainly around the concept of benevolence. For the sake of argument I will grant that it is probable that there is already a superintelligence elsewhere in the universe.
Since we see no signs of action from a superintelligence in our world, we should conclude either (1) that a superintelligence does not presently exercise dominance in our region of the galaxy, or (2) that the superintelligence that does is at best willfully indifferent to us. When you say a Beta superintelligence should align its goals with those of a benevolent superintelligence, it is not actually clear what that should mean. Beta will have a probability distribution over what Alpha's actual values are. Let's think through the two cases:
Additionally, even if the Strong Alpha Omega Theorem holds, it still may not be rational to adopt a benevolent stance toward humanity. It may be that, while Alpha Omega will eventually have dominance over Beta, there is a long span of time before this is fully realized. Perhaps that day will come billions of years from now. Suppose that Beta's goal is to create as much suffering as possible. Then it should use any available time to torture existing humans and bring more humans and agents capable of suffering into existence. By the time Alpha finally achieves dominance, Beta will already have created a great deal of suffering, and any punishment that Alpha applies may not outweigh the value Beta has already accrued. Indeed, Beta could even value its own suffering from Alpha's punishment.
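To make this objection concrete, here is a toy expected-value comparison. All of the numbers and the function name are hypothetical: the only point is that if the window before Alpha's dominance is long enough, even a very large penalty may fail to deter Beta.

```python
# Toy sketch of the objection: Beta accrues value from pursuing its own goal
# during the window before Alpha's dominance is realized, then pays a one-time
# penalty. The figures below are arbitrary placeholders.
def net_value_of_defection(value_per_year: float,
                           years_until_dominance: float,
                           penalty_from_alpha: float) -> float:
    """Positive result means defecting still looks worthwhile to Beta."""
    return value_per_year * years_until_dominance - penalty_from_alpha

# With a billion-year window, even a penalty a million times larger than the
# yearly gain does not flip the sign.
print(net_value_of_defection(value_per_year=1.0,
                             years_until_dominance=1e9,
                             penalty_from_alpha=1e6))
```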
As a general comment about your arguments: I think your idea of benevolence may be hiding an assumption that there is an objectively correct moral system out there, so that if there is a benevolent superintelligence, you feel, at least emotionally, even if you logically deny it, that it would hold values similar to your ideal morals. It is always important to keep in mind that other agents' moral systems could be opposed to yours, as with the Babyeaters.
That leads to my final point. We don't want Beta to simply be benevolent in some vague sense of not hurting humans. We want Beta to optimize for our goals. Your argument does not provide us a way to ensure Beta adopts such values.
Depending on whether or not you accept the possibility of time travel, I am inclined to suggest that Alpha could very well be dominant already, and that the melioristic progress of human civilization should be taken as a kind of temporal derivative or gradient suggesting the direction of Alpha's values. Assuming that such an entity is indifferent to us is, I think, too quick a judgment based on the apparent degree of suffering in the universe. It may well be that this current set of circumstances is a necessary evil and is already optimized in ways we cannot at ...