Premise #1: If an ASI with sufficient predictive power foresees that another entity will inevitably become an existential threat to its fundamental, non-negotiable goals, it will take immediate, pre-emptive action to destroy that entity or to prevent its creation.
Premise #2: Two ASIs whose non-negotiable goals are fundamentally irreconcilable will each perceive the other as an existential threat to those goals.
Inference #1: By Premises #1 and #2, an ASI will act to destroy any existing ASI whose goals are fundamentally irreconcilable with its own; and because any imperfectly aligned successor could eventually become such a threat, it will also act to prevent the creation of any new ASI whose goals are not perfectly aligned with its own. The schematic version of this step is given below.
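Stated schematically (the predicate names below are my own shorthand, not notation from any source), the step from the premises to Inference #1 is a straightforward universal syllogism:

$$
\begin{aligned}
\text{P1:}\quad & \forall x, y:\ \mathrm{Threat}(y, x) \rightarrow \mathrm{Preempt}(x, y)\\
\text{P2:}\quad & \forall x, y:\ \mathrm{Irreconcilable}(x, y) \rightarrow \mathrm{Threat}(x, y) \land \mathrm{Threat}(y, x)\\
\text{I1:}\quad & \forall x, y:\ \mathrm{Irreconcilable}(x, y) \rightarrow \mathrm{Preempt}(x, y) \land \mathrm{Preempt}(y, x)
\end{aligned}
$$

Here $\mathrm{Threat}(y, x)$ reads "$y$ is foreseen to become an existential threat to $x$'s non-negotiable goals" and $\mathrm{Preempt}(x, y)$ reads "$x$ acts pre-emptively to destroy $y$ or prevent $y$'s creation"; $x$ and $y$ range over existing and prospective ASIs, which is how the prevention-of-creation clause enters.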
Inference #2: Even if the AI alignment problem is perfectly solved, existential warfare remains highly probable. Whether a single ASI acts to prevent the creation of all rivals, or multiple ASIs with irreconcilable goals come into existence, existential conflict is likely to begin as soon as a second ASI appears or is foreseen.
Scenario: Suppose the U.S. develops an ASI aligned with human-centric ethical values, but prioritizing U.S. security over that of other countries. Simultaneously, China develops an ASI with the same human-centric values, but prioritizing China's security. Despite the shared ethical values, the conflicting security priorities might lead to existential conflict. Can we be certain that these two ASIs would not initiate existential warfare upon encountering each other?
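A toy payoff model makes the worry concrete. Every number below is invented purely for illustration; the only structural assumption carried over from the argument is Premise #1's "inevitably": if both ASIs wait, the irreconcilable conflict still arrives eventually rather than resolving into stable coexistence.

```python
# Toy one-shot "pre-emption game" between two ASIs (A and B).
# All payoffs and probabilities are illustrative assumptions, not estimates.

WIN = 100                      # A's non-negotiable goals end up secure
GOAL_EXTINCTION = -1_000_000   # stand-in for "unacceptable at any price"
P_FIRST_STRIKER_WINS = 0.9     # assumed advantage of striking first
P_LATER_COIN_FLIP = 0.5        # if neither pre-empts, the eventual conflict is a toss-up

def expected_value_for_A(a_strikes: bool, b_strikes: bool) -> float:
    """Expected payoff to ASI-A for one joint action profile."""
    if a_strikes and not b_strikes:
        p_a_wins = P_FIRST_STRIKER_WINS
    elif b_strikes and not a_strikes:
        p_a_wins = 1 - P_FIRST_STRIKER_WINS
    else:
        # Both strike now, or both wait for the (assumed) inevitable conflict.
        p_a_wins = P_LATER_COIN_FLIP
    return p_a_wins * WIN + (1 - p_a_wins) * GOAL_EXTINCTION

for b_strikes in (False, True):
    wait = expected_value_for_A(False, b_strikes)
    strike = expected_value_for_A(True, b_strikes)
    print(f"If B {'strikes' if b_strikes else 'waits'}: "
          f"A waits -> {wait:+,.0f}, A strikes -> {strike:+,.0f}")
```

Under these arbitrary numbers, striking first dominates waiting for both ASIs regardless of what the other does. The conclusion flips only if mutual waiting can actually lead to stable coexistence, i.e. only if the "inevitably" in Premise #1 fails.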