I am working on a project that estimates the density of alien civilizations, estimates their expected utility, and argues for the resulting strategic implications. I am optimistic that the project will produce valuable content to inform AI safety strategy, but I want to probe what the community thinks beforehand.

Main question:

Assumptions:

  • Alien and Earth-originating space-faring civilizations produce similar expected utility [1], where utility is computed using our CEV.
  • Alien space-faring civilizations are common enough that there are at least several of them per affectable light cone, so that very few resources are left unused.

Question:

  • Given the assumptions, what would the strategic implications be for the AI safety community?

Secondary question:

  • Setting the assumptions above aside: what are your arguments for expecting alien space-faring civilizations to have similar, lower (e.g., zero), or higher expected utility than a future Earth-originating space-faring civilization?

Note: Please reach out to me by private message if you want to contribute or provide feedback on the project.

  1.

    I am talking about the expected utility per unit of controlled resources, after considering everything. This is the “final” expected utility created per unit of resource. As an illustration, it accounts for the impact of trades and conflicts, causal or acausal, happening in the far future.
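
    Written out, this is roughly

    $$\bar{u} \;=\; \frac{\mathbb{E}\!\left[U_{\mathrm{CEV}}\right]}{R},$$

    where $R$ is the quantity of resources the civilization ends up controlling and the expectation is taken over everything downstream, including causal and acausal trades and conflicts.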

Answers:

WilliamTrinket


If your two assumptions hold, takeover by misaligned AGIs that go on to create space-faring civilizations looks much worse than existential disasters that simply wipe humanity out (e.g., nuclear extinction). In the latter case, an alien civilization will soon claim the resources humanity would have taken over had we become a space-faring civilization, and, by your first assumption, it will create just as much value with those resources as we would have. Assuming that all civilizations eventually come to an end at some fixed time (see this article by Toby Ord), some potential value is still lost to the delay before the aliens arrive, but far less than would be lost if a misaligned AI used all the resources in our future light cone for something valueless while excluding aliens from using them.

So the main strategic implication is that we should try to build fail-safes into AGI that prevent it from becoming grabby in the event of alignment failure.
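
To make the comparison concrete, here is a minimal toy model of the three outcomes. Every number in it is a made-up placeholder chosen purely for illustration (the 10% delay fraction in particular), not something claimed in the question or in this answer:

```python
# Toy comparison of the three outcomes discussed above.
# Every number here is a made-up placeholder for illustration only.

def realized_value(resources, utility_per_unit, usable_fraction):
    """Value created = resources claimed * value per unit resource,
    scaled by the fraction of the usable window (everything ends at
    some fixed time) during which the resources are actually used."""
    return resources * utility_per_unit * usable_fraction

LIGHT_CONE_RESOURCES = 1.0  # normalize our affectable light cone to 1
HUMAN_UTILITY = 1.0         # value per unit resource for an aligned Earth civilization
ALIEN_UTILITY = 1.0         # first assumption: aliens create similar value per unit resource
ALIEN_DELAY_FRACTION = 0.1  # placeholder: share of the usable window lost while
                            # waiting for an alien civilization to reach our resources

scenarios = {
    # An aligned Earth-originating civilization uses the whole light cone.
    "aligned future": realized_value(LIGHT_CONE_RESOURCES, HUMAN_UTILITY, 1.0),
    # Humanity is simply wiped out; aliens later claim the same resources
    # and create similar value, minus the delay.
    "extinction, aliens fill in": realized_value(
        LIGHT_CONE_RESOURCES, ALIEN_UTILITY, 1.0 - ALIEN_DELAY_FRACTION),
    # A misaligned grabby AGI spends the resources on something valueless
    # and excludes aliens from using them.
    "misaligned grabby AGI": realized_value(LIGHT_CONE_RESOURCES, 0.0, 1.0),
}

for name, value in scenarios.items():
    print(f"{name:28s} -> realized value {value:.2f}")
```

Under the two assumptions, the extinction scenario loses only the slice of value corresponding to the delay before aliens reach our resources, while the grabby-misalignment scenario loses the entire light cone.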