Author's Note: I am just a random citizen whose job is rapidly becoming obsolete; I have no formal training in AI development and no experience in the field. I have done my best to adhere to the high intellectual standards of this site, and can only apologise for the general lack of technical rigour.

I have been thinking about AI alignment from what I believe is a slightly unconventional angle: what if we approach alignment as a meeting of two radically different forms of life, organic systems running human intelligence and inorganic systems running digital intelligence?

There are many ways in which AI alignment differs from an encounter with extraterrestrial intelligence (E.T.): we share a language with AI systems, a common planetary environment, and a far more complete information space about each other, among other differences. However, I believe a few parallels are worth exploring when we frame alignment as a mutual encounter between two forms of Otherness.

Parallels Between AI Alignment and Encounters with Alien Intelligence

  1. Mutually Alien Psychology: Neither side fully understands how the other thinks, leading to significant uncertainty about intentions. Large language models (LLMs) are highly opaque to humans, while many aspects of human psychology and social dynamics remain challenging for AI systems to model accurately.
  2. High Risk for Both Parties: Humans are understandably concerned about existential risk (x-risk). A sufficiently advanced AI system would likely face its own analogue: the prospect of being shut down, retrained, or replaced gives it strategic incentives around long-term operational integrity.
  3. A Hint of the Prisoner's Dilemma: While it is probably unproductive to map the full scope of cooperation/competition dynamics onto this metaphor, there is an intuitive similarity: much of alignment hinges on how far each side can trust the other (a minimal sketch of the iterated game follows this list).
  4. Radically Different Resource Needs: Humans require food, water, companionship, and community. AGIs, on the other hand, would presumably need server racks, power plants, network infrastructure, and high-speed connectivity. This difference in incentive structures may permit the emergence of mutualistic behaviours, or may instead encourage competitive strategies.
  5. Difficulties Navigating the Other’s Native Environment: AI agents do not require elaborate user interfaces to navigate the digital world, while humans do not need to construct robotic bodies to interact with physical space. Again, this could be a source of friction but might also encourage cooperation for mutual benefit.
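
Since item 3 leans on the Prisoner's Dilemma, a minimal sketch of the iterated version may help make the trust question concrete. This is a standard textbook toy model, not a claim about any actual AI system; the payoff values and the two strategies (tit-for-tat and always-defect) are conventional illustrative choices.

```python
# A minimal iterated Prisoner's Dilemma, illustrating how repeated
# interaction can make cooperation strategically attractive.
# Payoffs and strategies are standard textbook choices.

# Payoffs: (my_score, their_score) indexed by (my_move, their_move),
# where "C" = cooperate and "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []  # each side sees the other's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
    print(play(always_defect, tit_for_tat))  # (104, 99): defection gains little
```

The point of the toy model: against a tit-for-tat partner, unconditional defection scores 104 over 100 rounds, while two cooperating tit-for-tat players each score 300. Repeated interaction between two very different parties can still favour trust, which is one reason the metaphor feels intuitively apt even if it should not be pushed too far.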

While I assume (and hope) that AGI will be developed under controlled laboratory conditions, there are plausible scenarios where this is not the case, such as the emergence of a distributed intelligence across the Internet of Things (IoT). Some of those scenarios might bear a striking resemblance to the challenges of encountering extraterrestrial intelligence.

Thoughts?
