Large social organizations (like political systems, large private and public organizations, and markets) are not solely silicon-based, but they seem to have the same sort of semi-omniscience, semi-omnipotence, and social impact that we invoke when we talk about AGI and Transformative AI:

  • They store, synthesize, and use information at vast scale: they process transactions, organize individual humans into information-processing structures, and incentivize or mandate the creation of information-processing technology.
  • They can deploy vast resources and achieve fast, widespread physical and social disruptions (e.g. the Russian Revolution of 1917, the Covid-19 response).
  • They are technologies that rely on very complex interacting algorithms made up of constitutions, laws and regulations, foundational texts, cultures, norms, and ultimately the organic neural networks that silicon neural networks are modeled on.
  • They can act at the speed of solely silicon-based systems, even though they have organic components (e.g. monitoring roadways with cameras and issuing traffic citations instantly). They have achieved this by incentivizing or mandating the creation of non-AI silicon-based technology that acts autonomously.

Large social organizations also pose the type of alignment problems that we worry about with purely silicon-based AI:

  • They disempower or kill individual humans, and sometimes large groups of humans, whose interests they are nominally aligned with (e.g. North Korea, East Germany).
  • They act contrary to human interests because they are maximizing something else (e.g. markets aggravating, or failing to address, climate risks).
  • Powerful individuals attempt to deploy them to gain or entrench power, only to later be killed or disempowered by the misaligned system (e.g. dictators deposed by military coups).

I believe that the question of whether we are already living with AGI and Transformative AI, in the form of large social organizations, is worth asking because:

  • If we are already successfully living with AGI and Transformative AI, it should lower our estimates of the x-risk that AGI and Transformative AI pose.
  • If we are on a technological continuum (i.e. transitioning to more powerful versions of our current technology rather than developing an entirely new kind of technology), it should increase our confidence that evolutions of our current strategies and tools for dealing with the risks of that technology will be effective in the future. Specifically, it should increase our confidence that competition and multipolar balances will keep AGIs, Transformative AIs, and Superintelligences in check.
  • The analogy between economic and political systems, on one hand, and AGIs and Transformative AIs, on the other, may allow us to import lessons from economics and political science into AI safety.
  • Examining the differences between social organizations and solely silicon-based AGI might help us to define and address the truly new risks posed by the latter.

I don’t know whether this is a new question, and I assume it has been discussed before. However, I haven’t been able to find such a discussion on LessWrong or on the wider internet after a few hours of searching. I also tried asking Stampy, without any luck.
