Large social organizations (such as political systems, large private and public organizations, and markets) are not purely silicon-based, but they seem to exhibit the same sort of semi-omniscience, semi-omnipotence, and social impact that we attribute to AGI and Transformative AI:
Large social organizations also pose the same type of alignment problems that we worry about with purely silicon-based AI:
I believe that this is a worthwhile question because:
I don’t know whether this is a new question, and I assume it has been discussed before. However, after a few hours of searching I haven’t been able to find that discussion on LessWrong or on the wider internet. I also tried asking Stampy, without any luck.