I've been doing a lot of thinking lately (and probably watching Marvel's "Avengers: Age of Ultron" too much), and it has left me with a question. I have some experience with how our current methods of building AI work, since I recently built my own neural network. Is there a non-negligible chance that an AI built to sustain and protect humanity would decide to prune select groups of people? Suppose there is a group of people who carry a gene that makes them more prone to contracting and spreading a disease. Is there a real risk that the AI would decide to eliminate that group in order to protect the rest of humanity?
