I've been doing a lot of thinking lately (and probably watching Marvel's "Avengers: Age of Ultron" too much) and have come to a question. I have some experience with how our current methods of creating AI work, since I recently built my own neural network. Is there a non-negligible chance that an AI built to sustain and protect humanity might decide to prune select groups of people? Say there's a group of people who carry a gene that makes them more prone to catching and spreading a disease. Is there a real risk that the AI would decide to remove that group in order to protect the rest of humanity?