And there is another issue that was not much discussed (although the article does address the short-term risks of military uses of AI, etc.), and which concerns me: humans can easily do stupid things. So even if there are ways to mitigate the possibility of rogue AIs arising from value misalignment, how can we guarantee that no single human will act stupidly (or, more likely, greedily for their own power) and unleash dangerous AIs on the world?
My take:
I am not sure we want to live in a world in which freedom is reduced to the extent that warrants or ensures that the possibility that humans might do stupid things...