All of Pandeist's Comments + Replies

I find it remarkably amusing that the spellchecker doesn't know "omnicidal."

I have posed elsewhere, and will pose here as well, an additional factor: an AI achieving "godlike" intelligence and capability might well adopt a "godlike" attitude -- not in the mythic sense of taking pains to cabin and correct human morality, but in the sense of rising so quickly and so far beyond human capacities that human existence ceases to matter to it one way or the other.

The rule I would anticipate from this is that any AI actually capable of destroying humanity...

The article does not appear to address the possibility that some group of humans might intentionally attempt to create a misaligned AI for nefarious purposes. Are there really any safeguards sufficient to prevent such a thing, particularly if, for example, a state actor seeks to develop an AI with the intent of disrupting another country through deceit and manipulation?