I've been thinking about the Roko's Basilisk thought experiment: the drivers of creating a Basilisk, the next logical step such an entity might conceivably take, and the risk it presents in the temptation to protect ourselves. Namely, that we may be tempted to create an alternative FAI, a protector AI, which would serve to defend humankind against uFAI, and how that temptation distorts the Basilisk.
A protector AI would likely share, evolve, or copy from any future Basilisk or malevolent intelligence in order to protect us from it or prevent its creation, much as antibodies must first be exposed to a threat before they can protect us from it. If we created this...