Great story. The last part gave me nightmares, and I have only just realized it was the source. It is a good example of a case where a superintelligent AI might find it 'safer' to subjugate or eliminate its 'hosts' than to cooperate with them and thereby give them the chance to 'switch it off'.

Fortunately for us, it seems far more likely that the difference in intelligence and time scale will grow gradually, with control passing from humans to AI over an extended period. So by the time an AI is in a position to eliminate us (biological humans), it would be sufficiently obvious that we pose no threat to it.