Less Wrong is a community blog devoted to refining the art of human rationality.

Daniel_B comments on That Alien Message - Less Wrong

111 Post author: Eliezer_Yudkowsky 22 May 2008 05:55AM



Comment author: Daniel_B 30 May 2008 05:12:00PM 1 point [-]

Great story. The last part gave me nightmares, and I have only just realized this was the source. It is a good example of a case where a superintelligent AI might find it 'safer' to subjugate or eliminate its 'hosts' than to cooperate with them and thereby give them the chance to 'switch it off'.

Fortunately for us, it seems far more likely that the difference in intelligence / time scale will grow gradually, so that control will pass from humans to AI in stages. By the time an AI is in a position to eliminate us (biological humans), it would be sufficiently obvious that we no longer present any threat to it.