ChristianKl comments on Request for concrete AI takeover mechanisms - Less Wrong

Post author: KatjaGrace 28 April 2014 01:04AM


Comment author: ChristianKl 28 April 2014 12:47:03PM 0 points

Fighting an unfamiliar animal puts you in a position of poor information. An AGI is well informed and can choose better strategies. Destroying Silicon Valley makes the AGI visible and reveals that it's a threat.

Comment author: chaosmage 28 April 2014 01:15:16PM 0 points

Why would an AGI consider itself to be well informed?

In order to decide whether its information is adequate, it would logically have to model aspects of its environment and test how well those models perform. I'm pretty sure it would find it can predict the behavior of stones, trees, or insects much more reliably than it can predict the behavior of the human species. And in a scenario where it is trying to take over, what else could it be trying to do except reduce the unpredictability of its environment?

Of course it'd avoid visibility, because it can predict situations where the environment is responding to a novel stimulus (the visibility of an AGI) less reliably than situations where it isn't. I recognize my use of the term "destroy" implied some primitive, heavy-handed means, which of course makes no sense. Perhaps "neutralize" would have been better.

Comment author: ChristianKl 28 April 2014 02:50:14PM 1 point

Why would an AGI consider itself to be well informed?

Because getting informed is one of the tasks that is relatively easy for an AGI.