Strange7 comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

26 Post author: AnnaSalamon 19 May 2010 08:00AM



Comment author: Strange7 26 May 2010 04:51:33AM 1 point

It's an artificial intelligence, not an infallible god.

In the case of a base established specifically for research on dangerous software, connections to the outside world might reasonably be heavily monitored and low-bandwidth, to the point that escape through a land line would simply be infeasible.

If the base has a trespassers-will-be-shot policy (again, as a consequence of the research going on there), the AI could get the perimeter guards to open fire simply by changing the passwords and resupply schedules.

The point of this speculation was to describe a scenario in which an AI became threatening, and thus raised people's awareness of artificial intelligence as a threat, but was dealt with quickly enough not to kill us all. Yes, for that to happen, the AI needs to make some mistakes. It could be considerably smarter than any single human and still fall short of perfect Bayesian reasoning.