drnickbone comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky, 11 May 2012 04:31AM




Comment author: drnickbone, 11 May 2012 11:32:01PM

One simple observation is that a "tool AI" could itself be incredibly dangerous.

Imagine asking it this: "Give me a set of plans for taking over the world, and assess each plan in terms of its probability of success." Then it turns out that right at the top of the list comes a design for a self-improving agent AI, together with an extremely compelling argument for persuading some victim institute to build it...

To safeguard against this, the "tool" AI would need to be told that there are some sorts of questions it simply must not answer, or some sorts of people to whom it must give misleading answers if they ask certain questions (while alerting the authorities). And you can see the problems that approach would lead to as well.

Basically, I'm very skeptical that "security systems" can be developed to stop anyone from building an agent AI. The history of computer security doesn't inspire much confidence here either: difficult and inconvenient security measures tend to be deployed only after an attack has been demonstrated, rather than beforehand.