TimS comments on Reply to Holden on 'Tool AI' - Less Wrong

Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM




Comment author: TimS 25 June 2012 08:44:23PM

[EY] My particular conception of an extraordinarily powerful tool AI, which would be vastly more powerful than any other conception of tool AI that anyone has considered, would secretly be an agentive AI because the difference between trying to inform the user and trying to manipulate the user is only semantic.

This is not a valid response. Holden is saying, "Here is a vast space of possible kinds of AIs, subsumed under the term 'tool AI', that you should investigate." And Eliezer is saying, "AIs within a small subset of that space would be dangerous; therefore I'm not interested in that space."

How do you know it is a small subset? Or a subset at all? If every interestingly powerful tool AI is secretly an agent AI, that's bad, right?