Stuart_Armstrong comments on Reply to Holden on 'Tool AI' - Less Wrong

94 Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM




Comment author: Stuart_Armstrong 14 June 2012 05:30:35PM 7 points [-]

Minor point from Nick Bostrom: an agent AI may be safer than a tool AI, because if something goes unexpectedly wrong, an agent with safe goals should still behave better than a non-agent whose behaviour in that situation would be unpredictable.

Comment author: TheOtherDave 14 June 2012 06:11:30PM 10 points [-]

Also, an agent with safer goals than humans have (which is a high bar, but not nearly as high a bar as some alternatives) is safer than humans with equivalently powerful tools.

Comment author: PhilGoetz 02 July 2012 01:01:16AM -1 points [-]

How is this helpful? This is true by definition of the word "safer". The problem is knowing whether an agent has safer goals, or what "safer" means.

Comment author: PhilGoetz 25 June 2012 08:26:02PM *  -1 points [-]

I don't think this makes any sense. A tool AI has no autonomous behavior. It computes a function. Its output has no impact on the world until a human uses it. The phrase "tool AI" implies to me that we are not talking about an AI you ask, for instance, to "fix the economy"; we are talking about an AI you ask questions such as, "Find me data showing whether lowering taxes increases tax revenue."