TheOtherDave comments on Reply to Holden on 'Tool AI' - Less Wrong

Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM


Comment author: TheOtherDave 14 June 2012 06:11:30PM 10 points

Also, an agent with safer goals than humans have (which is a high bar, but not nearly as high a bar as some alternatives) is safer than humans with equivalently powerful tools.

Comment author: PhilGoetz 02 July 2012 01:01:16AM -1 points

How is this helpful? It is true by definition of the word "safer". The real problem is knowing whether an agent has safer goals, or what "safer" even means.