Vaniver comments on Reply to Holden on 'Tool AI' - Less Wrong

94 Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM




Comment author: Vaniver 12 June 2012 04:42:33AM -2 points

Your link to Holden's post is broken.

It might be the right suggestion, but it's not so obviously right that our failure to prioritize discussing it reflects horrible negligence.

In a paragraph begging for charity, this sentence seems out of place.

(Commentary to follow.)

Comment author: ciphergoth 12 June 2012 06:44:55AM 6 points

I can't see what you're getting at. Holden seems to say not just "you should do this", but "the fact that you're not already doing this reflects badly on your decision making". Eliezer replies that the first may be true but the second seems unwarranted.

Comment author: Vaniver 12 June 2012 03:56:00PM * 1 point

Consider three sections of Holden's post:

Below, I list my major objections. I do not believe that these objections constitute a sharp/tight case for the idea that SI's work has low/negative value; I believe, instead, that SI's own arguments are too vague for such a rebuttal to be possible.

In sections 1 and 2, Holden argues that pinning our hopes on a utility function seems dangerous, because maximizers in general are dangerous. Better to just build information-processing tools that make us more intelligent.

When discussing SI as an organization, Holden says,

One of SI's major goals is to raise awareness of AI-related risks; given this, the fact that it has not advanced clear/concise/compelling arguments speaks, in my view, to its general competence.

The jump from "speaks to its general competence" to "horribl[y] negligent" is a large and uncharitable one. If one focuses on "compelling," then yes, Holden is saying "SI is incompetent because I wasn't convinced by them," and that does seem unwarranted, or at least weak. But if one focuses on "clear" or "concise," then I agree with Holden: if SI's core mission is to communicate about AI risks, and it cannot communicate clearly and concisely, then that speaks directly to its ability to carry out that mission. That is the other place where charity seemed lacking to me, since Holden's strongest complaints appear to be about clarity and concision.

Now, that's my impression as a bystander, and I "remember with compassion that it's not always obvious to one person what another person will think was the central point," so it is an observation about tone and little more.