Stuart_Armstrong comments on The idiot savant AI isn't an idiot - Less Wrong Discussion

Post author: Stuart_Armstrong | 18 July 2013 03:43PM | 8 points

Comment author: Stuart_Armstrong 18 July 2013 09:48:24PM 4 points [-]

First, the existence of such an AI would imply that at least somebody thought it was useful enough to build.

I've met people with very stupid ideas about how to control an AI who were nonetheless convinced they knew how to build one. I argued them out of those initial ideas. Had I not, they would have tried to build the AI with their initial ideas, which they now admit were dangerous.

So people trying to build dangerous AIs without realising the danger is already a reality!

Comment author: Lumifer 19 July 2013 07:29:00PM 0 points [-]

Had I not, they would have tried to build the AI with their initial ideas, which they now admit were dangerous.

My prior that they were capable of building an actually dangerous AI cannot be distinguished from zero :-D

Comment author: Stuart_Armstrong 19 July 2013 08:26:24PM 2 points [-]

Don't know why you keep on getting downvoted... Anyway, I agree with you, in that particular case (not naming names!).

But I've seen no evidence that competence in designing a powerful AI is related to competence in controlling a powerful AI. If anything, these seem much less related than you'd expect.

Comment author: RobbBB 21 July 2013 05:48:12PM *  8 points [-]

I suspect Lumifer's getting downvoted for four reasons:

(1) A lot of his/her responses attack the weakest (or least clear) point in the original argument, even if it's peripheral to the central argument, without acknowledging any updating on his/her part in response to the main argument. This results in the conversation spinning off in a lot of unrelated directions simultaneously. Steel-manning is a better strategy, because it also makes it clearer whether there's a misunderstanding about what's at issue.

(2) Lumifer is expressing consistently high confidence that appears disproportionate to his/her level of expertise and familiarity with the issues being discussed. In particular, s/he's unfamiliar with even the cursory summaries of Sequence points that can be found on the wiki. (This is more surprising, and less easy to justify, given how much karma s/he's accumulated.)

(3) Lumifer's tone comes off as cute and smirky and dismissive, even when the issues being debated are of enormous human importance and the claims being raised are at best not obviously correct, at worst obviously not correct.

(4) Lumifer is expressing unpopular views on LW without arguing for them. (In my experience, unpopular views receive polarizing numbers of votes on LW: They get disproportionately many up-votes if well-argued, disproportionately many down-votes if merely asserted. The most up-voted post in the history of LW is an extensive critique of MIRI.)

I didn't downvote Lumifer's "My prior that they were capable of building an actually dangerous AI cannot be distinguished from zero :-D", but I think all four of those characteristics hold even for this relatively innocuous (and almost certainly correct) post. The response is glib and dismissive of the legitimate worry you raised. It reflects a lack of understanding of why this concern is serious (and hence offers no relevant counter-argument; you had already recognized that the people you were talking about weren't going to succeed in building AI). And it changes the topic without demonstrating any updating in response to the previous argument.

Comment author: Lumifer 19 July 2013 08:46:45PM 1 point [-]

Don't know why you keep on getting downvoted...

Heh. People are people, even on LW...