Lumifer comments on The idiot savant AI isn't an idiot - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Surely I can make the same claim about AIs. They wouldn't be particularly useful otherwise.
In any case, this is all handwaving and speculation given that we don't have any AIs to look at. Your claim a couple of levels above is unfalsifiable and so there isn't much we can do at the moment to sort out that disagreement.
Well, a general AI of human-level or greater intelligence, without proven friendliness, probably wouldn't be very useful because it would be so unsafe. See Eliezer's The Hidden Complexity of Wishes.
This is speculation, but far from blind speculation, considering we do have very strong evidence regarding our own adaptations for intuitively predicting other humans, and an observably poor track record in intuitively predicting non-humanlike optimization processes (example).
First, the existence of such an AI would imply that at least somebody thought it was useful enough to build.
Second, the safety is not a function of intelligence but a function of capabilities. Eliezer's genies are omnipotent and I don't see why a (pre-singularity) AI would be.
I am also doubtful about that "observably poor track record" -- which data are you relying on?
This is also true of leaded gasoline, the reactor at Chernobyl, and thalidomide.
I've met people with very stupid ideas about how to control an AI, who were convinced that they knew how to build such an AI. I argued them out of those initial stupid ideas. Had I not, they would have tried to build the AI with their initial ideas, which they now admit were dangerous.
So it is already a fact that people try to build dangerous AIs without realising the danger!
My prior that they were capable of building an actually dangerous AI cannot be distinguished from zero :-D
Don't know why you keep on getting downvoted... Anyway, I agree with you, in that particular case (not naming names!).
But I've seen no evidence that competence in designing a powerful AI is related to competence in controlling a powerful AI. If anything, these seem much less related than you'd expect.
I suspect Lumifer's getting downvoted for four reasons:
(1) A lot of his/her responses attack the weakest (or least clear) point in the original argument, even if it's peripheral to the central argument, without acknowledging any updating on his/her part in response to the main argument. This results in the conversation spinning off in a lot of unrelated directions simultaneously. Steel-manning is a better strategy, because it also makes it clearer whether there's a misunderstanding about what's at issue.
(2) Lumifer is expressing consistently high confidence that appears disproportionate to his/her level of expertise and familiarity with the issues being discussed. In particular, s/he's unfamiliar with even the cursory summaries of Sequence points that could be found on the wiki. (This is more surprising, and less easy to justify, given how much karma s/he's accumulated.)
(3) Lumifer's tone comes off as cute and smirky and dismissive, even when the issues being debated are of enormous human importance and the claims being raised are at best not obviously correct, at worst obviously not correct.
(4) Lumifer is expressing unpopular views on LW without arguing for them. (In my experience, unpopular views receive polarizing numbers of votes on LW: They get disproportionately many up-votes if well-argued, disproportionately many down-votes if merely asserted. The most up-voted post in the history of LW is an extensive critique of MIRI.)
I didn't downvote Lumifer's "My prior that they were capable of building an actually dangerous AI cannot be distinguished from zero :-D", but I think all four of those characteristics hold even for this relatively innocuous (and almost certainly correct) post. The response is glib and dismissive of the legitimate worry you raised, it reflects a lack of understanding of why this concern is serious (hence also lacks any relevant counter-argument; you already recognized that the people you were talking about weren't going to succeed in building AI), and it changes the topic without demonstrating any updating in response to the previous argument.
Heh. People are people, even on LW...
Which doesn't mean that it would be a good idea. Have you read the Sequences? It seems like we're missing some pretty important shared background here.
The claim 'Pluto is currently inhabited by five hundred and thirty-eight witches' is at this moment unfalsifiable. Does that mean that denying such a claim would be "all handwaving and speculation"? If science can't make predictions about incompletely known phenomena, but can only describe past experiments and suggest (idle) future ones, then science is a remarkably useless thing. See for starters:
Sometimes a successful test of your hypothesis looks like the annihilation of life on Earth. So it is useful to be able to reason rigorously and productively about things we can't (or shouldn't) immediately test.
Ok. Take a chess position. Deep Blue is playing black. What is its next move?
A girl is walking down the street. A guy comes up to her, says hello. What's her next move?
She says "hello" and moves right on. She does not pull out a gun and blow his head off. Now, back to Deep Blue.
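The Deep Blue exchange above can be made concrete with a toy sketch (hypothetical names and a made-up evaluation function, nothing to do with Deep Blue's actual search): an observer generally cannot name a strong optimizer's exact move without simulating it, yet can still confidently predict a property of the outcome, namely that whatever move it picks will score at least as well as every alternative by the optimizer's own measure.

```python
import random

# Toy "optimizer": picks the move that maximizes an evaluation function.
# The observer does not model the evaluation in detail, so the exact
# choice is opaque in advance -- but an inequality about the outcome
# can be predicted before the optimizer runs.

def score(move):
    # Deterministic but opaque-to-the-observer evaluation (stand-in for
    # a complicated position evaluator).
    random.seed(move)
    return random.random()

def optimizer_choice(moves):
    return max(moves, key=score)

moves = list(range(20))
choice = optimizer_choice(moves)

# We can't say in advance *which* move `choice` is without simulating
# the search, but this prediction about the result holds by construction:
assert all(score(choice) >= score(m) for m in moves)
```

This is the sense in which "what is Deep Blue's next move?" is the wrong test: predicting the exact action requires being as smart as the optimizer, while predicting that the action will be a winning one does not.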