RichardKennaway comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR, 11 November 2009 03:00AM


Comment author: RichardKennaway, 11 November 2009 08:57:12AM, 13 points

Is there any published work in AI (whether or not directed towards Friendliness) that, in your view, does not immediately and fundamentally fail due to the various issues and fallacies you've written about over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)

ETA: By AI I meant AGI.

Comment author: Eliezer_Yudkowsky, 11 November 2009 06:40:22PM, 1 point

I assume this is to be interpreted as "published work in AGI". Plenty of perfectly good AI work around.

Comment author: RichardKennaway, 11 November 2009 11:07:30PM, 0 points

Yes, I meant AGI by AI. I don't consider any of the stuff outside AGI to be worth calling AI. The good work there consists merely of the more or less successful descendants of spinoffs of failed attempts to create AGI, and it is good in direct proportion to its distance from that original vision.

Comment author: [deleted], 11 November 2009 04:15:18PM, 0 points

Well, it appears that no published work in AI has resulted in successful strong artificial intelligence.

Comment author: RichardKennaway, 11 November 2009 11:19:42PM, 1 point

It might at least be making visible progress, or failing that, at least not making basic fatal errors.