EHeller comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM


Comment author: Eliezer_Yudkowsky 10 September 2013 11:43:05PM 0 points [-]

I suggest some actual experience trying to program AI algorithms in order to realize the hows and whys of "getting an algorithm which forms the inductive category I want out of the examples I'm giving is hard". What you've written strikes me as a sheer fantasy of convenience. Nor does it follow automatically from intelligence for all the reasons RobbBB has already been giving.

And obviously, if an AI was indeed stuck in a local minimum obvious to you of its own utility gradient, this condition would not last past it becoming smarter than you.

Comment author: EHeller 10 September 2013 11:57:41PM 3 points [-]

I suggest some actual experience trying to program AI algorithms in order to realize the hows and whys of "getting an algorithm which forms the inductive category I want out of the examples I'm giving is hard"

I think it depends on context, but a lot of existing machine learning algorithms actually generalize pretty well. I've seen demos of Watson in healthcare where it generalized very well given nothing more than scrapes of patients' records, and it improved even further with a little guided feedback. I've also had pretty good luck using a variant of Boltzmann machines to construct human-sounding paragraphs.
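(For readers unfamiliar with the Boltzmann-machine family mentioned above: a minimal sketch of a restricted Boltzmann machine trained with one-step contrastive divergence, on toy binary data rather than text. Everything here — the sizes, the data, the hyperparameters — is illustrative, not the setup used in the comment.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: two repeated "prototype" patterns over 6 visible units.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 20, dtype=float)

n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases
lr = 0.1

for epoch in range(200):
    # Positive phase: hidden activations given the data.
    p_h = sigmoid(data @ W + b_h)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # Negative phase: reconstruct visibles, then recompute hiddens (CD-1).
    p_v = sigmoid(h @ W.T + b_v)
    p_h_recon = sigmoid(p_v @ W + b_h)
    # Contrastive-divergence gradient step.
    W += lr * (data.T @ p_h - p_v.T @ p_h_recon) / len(data)
    b_v += lr * (data - p_v).mean(axis=0)
    b_h += lr * (p_h - p_h_recon).mean(axis=0)

recon_error = float(np.mean((data - p_v) ** 2))
print(W.shape, recon_error)
```

The point the sketch illustrates: even this tiny model, with no labels, learns hidden features that capture the two prototype patterns, so its reconstructions land close to the training data. Generating "human-sounding paragraphs" would swap the toy binary vectors for encoded text and a much larger model, but the training loop is the same shape.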

It would surprise me if a general AI weren't capable of parsing the sentiment/intent behind human speech fairly well, given how well much "dumber" algorithms already work.