Ezekiel comments on Stupid Questions Open Thread - Less Wrong Discussion

Post author: Costanza | 29 December 2011 11:23PM | 42 points


Comment author: Ezekiel 30 December 2011 08:37:41PM 6 points

If I understand it correctly, the FAI problem is basically about making an AI whose goals match those of humanity. But why does the AI need to have goals at all? Couldn't you just program a question-answering machine and then ask it to solve specific problems?

Comment author: Vladimir_Nesov 30 December 2011 09:03:05PM 14 points

This idea is called "Oracle AI"; see this post and its dependencies for some reasons why it's probably a bad idea.

Comment author: Ezekiel 30 December 2011 09:23:30PM 3 points

That's exactly what I was looking for. Thank you.

Comment author: Kaj_Sotala 31 December 2011 06:11:08AM 3 points

In addition to the post Vladimir linked, see also this paper.

Comment author: shminux 30 December 2011 08:48:26PM 1 point

Presumably once AGI becomes smarter than humans, it will develop goals of some kind, whether we want it to or not. Might as well try to influence them.

Comment author: Ezekiel 30 December 2011 08:51:46PM 3 points

"Presumably once AGI becomes smarter than humans, it will develop goals of some kind"

Why?

Comment author: Kaj_Sotala 31 December 2011 06:31:12AM 6 points

A better wording would probably be that you can't design something with literally no goals and still call it an AI. A system that answers questions and solves specific problems has a goal: to answer questions and solve specific problems. To be useful for that task, its whole architecture has to be crafted with that purpose in mind.

For instance, suppose it was provided questions in the form of written text. Its designers would then have to build it so that it interprets that text in a certain way and tries to discover what we mean by the question. That's just one of many things it could do with the text, though: it could also discard any text input, or transform each letter to a number and search for mathematical patterns in the numbers, or use the text to seed a random-number generator it was using for some entirely different purpose, and so forth. For the AI to do anything useful, it has to have a large number of goals, such as "interpret the meaning of this text file I was provided", implicit in its architecture. As the AI grows more powerful, these various goals may manifest themselves in unexpected ways.
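The point above can be illustrated with a toy sketch (an editorial example, not from the thread): the same input bytes produce entirely different behaviour depending on which implicit goal the architecture embodies. The function names below are hypothetical stand-ins for "architectures" with different built-in purposes.

```python
# Toy illustration: three "architectures" receiving identical input bytes,
# each embodying a different implicit goal for what the bytes are FOR.
import random

text = "What is the capital of France?"

def interpret_as_question(s):
    # Implicit goal: treat the bytes as a question to be understood.
    return {"kind": "question", "tokens": s.rstrip("?").split()}

def search_for_patterns(s):
    # Implicit goal: map letters to numbers and hunt for numeric patterns.
    return [ord(c) - ord("a") + 1 for c in s.lower() if c.isalpha()]

def use_as_entropy(s):
    # Implicit goal: use the bytes only to seed an RNG serving some
    # entirely unrelated purpose.
    return random.Random(s).random()
```

Nothing about the input itself determines which of these happens; the "goal" lives entirely in which processing path was built into the system.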

Comment author: Vladimir_Nesov 30 December 2011 09:02:20PM 4 points