
bokov comments on Supposing you inherited an AI project... - Less Wrong Discussion

-5 Post author: bokov 04 September 2013 08:07AM




Comment author: bokov 04 September 2013 04:28:00PM 0 points [-]

So, in the general case, something that will take a natural language request, turn it into a family of optimizable models, identify the most promising ones, ask the user to choose, and then return an optimized answer?

Notice that it doesn't actually have to do anything itself -- only give answers. This makes it much easier to build and creates an extra safeguard for free.

But is there anything more we can pare away? For example, a provably correct natural-language parser is impossible because natural language is ambiguous and inconsistent; humans certainly don't always parse it correctly. On the other hand, it's easy for a human to learn a machine language, and huge numbers of them have already done so.

So in the chain of events below, the AI's responsibility would be limited to the steps in all caps, and humans do the rest.

[1 articulate a need] -> [2 formulate an unambiguous query] -> [3 FIND CANDIDATE MODELS] -> [4 user chooses a model or revises step 2] -> [5 RETURN OPTIMAL MANIPULATIONS TO THE MODEL] -> [6 user implements manipulation or revises step 2]
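The division of labor above can be sketched as a loop where the AI only proposes and optimizes, while the human supplies the query and makes every choice. This is a minimal illustration; every function name here is hypothetical, passed in by the caller rather than taken from any existing system.

```python
# Sketch of steps 3-6 above. The AI's share is limited to the two
# callables it is handed (find_candidate_models, optimize); the human
# supplies the unambiguous query (step 2) and the choice (step 4).

def pipeline(query, find_candidate_models, choose, optimize):
    """Run one pass of the human/AI chain.

    find_candidate_models: step 3 (AI) -- propose models for the query.
    choose:                step 4 (human) -- pick a model, or None to
                           go back and revise the query (step 2).
    optimize:              step 5 (AI) -- return optimal manipulations
                           to the chosen model.
    """
    candidates = find_candidate_models(query)   # step 3 (AI)
    model = choose(candidates)                  # step 4 (human)
    if model is None:
        return None                             # human revises step 2
    return optimize(model)                      # step 5 (AI)
```

Because the AI never acts on the world, step 6 (implementing the returned manipulation) stays entirely with the human.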

Comment author: bokov 04 September 2013 04:31:40PM 0 points [-]

Continued from above, to reduce TLDR-ness...

We have generic algorithms that do step 5. They don't always scale well, but that's an engineering problem that a lot of people in fields outside AI are already working to solve. We have domain-specific algorithms, some of which can do a decent job of step 3: spam filters, recommendation engines, autocorrectors.
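As one concrete stand-in for the "generic algorithms that do step 5," here is a minimal random hill climber: given any model expressed as a score function and a move generator, it returns a near-optimal manipulation. This is an illustrative sketch, not any particular published system.

```python
import random

def hill_climb(score, neighbor, start, steps=1000, seed=0):
    """Generic step-5 optimizer: maximize `score` over candidate
    manipulations, using `neighbor` to propose local moves.
    Simple random hill climbing; scaling it is the engineering
    problem mentioned above."""
    rng = random.Random(seed)
    best = start
    best_score = score(best)
    for _ in range(steps):
        cand = neighbor(best, rng)
        s = score(cand)
        if s > best_score:   # keep only improving moves
            best, best_score = cand, s
    return best, best_score

# Toy model: maximize -(x - 3)^2, whose optimum is at x = 3.
best, val = hill_climb(
    score=lambda x: -(x - 3.0) ** 2,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    start=0.0,
)
```

The point of the interface is that the optimizer knows nothing about the domain: the human-chosen model from step 4 arrives as an opaque `score` function, which is exactly the separation the chain of events above relies on.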

So, does this mean that what's really missing is a generic problem-representer?

Well, that and friendliness, but if we can articulate a coherent, unambiguous code of morality, we will still need a generic problem-representer to actually incorporate it into the optimization procedure.