bokov comments on Supposing you inherited an AI project... - Less Wrong Discussion

-5 Post author: bokov 04 September 2013 08:07AM

Comment author: bokov 04 September 2013 04:31:40PM 0 points

Continued from above, to reduce TLDR-ness...

We have generic algorithms that do step 5. They don't always scale well, but that's an engineering problem, and one that plenty of people in fields outside AI are already working to solve. We have domain-specific algorithms, some of which do a decent job of step 3: spam filters, recommendation engines, autocorrectors.
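To make the "generic algorithm for step 5" idea concrete, here is a minimal sketch of a domain-agnostic optimizer: it needs only a scoring function and a neighborhood function, and knows nothing else about the problem. (The step numbers refer to the parent comment's list, which isn't shown here; the function names and the toy objective are my own illustration, not anything from the post.)

```python
import random

def hill_climb(score, neighbors, start, iterations=1000):
    """Generic optimizer: given any scoring function and any
    neighborhood function, greedily climb toward a better candidate.
    Nothing here is specific to one domain."""
    best = start
    best_score = score(best)
    for _ in range(iterations):
        candidate = random.choice(neighbors(best))
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

# Toy domain: maximize f(x) = -(x - 3)^2 over the integers.
result = hill_climb(
    score=lambda x: -(x - 3) ** 2,
    neighbors=lambda x: [x - 1, x + 1],
    start=0,
)
```

The point of the sketch is the division of labor: everything domain-specific lives in the `score` and `neighbors` arguments, which is exactly the part a "problem-representer" would have to supply.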

So, does this mean that what's really missing is a generic problem-representer?

Well, that and friendliness. But even if we can articulate a coherent, unambiguous code of morality, we will still need a generic problem-representer to actually incorporate it into the optimization procedure.
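One way to picture what a problem-representer would have to emit is a machine-usable problem encoding that bundles an objective with constraints, so that a moral code (once articulated) could enter the optimization as hard constraints. This is purely a hypothetical sketch of the interface, not a proposal from the post; all names here (`Problem`, `admissible_score`) are my own.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Problem:
    """Hypothetical output of a problem-representer: an objective to
    maximize plus encoded rules the solution must not violate."""
    score: Callable[[object], float]
    constraints: List[Callable[[object], bool]] = field(default_factory=list)

def admissible_score(problem: Problem, candidate: object) -> float:
    """Constraint violations veto a candidate outright; otherwise
    defer to the objective. A generic optimizer can maximize this."""
    if not all(ok(candidate) for ok in problem.constraints):
        return float("-inf")
    return problem.score(candidate)

# Toy instance: maximize x, but a 'rule' forbids values above 10.
p = Problem(score=lambda x: x, constraints=[lambda x: x <= 10])
```

Under this framing, "incorporating morality into the optimization procedure" reduces to the representer translating the moral code into the `constraints` list that every candidate is checked against.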