
OrphanWilde comments on The AI That Pretends To Be Human - Less Wrong Discussion

1 point | Post author: Houshalter | 02 February 2016 07:39PM


Comments (69)

Comment author: OrphanWilde 03 February 2016 01:32:16PM -1 points

An oracle AI is just moving the problem to that of structuring the queries so it answers the question you thought you asked, as opposed to the question you asked.

The "human" criterion is as ill-defined as any other control mechanism; they all, when you get down to it, shuffle the problem into one poorly-defined box or another.

Comment author: Houshalter 03 February 2016 02:12:31PM -1 points

An oracle AI is just moving the problem to that of structuring the queries so it answers the question you thought you asked, as opposed to the question you asked.

This solves that problem. The AI tries to produce an answer it thinks you will approve of, one that mimics the output of a human.

The "human" criterion is as ill-defined as any control mechanism

We don't need to define "humans" because we have tons of examples of them. That reduces the problem to prediction, which is something AIs can straightforwardly be told to do.
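The "reduce control to prediction" idea above can be illustrated with a toy sketch: generate candidate answers, score each with a predictor of how human-like it looks, and only release answers above a threshold. Everything here is hypothetical scaffolding, not code from the original post; in particular, `human_likelihood` is a stand-in heuristic where a real system would use a model trained on examples of human-written text.

```python
def human_likelihood(answer: str) -> float:
    """Hypothetical stand-in scorer for 'how human-like is this answer?'.

    Placeholder heuristic: penalize very long answers. A real system
    would use a discriminator trained on examples of human output.
    """
    return max(0.0, 1.0 - len(answer) / 1000.0)


def mimicry_filter(candidates, threshold=0.9):
    """Release only the candidate answers the predictor judges human-like."""
    return [a for a in candidates if human_likelihood(a) >= threshold]


if __name__ == "__main__":
    answers = ["A short, plausible reply.", "x" * 5000]
    # Only the short, human-plausible reply passes the filter.
    print(mimicry_filter(answers))
```

The point of the sketch is that the filter never needs a *definition* of "human"; it only needs a predictor fitted to examples, which is exactly the reduction being claimed.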

Comment author: OrphanWilde 03 February 2016 02:19:41PM -1 points

Oh. Well, if we have enough examples that we don't need a definition, just create a few human-like AIs; forget all that superintelligence nonsense, since we can build human-like AIs and simply run them faster. And if we have enough insight into humans to tell an AI how to predict them, it should be trivial to skip the "tell an AI" part and predict what a human would come up with ourselves.

AI solved.

Or maybe you're hiding complexity behind definitions.