TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comment author: Vaniver 16 May 2015 05:10:12PM

You have assumed that the AI will have some separate boxed-off goal system

What makes you think that? The description in that post is generic enough to describe AIs with compartmentalized goals, AIs without compartmentalized goals, and AIs that don't have explicitly labeled internal goals. It doesn't even require that the AI follow the goal statement, only that it evaluate it for consistency!

See the problem?

You may find this comment of mine interesting. In short, yes, I do think I see the problem.

If efficiency can be substituted for truth, why is there so much emphasis on truth in the advice given to human rationalists?

I'm sorry, but I can't make sense of this question. I'm not sure what you mean by "efficiency can be substituted for truth," or what relevance you think advice to human rationalists has to AI design.

In order to achieve an AI that's smart enough to be dangerous, a number of currently unsolved problems will have to be solved. That's a given.

I disagree with this, too! AI systems already exist that are both smart, in that they solve complex and difficult cognitive tasks, and dangerous, in that they make decisions on which significant value rides, and thus poor decisions are costly. As a simple example I'm somewhat familiar with, some radiation treatments for patients are designed by software looking at images of the tumor in the body, and then checked by a doctor. If the software is optimizing for a suboptimal function, then it will not generate the best treatment plans, and patient outcomes will be worse than they could have been.
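To make that failure mode concrete, here is a toy sketch of an optimizer pursuing a mis-specified objective. This is not real treatment-planning code; the functions, dose range, and numbers are all invented for illustration, and the point is only the gap between the proxy objective and what we actually care about.

```python
def tumor_control(dose):
    # Benefit rises with dose but saturates.
    return 1.0 - 2.0 ** (-dose / 20.0)

def tissue_damage(dose):
    # Harm to healthy tissue grows roughly quadratically with dose.
    return (dose / 60.0) ** 2

def true_value(dose):
    # What we actually care about: benefit minus harm.
    return tumor_control(dose) - tissue_damage(dose)

def proxy_value(dose):
    # A mis-specified objective that ignores healthy-tissue damage.
    return tumor_control(dose)

candidate_doses = range(0, 101, 5)

proxy_best = max(candidate_doses, key=proxy_value)
true_best = max(candidate_doses, key=true_value)

print(f"Proxy-optimal dose: {proxy_best}, true value {true_value(proxy_best):.3f}")
print(f"Truly optimal dose: {true_best}, true value {true_value(true_best):.3f}")
```

The optimizer given the proxy objective confidently pushes the dose to the maximum of its search range, even though that choice scores far worse on the true objective than a moderate dose would. Nothing about the system has to be smarter than a human for that mistake to be costly.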

Now, we don't have any AIs around that seem capable of ending human civilization (thank goodness!), and I agree that's probably because a number of unsolved problems are still unsolved. But it would be nice to have the unknowns mapped out, rather than assuming that wisdom and cleverness go hand in hand. So far, that's not what the history of software looks like to me.

Comment author: TheAncientGeek 16 May 2015 07:24:12PM

AI systems already exist that are both smart, in that they solve complex and difficult cognitive tasks, and dangerous, in that they make decisions on which significant value rides, and thus poor decisions are costly.

But they are not smart in the contextually relevant sense of being able to outsmart humans, or dangerous in the contextually relevant sense of being unboxable.