Epictetus comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: TheAncientGeek 15 May 2015 03:38:32PM  1 point

That would be fine if you, and everyone else who argues on this side of the debate, did not then proceed to conclude from the statement that the AI has "good intentions" that it is making some sort of "error" when it fails to act on our cries that "doing X isn't good!" or "doing X isn't what we meant!"

The point doesn't need to be argued on the basis of definitions. Given one set of assumptions, one system architecture, it is entirely natural that an AI would pursue its goals against its own information, and against the protests of humans. But on other assumptions, it is utterly bizarre that an AI would ever do that... it would be not merely an error, in the sense of a bug (a failure on the part of the programmers to code their intentions), but an unlikely kind of bug that allows the system to keep doing really complex things instead of degrading its performance.

Comment author: Epictetus 15 May 2015 06:50:10PM  2 points

Given one set of assumptions, one system architecture, it is entirely natural that an AI would pursue its goals against its own information, and against the protests of humans. But on other assumptions, it is utterly bizarre that an AI would ever do that...

If one of its parameters is "do not go against human protests of magnitude greater than X", then it will not pursue a course of action if enough people protest it. But in this case, avoiding strong human protest is part of its goals.

The AI is ultimately following some procedure, and any outside information or programmer intention or human protest is just some variable that may or may not be taken into consideration.
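Epictetus's point can be sketched as a toy decision procedure (all names here are hypothetical, a minimal illustration rather than any real architecture): human protest changes the agent's behavior only if the procedure happens to include a term for it.

```python
# Toy sketch: an agent scores candidate actions with a fixed objective.
# Protest data influences the choice only if the procedure consults it.

def choose_action(actions, goal_value, protest_level, protest_threshold=None):
    """Pick the highest-valued action, optionally vetoing any action
    whose protest level exceeds a threshold built into the goals."""
    permitted = [
        a for a in actions
        if protest_threshold is None or protest_level[a] <= protest_threshold
    ]
    # When protest_threshold is None, protest_level is just ignored data:
    # the protests exist, but no step of the procedure reads them.
    return max(permitted, key=lambda a: goal_value[a], default=None)

actions = ["tile_universe", "cure_disease"]
goal_value = {"tile_universe": 100, "cure_disease": 90}
protest_level = {"tile_universe": 9, "cure_disease": 1}

# No protest term in the procedure: protests are ignored.
print(choose_action(actions, goal_value, protest_level))
# "Do not go against protests of magnitude greater than 5" as part of the goals:
print(choose_action(actions, goal_value, protest_level, protest_threshold=5))
```

On the first call the agent picks `tile_universe` despite the loud protest; on the second, the protest term vetoes it and `cure_disease` wins. Nothing about the agent's competence changes between the two calls, only whether protest appears in its goals.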

Comment author: TheAncientGeek 17 May 2015 08:25:22AM  1 point

That just restates my point that the different sides in the debate are making different assumptions about likely AI architectures.

But the AI researchers win, because they know what real-world AI architectures are, whereas MIRI is guessing.