Richard_Loosemore comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comment author: TheAncientGeek 14 May 2015 12:39:50PM 2 points

The way that the DLI is defined, it borders on self-evidently true

ETA

I now see that what you have written subsequent to the OP is that the DLI is almost, but not quite, a description of rigid behaviour as a symptom (with the added ingredient that the AI can see the mistakenness of its behaviour):-

However, suppose there is no safe mode, and suppose that the AI also knows about its own design. For that reason, it knows that this situation has come about because (a) its programming is lousy, and (b) it has been hardwired to carry out that programming REGARDLESS of all this understanding that it has, about the lousy programming and the catastrophic consequences for the strawberries.

Now, my "doctrine of logical infallibility" is just a shorthand phrase to describe a superintelligent AI in that position which really is hardwired to go ahead with the plan, UNDER THOSE CIRCUMSTANCES. That is all it means. It is not about the rigidity as such, it is about the fact that the AI knows it is being rigid, and knows how catastrophic the consequences will be.

HOWEVER, that doesn't entirely gel with what you wrote in the OP:-

One way to characterize this assumption is that the AI is supposed to be hardwired with a Doctrine of Logical Infallibility. The significance of the doctrine of logical infallibility is as follows. The AI can sometimes execute a reasoning process, then come to a conclusion and then, when it is faced with empirical evidence that its conclusion may be unsound, it is incapable of considering the hypothesis that its own reasoning engine may not have taken it to a sensible place. The system does not second guess its conclusions. This is not because second guessing is an impossible thing to implement, it is simply because people who speculate about future AGI systems take it as a given that an AGI would regard its own conclusions as sacrosanct.

Emph added. Doing dumb things because you think they are correct (DLI v1) just isn't the same as realising their dumbness but being tragically compelled to do them anyway (DLI v2). (And "Infallibility" is a much more appropriate label for the original idea; the second is more like inevitability.)

Comment author: Richard_Loosemore 14 May 2015 02:07:28PM 3 points

Ummm...

You think it is self-evidently true that MIRI think that the dangers they warn of are the result of AIs believing themselves to be infallible?

The referents in that sentence are a little difficult to navigate, but no, I'm pretty sure I am not making that claim. :-) In other words, MIRI do not think that.

What is self-evidently true is that MIRI claim a certain kind of behavior by the AI, under certain circumstances ... and all I did was come along and put a label on that claim about the AI's behavior. When you put a label on something, for convenience, the label is kinda self-evidently "correct".

I think that what you said here:

I now see that what you have written subsequent to the OP is that the DLI is almost, but not quite, a description of rigid behaviour as a symptom (with the added ingredient that the AI can see the mistakenness of its behaviour):-

... is basically correct.

I had a friend once who suffered from schizophrenia. She was lucid, intelligent (studying for a Ph.D. in psychology) and charming. But if she did not take her medication she became a different person: one day she went up onto the suspension bridge that was the main traffic route out of town and threatened to throw herself to her death 300 feet below, bringing the whole town to a halt for several hours until someone talked her down. Now, talking to her in a good moment, she could tell you that she knew about her behavior in the insane times - she was completely aware of that side of herself - and she knew that in that other state she would find certain thoughts completely compelling and convincing, even though at this calm moment she could tell you that those thoughts were false. If I say that during the insane period her mind was obeying a "Doctrine That Paranoid Beliefs Are Justified", then all I am doing is labeling the state that governed her during those times.

That label would just be a label, so if someone said "No, you're wrong: she does not subscribe to the DTPBAJ at all", I would be left nonplussed. All I wanted to do was label something that she told me she categorically DID believe, so how can my label be in some sense 'wrong'?

So, that is why some people's attacks on the DLI are a little baffling.

Comment author: TheAncientGeek 14 May 2015 02:18:50PM 1 point

Their criticisms are possibly accurate about the first version, which gives a cause for the rigid behaviour: "it regards its own conclusions as sacrosanct."