TheOtherDave comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM

Comment author: TheOtherDave 22 March 2013 03:00:46PM 1 point

Absolutely agreed that this sort of situation arises, and that the more I know about the world, the more situations have this character for me. That said, if I'm indifferent to the world-affecting effects of my answers, it seems that the result is very similar to if I'm ignorant of them.

That is, it seems that Predictor looks at that situation, concludes that in order to predict "yes" or "no" it must first predict whether it will answer "yes" or "no", and either does so (on what basis, I have no idea) or fails to do so and refuses to answer. Yes, those actions influence the world (as does the very existence of Predictor, and Sam's knowledge of Predictor's existence), but I'm not sure I would characterize the resulting behavior as agentlike.

Comment author: CCC 22 March 2013 06:21:30PM 2 points

Then consider: Sam asks a question. Predictor knows that an answer of "yes" will result in the development of Clippy, and subsequently in the Earth being turned into paperclips and humanity destroyed within the next ten thousand years; while an answer of "no" will result in a wonderful future where everyone is happy, disease is eradicated, and all Good Things happen. In both cases, the prediction will be correct.

If Predictor doesn't care which answer it gives, then I would not define Predictor as a Friendly AI.

Comment author: TheOtherDave 22 March 2013 07:07:10PM 1 point

Absolutely agreed; neither would I. More generally, I don't think I would consider any Oracle AI as Friendly.