Emile comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

Post author: ciphergoth 30 October 2010 09:31AM


Comment author: Emile 30 October 2010 03:30:17PM

Yes, that's what I was referring to when saying this:

Eliezer also cares about mathematical proofs, but more for the purpose of preserving values under self-modification (something that humans don't usually have to deal with).

The provability here has to do with the AI proving to itself that modifying itself will preserve its values (or won't cause it to self-destruct, or wirehead, or whatever), not with the designers proving that the AI is non-dangerous.
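
A minimal sketch of that distinction, not from the original comment: the agent itself gates a proposed self-rewrite on a value-preservation check. All names here are hypothetical, and the finite spot check stands in for what would really need to be a proof over all possible inputs:

```python
# Toy sketch only: a real self-modifying agent would need an actual proof,
# not a finite spot check. Every name here is a hypothetical illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Agent:
    utility: Callable[[str], float]  # the agent's values (world -> score)
    policy: Callable[[str], str]     # how it acts (world -> action)

def values_preserved(old: Agent, new: Agent, worlds: list[str]) -> bool:
    """Stand-in for the agent's own proof: does the successor score these
    sample worlds exactly as the current version does?"""
    return all(old.utility(w) == new.utility(w) for w in worlds)

def maybe_self_modify(current: Agent, proposed: Agent,
                      worlds: list[str]) -> Agent:
    # The *agent* gates its own rewrite: it adopts the successor only if
    # it can verify value preservation. Nothing here is a designer's
    # safety proof about the agent as a whole.
    return proposed if values_preserved(current, proposed, worlds) else current
```

For instance, a wireheaded successor, whose utility no longer tracks the world at all, fails the check and the rewrite is refused:

```python
current = Agent(utility=lambda w: float(w.count("paperclip")),
                policy=lambda w: "act")
wireheaded = Agent(utility=lambda w: 1.0, policy=lambda w: "act")

# Rejected: the wireheaded successor values an empty world just as highly.
assert maybe_self_modify(current, wireheaded,
                         ["paperclip world", "empty world"]) is current
```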

I.e., "Friendly" in the sense of "provably non-dangerous AGI" doesn't necessarily mean having a rigorous mathematical proof that the AI is not dangerous; it "merely" means having a sufficiently rigorous understanding of morality when building it (as opposed to building from high-level notions whose components haven't been rigorously analyzed).