Incorrect comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong

Post author: Stuart_Armstrong | 30 April 2012 01:53PM | 14 points


Comments (45)


Comment author: Incorrect 30 April 2012 03:27:20PM 0 points

I don't understand how to construct a consistent world view that involves the premise. Could you state the premise as a statement about all computable functions?

Comment author: Stuart_Armstrong 02 May 2012 11:38:32AM 3 points

Let's give it a try... In the space of computable functions, there is a class X that we would recognize as "having goal G". There is a process SI we would identify as self-improvement. Then convergence implies that for nearly any initial function f, the process SI will result in f being in X.

If you want to phrase this in an updateless way, say that "any function with property SI is in X", defining X as "ultimately having goal G".
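A semi-formal sketch of that premise (the notation here is my own gloss, not taken from the original exchange): write $\mathcal{C}$ for the computable functions, $\mathrm{SI}^n(f)$ for $n$ iterations of the self-improvement process applied to $f$, and $X \subseteq \mathcal{C}$ for the class of functions "having goal G". The claim is then roughly

\[
\text{for nearly all } f \in \mathcal{C}: \quad \lim_{n \to \infty} \mathrm{SI}^{n}(f) \in X,
\]

i.e. self-improvement, iterated from almost any starting point, lands in (and stays in) the goal-G class. The "updateless" phrasing folds the iteration into the definition: any $f$ with property SI is already a member of X, where X is defined as "ultimately having goal G".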

Comment author: ciphergoth 02 May 2012 07:31:52AM -1 points

If you want a complete, coherent account of what non-orthogonality would be, you'll have to ask one of its proponents.