novalis comments on After critical event W happens, they still won't believe you - Less Wrong

37 Post author: Eliezer_Yudkowsky 13 June 2013 09:59PM

Comments (104)

Comment author: novalis 14 June 2013 01:12:31AM 6 points

High-speed calculation plus human-level intelligence is not sufficient for recursive self-improvement. An AI needs to be able to understand its own source code, and passing the Turing test (plus high-speed calculation) does not guarantee that ability.

Comment author: TheOtherDave 14 June 2013 01:54:34AM 3 points

If I am confident that a human is capable of building human-level intelligence, my confidence that a human-level intelligence cannot build a slightly-higher-than-human intelligence, given sufficient trials, becomes pretty low. Ditto my confidence that a slightly-higher-than-human intelligence cannot build a slightly-smarter-than-that intelligence, and so forth.

But, sure, that confidence is far from zero. As you say, it's not a guarantee.