novalis comments on After critical event W happens, they still won't believe you - Less Wrong

37 Post author: Eliezer_Yudkowsky 13 June 2013 09:59PM




Comment author: novalis 14 June 2013 12:51:01AM 9 points [-]

unrestricted Turing test passing should be sufficient unto FOOM

I don't think this is quite right. Most humans can pass a Turing test, even though they can't understand their own source code. FOOM requires that an AI be able to self-modify with enough stability to (a) continue to desire to self-modify, and (b) continue to be able to do so. Most uploaded humans would have a very difficult time with this -- just look at how people resist even modifying their beliefs, let alone their thinking machinery.

Comment author: Eliezer_Yudkowsky 14 June 2013 12:59:01AM 6 points [-]

The problem is that an AI which passes the unrestricted Turing test must be strictly superior to a human; it would still have all the expected AI abilities like high-speed calculation and so on. A human who was augmented to the point of passing the Pocket Calculator Equivalence Test would be superhumanly fast and accurate at arithmetic on top of still having all the classical human abilities; they wouldn't be merely as smart as a pocket calculator.

Comment author: novalis 14 June 2013 01:12:31AM 6 points [-]

High-speed calculation plus human-level intelligence is not sufficient for recursive self-improvement. An AI needs to be able to understand its own source code, and passing the Turing test (plus high-speed calculation) does not guarantee that ability.

Comment author: TheOtherDave 14 June 2013 01:54:34AM 3 points [-]

If I am confident that a human is capable of building human-level intelligence, my confidence that a human-level intelligence cannot build a slightly-higher-than-human intelligence, given sufficient trials, becomes pretty low. Ditto my confidence that a slightly-higher-than-human intelligence cannot build a slightly-smarter-than-that intelligence, and so forth.

But, sure, it's far from zero. As you say, it's not a guarantee.

Comment author: Locaha 14 June 2013 08:59:12AM *  4 points [-]

A human who was augmented to the point of passing the Pocket Calculator Equivalence Test

I thought a human with a pocket calculator already is this augmented human. Unless you want to implant the calculator in your skull and control it with your thoughts. Which will also soon be possible.

Comment author: ShardPhoenix 14 June 2013 10:38:33AM *  0 points [-]

The biggest reason humans can't do this is that we don't implement .copy(). This is not a problem for AIs or uploads, even if they are otherwise only of human intelligence.

Comment author: novalis 14 June 2013 08:24:08PM -1 points [-]

Sure, with a large enough number of copies of yourself to practice on, you would learn to do brain surgery well enough to improve the functioning of your brain. But it could easily take a few thousand years. The biggest problem with self-improving AI is understanding how the mind works in the first place.