Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
Consider the SpongeBob episode in which Plankton builds a fake "Mr. Krabs." The machine is superb, at least as functional as the real Mr. Krabs, and stronger and more durable to boot. But without a Plankton up in the control room running the thing, it does nothing.
Implicit in the ideas of those who think machines may take over is the assumption that the growth in machine capabilities will naturally, or perhaps even accidentally, include the creation of machine volition: machine will, a machine version of the driver in the control room.
The quoter apparently doubts this assumption, at least for current machines. As long as every powerful machine we build needs a human driver, lest it sit there with its metaphorical screen saver on, waiting for a volitional agent to command it, then all machines, no matter how powerful, are just tools.
I don't think Kurzweil necessarily believes machines will acquire volition in his version of the singularity. Kurzweil is much more oriented toward enhanced humans: essentially, or eventually, a human who can access solutions to problems that require a great deal of intelligence, but who still supplies all the volition in the system.
Around here, on the other hand, I think it is essentially assumed that machine intelligence will be independent, with its own volition, which humans will have a hand in constraining by design.
The maker of the quote questions whether volition (will, primal drive) will arise naturally as part of this progression.
I disagree.
What he's saying is: submarines traverse water, so it's irrelevant whether we call what they do "swimming". Likewise, if a machine can do the things that a thinking being can do, then it's irrelevant whether it's "actually" "thinking".
In the original source of the quote, he treats this as a settled question. Moreover, he capitalizes the terms in question, indicating that he regards the concept as an incorrect reification.