Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
It is easy to get stuck in arguments over whether or not machines can "actually" think.
It is sufficient instead to assess the effects or outcomes of the phenomenon in question.
By sidestepping the question of what, exactly, it means to "think",
we can avoid arguing over definitions, yet lose nothing of our ability to model the world.
Does a submarine swim? The purpose of swimming is to propel oneself through the water. A nuclear-powered submarine can propel itself through the oceans at full speed for months at a time. It achieves the purpose of swimming, and does so rather better than a fish or a human.
If the purpose of thinking is isomorphic to:
Model the world in order to formulate plans for executing actions which implement goals.
Then, if a machine can achieve the above, we can say that it achieves the purpose of thinking,
akin to how a submarine successfully achieves the purpose of swimming.
Discussion of whether the machine really thinks is now superfluous.