James_Miller comments on AGI Quotes - Less Wrong
I don't understand this.
It is seemingly easy to get stuck in arguments over whether or not machines can "actually" think.
It is sufficient to assess the effects or outcomes of the phenomenon in question.
By sidestepping the question of what, exactly, it means to "think", we can avoid arguing over definitions, yet lose nothing of our ability to model the world.
Does a submarine swim? The purpose of swimming is to propel oneself through the water. A nuclear-powered submarine can propel itself through the oceans at full speed for months at a time. It achieves the purpose of swimming, and does so rather better than a fish or a human.
If the purpose of thinking is isomorphic to:
Model the world in order to formulate plans for executing actions which implement goals.
then, if a machine can achieve the above, we can say it achieves the purpose of thinking,
just as a submarine achieves the purpose of swimming.
Discussion of whether the machine really thinks is now superfluous.
This is a similar idea to the one proposed by Turing. If you have submarines, and they move through the water and do exactly what you want them to do, then it is rather pointless to ask whether what they are doing is "really swimming". And the arguments on both sides of the "swimming" dispute will make reference to fish.