This is our monthly thread for collecting these little gems and pearls of wisdom: rationality-related quotes you've seen recently, or have had stored in your quotesfile for ages, which might be handy to link to in one of our discussions.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
And by the same token, we'll know we've nailed AI not when we have written a program that can have that conversation... but when we have written down an account of how we are able to have that conversation, to such a level of detail that there's nothing left to explain.
Writing a program that solves the Towers of Hanoi is not too hard. Proving various properties of a program that solves it, given a formalization of the ToH, isn't too hard either. But looking at a bunch of wooden disks slotted onto pegs and coming up with an interpretation of that situation which corresponds to the abstract scheme we know as "Towers of Hanoi"... That's where the fun is.
One can't proceed from the informal to the formal by formal means. Yet.
(Apologies to Alan Perlis, etc.)
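To make the "not too hard" part concrete, the standard recursive solver really is only a few lines. Here is a minimal sketch in Python; the function name and peg labels are my own, purely for illustration:

```python
def hanoi(n, source, target, spare):
    """Print the moves that transfer n disks from source to target.

    Classic recursive scheme: move the top n-1 disks out of the way,
    move the largest disk, then move the n-1 disks onto it.
    """
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

# Example: solve the 3-disk puzzle on pegs A, B, C (2**3 - 1 = 7 moves).
hanoi(3, "A", "C", "B")
```

Of course, this only underlines the point above: the hard part isn't the dozen lines of code, it's deciding that the wooden disks in front of you are an instance of this abstract scheme in the first place.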