rhollerith_dot_com comments on After critical event W happens, they still won't believe you - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You mention Deep Blue beating Kasparov. This sounds like a good test case. I know that there were times when it was very controversial whether computers would ever be able to beat humans in chess - Wikipedia gives the example of a 1960s MIT professor who claimed that "no computer program could defeat even a 10-year-old child at chess". And it seems to me that by the time Deep Blue beat Kasparov, most people in the know agreed it would happen someday, even if they didn't think Deep Blue itself would be the winner. A quick Google search doesn't pull up enough data to let me craft a full narrative of "people gradually became more and more willing to believe computers could beat grandmasters with each incremental advance in chess technology", but it seems like the sort of thing that probably happened.
I think the economics example is a poor analogy, because it's a question about laws and not a question of gradual creeping recognition of a new technology. It also ignores one of the most important factors at play here - the recategorization of genres from "science fiction nerdery" to "something that will happen eventually" to "something that might happen in my lifetime and I should prepare for it."
Douglas Hofstadter was one on the wrong side: to be exact, he predicted (in his book GEB) that any computer that could play superhuman chess would necessarily have certain human qualities; e.g., if you asked it to play chess, it might reply, "I'm bored of chess; let's talk about poetry!" IMHO that is just as wrong as predicting that computers would never beat the best human players.
I thought you were exaggerating there, but I looked it up in my copy and he really did say that: pp. 684-686:
I wonder whether he changed his opinion on computer chess before Deep Blue, and if so, how long before. I found two relevant bits by him, but they don't really answer the question, except that they sound largely like excuse-making to my ears, as if he was still fairly surprised it was happening even as it happened; from February 1996:
And from January 2007:
I suspect the thermostat is closer to the human mind than his conception of the human mind is.
To be fair, people expected a chess-playing computer to play chess the same way a human does: thinking about the board abstractly, learning from experience, and all that. We still haven't accomplished that. Chess programs work by inefficiently searching through possible moves, many plies ahead, which seemed impossible before computers got exponentially faster. And even then, Deep Blue was a specialized supercomputer and had to use a bunch of little tricks and optimizations to get it just barely past human grandmaster level.
I was going to point that out too as I think it demonstrates an important lesson. They were still wrong.
Almost all of their thought processes were correct, but they still got to the wrong result because they looked at solutions too narrowly. It's quite possible that many of the objections to AI, rejuvenation, and cryonics are correct, but if there's another path they're not considering, we could still end up with the same result. Just as a chess program doesn't think like a human but can still beat one, an airplane doesn't fly like a bird but can still fly.