My Bloggingheads.tv interview with Jaron Lanier is up. Reductionism, zombies, and questions that you're not allowed to answer:
This ended up being more of me interviewing Lanier than a dialog, I'm afraid. I was a little too reluctant to interrupt. But you at least get a chance to see the probes I use, and Lanier's replies to them.
If there are any BHTV heads out there who read Overcoming Bias and have something they'd like to talk to me about, do let me or our kindly producers know.
I think I see where the disconnect was in this conversation. Lanier was accusing general AI people of being religious. Yudkowsky took that as a claim that something he believed was false, and wanted Lanier to say what.
But Lanier wasn't saying anything in particular was false. He was saying that when you tackle these Big Problems, there are necessarily a lot of unknowns, and when you have too many unknowns, reason and science are inapplicable. Science and reason work best when you have one unknown and lots of knowns. If you try to bite off too big a chunk at once, you end up reasoning in a domain that is, say, only 50% fact, and that reminds him of the "reasoning" of religious people.
Knowledge is a big interconnected web, with each fact reinforcing the others. You have to grow it from the edge. And our techniques are designed for edge space.