The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Listening to the longer version isn't so bad. The snippet was definitely the most objectionable.
It appears that Lanier thinks AI is suffering from the puppet problem, brought on by taking the Turing Test too seriously. The puppet problem is that computers can be used to implement puppets: things that fake being intelligent. Imagine Omega writes a program for the Turing Test that looks intelligent by predicting you and outputting intelligent-sounding responses at the right moments, so that you (and only you!) think it is intelligent, when you are really talking to the advanced equivalent of an answerphone*. So he thinks that AIs are going to be puppets, which is a semi-reasonable opinion to come to if you only look at chatbots.
However, Lanier doesn't argue that computers can only ever be puppets, even though his position needs that claim.
Edited: For clarity.
*I think Eliezer said something like: if you see intelligent behaviour, you should guess that there is an intelligence somewhere; it may just not be in the system that appears intelligent. I'm not organised enough to keep a quote file. Anyone have the source?
"GAZP vs. GLUT":