Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics
Sections of the diavlog:
- When will we build the first superintelligence?
- Why quantum computing isn’t a recipe for robot apocalypse
- How to guilt-trip a machine
- The evolutionary psychology of artificial intelligence
- Eliezer contends many-worlds is obviously correct
- Scott contends many-worlds is ridiculous (but might still be true)
Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule-of-thumb, quantum computing, Deep Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Deep Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here, so this has probably already been discussed, but what about free will? How do AI researchers address that issue?
I'm with SA on the MWI of QM. I think EY is throwing the scientific baby out with the physics bathwater. It seems to me that the MWI commits the mind projection fallacy, or the fallacy of the primacy of consciousness. I also agree with whoever said (paraphrasing) that all these interpretations of QM just differ on where they hide the contradictions — they are all unsatisfactory, and it will take a genius to figure it out.
Neither consciousness nor mind is primary in the MWI, so I can't see where you're getting that from.