Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics
Sections of the diavlog:
- When will we build the first superintelligence?
- Why quantum computing isn’t a recipe for robot apocalypse
- How to guilt-trip a machine
- The evolutionary psychology of artificial intelligence
- Eliezer contends many-worlds is obviously correct
- Scott contends many-worlds is ridiculous (but might still be true)
OK, if that's really what it takes, I guess I'll leave it at that. But I don't see the loss of generality (conservation laws holding only for the closed system as a whole, rather than for each branch) as a good thing, and I can't understand how weighting a world (one that is claimed to actually exist) by a probability measure (which I've seen claimed to represent observed frequencies) is a reasonable thing to do.
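For readers following along, the weighting at issue is presumably the Born rule applied to branches; a minimal sketch of the standard statement (my notation, not from the diavlog):

```latex
% A state decomposed into orthogonal branches ("worlds"):
\[
  \lvert\psi\rangle \;=\; \sum_i c_i \,\lvert \mathrm{world}_i\rangle ,
  \qquad
  p_i \;=\; \lvert c_i \rvert^{2}, \quad \sum_i p_i = 1 .
\]
% Many-worlds asserts that every branch |world_i> exists;
% the Born weights p_i are what long-run observed frequencies
% are claimed to match, which is the step being questioned above.
```

The puzzle, as stated, is what it means to attach the frequencies $p_i$ to branches that are all equally claimed to be real.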
I would actually like to understand this, and I suspect strongly that I'm missing something basic. Unfortunately, I don't have the time to make my ignorance suitable for public consumption, but if anyone would like to help enlighten me privately, I'd be delighted.