Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics
Sections of the diavlog:
- When will we build the first superintelligence?
- Why quantum computing isn’t a recipe for robot apocalypse
- How to guilt-trip a machine
- The evolutionary psychology of artificial intelligence
- Eliezer contends many-worlds is obviously correct
- Scott contends many-worlds is ridiculous (but might still be true)
Taking a person's most fundamental beliefs into account when trying to figure out what their true intentions are is not an ad hominem; it's common sense.
That's short-sighted. Nothing may really turn on the question of transubstantiation, but there's a lot that turns on the cognitive processes that led millions of people to believe that a cracker is the body of a magical Jewish half-deity.
I'm all in favor of "actually making things better", but the middle-of-the-road solution that the Templeton Foundation is (outwardly, deceitfully) espousing won't do that. Middle-of-the-road solutions are easy; they allow us to avoid sounding shrill, strident, and militant. But easiness is not effectiveness.
There is harm, because people who don't mean something vacuous by 'God' like to give the impression that they do to shield themselves against criticism. And thanks to 'pragmatism', it usually works.
If theists need to pretend to be atheists to be taken seriously, then we've already won.