Passages from The Many Worlds of Hugh Everett III:
Bohr declared that although there may be a reality underlying quantum phenomena, we cannot know what that reality is. It is accessible to human understanding only through the mediation of experiment and classical concepts. Consequently, generations of physicists were taught that there is no quantum reality independent of experimental results, and that the Schrödinger equation, while incredibly useful as a predictive tool, should not be interpreted literally as a description of reality.
Everett took the opposite view.
And:
Fifteen years after the thesis was published, Everett penned a letter (found in the basement) to Max Jammer, who was writing his book on the philosophy of quantum mechanics... [saying] "It seemed to me unnatural that there should be a ‘magic’ process in which something quite drastic occurred (collapse of the wave function), while in all other times systems were assumed to obey perfectly natural continuous laws."
[By] 1954, Everett was not alone in his feeling that the collapse postulate was illogical, but he was one of the very few physicists who dared to publicly express deep dissatisfaction with it... Everett had hoped to reinvent quantum mechanics on its own terms and was disappointed that his revolutionary idea was experimentally unprovable, as the only “proof” of it was that quantum mechanics works — a fact which was already known.
(It wasn't until decades later that David Deutsch and others showed that Everettian quantum mechanics does make novel experimental predictions.)
One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):
The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)
Personally, I am not very comforted by this argument because:
Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help with our research into this question.
In particular, I'd like to know: