Previously, I asked "Will the world's elites navigate the creation of AI just fine?" My current answer is "probably not," but I think it's a question worth additional investigation.
As a preliminary step, and with the help of MIRI interns Jeremy Miller and Oriane Gaillard, I've collected a few stated opinions on the issue. This survey of stated opinions is not representative of any particular group, and is not meant to provide strong evidence about what is true on the matter. It's merely a collection of quotes we happened to find on the subject. Hopefully others can point us to other stated opinions — or state their own opinions.
MIRI researcher Eliezer Yudkowsky is famously pessimistic on this issue. For example, in a 2009 comment, he replied to the question "What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?" by saying "the answer is, 'None.' It's like asking how you should move your legs to walk faster than a jet plane" — again, implying extreme skepticism that political elites will manage AI properly.1
Cryptographer Wei Dai is also quite pessimistic:
Stanford philosopher Ken Taylor has also expressed pessimism, in an episode of Philosophy Talk called "Turbo-charging the mind":
Here, Taylor seems to express the view that humans are not yet morally and rationally advanced enough to be trusted with powerful technologies. This general view has been expressed before by many others, including Albert Einstein, who wrote that "Our entire much-praised technological progress... could be compared to an axe in the hand of a pathological criminal."
In response to Taylor's comment, MIRI researcher Anna Salamon (now Executive Director of CFAR) expressed a more optimistic view:
Economist James Miller is another voice of pessimism. In Singularity Rising, chapter 5, he worries about game-theoretic mechanisms incentivizing speed of development over safety of development:
In chapter 6, Miller expresses similar worries about corporate incentives and AI:
Miller expanded on some of these points in his chapter in Singularity Hypotheses.
In a short reply to Miller, GMU economist Robin Hanson wrote that
Unfortunately, Hanson does not explain his reasons for rejecting Miller's analysis.
Sun Microsystems co-founder Bill Joy is famous for the techno-pessimism of his Wired essay "Why the Future Doesn't Need Us," but that article's predictions about elites' likely handling of AI are actually somewhat mixed:
Former GiveWell researcher Jonah Sinick has expressed optimism on the issue:
Paul Christiano is another voice of optimism about elites' handling of AI. Here are some snippets from his "mainline" scenario for AI development:
Christiano is no Pollyanna, however. In the same document, he outlines "what could go wrong," and what we might do about it.
Notes
1 I originally included another quote from Eliezer, but then I noticed that other readers on Less Wrong had elsewhere interpreted that same quote differently than I had, so I removed it from this post.