"the brightest minds that build the first AI failed to understands some argument that even former theologians can follow"
This is related to something that I am quite confused about. There are basically 3 possibilities:
(1) You have to be really lucky to stumble across MIRI's argument. Just being really smart is insufficient. So we should not expect whoever ends up creating the first AGI to think about it.
(2) You have to be exceptionally intelligent to come up with MIRI's argument, yet nowhere near as intelligent to build an AGI that can take over the world.
(3) MIRI's argument is very complex. Only someone who deliberately thinks about risks associated with AGI could come up with all the necessary details of the argument. The first people to build an AGI won't arrive at the correct insights in time.
Maybe there is another possibility for how MIRI could end up being right that I have not thought of; if so, let me know.
It seems to me that what all of these possibilities have in common is that they are improbable. You have to be either (1) lucky, (2) exceptionally bright, or (3) right about a highly conjunctive hypothesis.
I would have to say:
(4) MIRI themselves are incredibly bad at phrasing their own argument. Go hunt through Eliezer's LessWrong postings about AI risk, from which most of MIRI's language on the matter is taken. The "genie metaphor" — in which Some Fool Bastard can hand an AGI a Bad Idea task in the form of verbal statements, or C++-like programming at a conceptual level humans understand — appears repeatedly. The "genie metaphor" is a worse-than-nothing case of Generalizing From Fictional Evidence.
I would phrase the argument t...
So I know we've already seen them buying a bunch of ML and robotics companies, but now they're purchasing Shane Legg's AGI startup. This comes after they acquired Boston Dynamics and several smaller robotics and ML firms, and started their own life-extension company.
Is it just me, or are they trying to make Accelerando or something closely related actually happen? Given that they're buying up real experts and not just "AI is inevitable" prediction geeks (who shall remain politely unnamed out of respect for their real, original expertise in machine learning), has someone had a polite word with them about not killing all humans by sheer accident?