The Register talks to Google's Alfred Spector:
Google's approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about inputs and developing hypotheses, one that bootstraps its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers attempted through much of the '60s and '70s, Google has instead taken a modular approach.
"We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users," Spector said in an earlier speech at Google IO. "If we combine all these things together with humans in the loop continually providing feedback our systems become ... intelligent."
Spector calls this his "combination hypothesis", and though Google is not there yet (Skynet does not exist), you can see the first green buds of systems that have the appearance of independent intelligence in some of the company's user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity.
(Emphasis mine.) I don't have a transcript, but there are videos online. Spector is clearly smart, and he apparently expects AI to emerge in a completely different way than Eliezer does. And he has all the resources and funding he wants, probably 3-4 orders of magnitude more than MIRI's. His approach, if workable, also appears safe, since it requires human feedback in the loop. What do you guys think?
I think that's overstated. Spector is proposing tool AI, and I think Eliezer regards tool AI as a perfectly doable way of creating AI; it's just extremely unsafe if it's ever pushed to the point of being truly "independent intelligence".