The Register talks to Google's Alfred Spector:
Google's approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about certain inputs and developing hypotheses that let it bootstrap its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers did through much of the 60s and 70s, Google has instead taken a modular approach.
"We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users," Spector said in an earlier speech at Google IO. "If we combine all these things together with humans in the loop continually providing feedback our systems become ... intelligent."
Spector calls this his "combination hypothesis", and though Google is not there yet – SkyNet does not exist – you can see the first green buds of systems that have the appearance of independent intelligence via some of the company's user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity.
(Emphasis mine.) I don't have a transcript, but there are videos online. Spector is clearly smart, and apparently he expects an AI to emerge in a completely different way than Eliezer does. And he has all the resources and funding he wants, probably 3-4 orders of magnitude more than MIRI's. His approach, if workable, also appears safe: it requires human feedback in the loop. What do you guys think?
This kind of AI might not cause the same kinds of existential risk typically described on this website, but I certainly wouldn't call it "safe". These technologies have enormous potential to reshape our lives. In particular, they can have a huge influence on our perceptions.
All of our search results come filtered through Google's algorithm, which, when tailored to the individual user, creates a filter bubble. This changes our perception of what's on the web, and we're scarcely even conscious that the filter bubble exists. If you don't know about sampling bias, how can you correct for it?
With the advent of Google Glass, there is a potential for this kind of filter bubble to pervade our entire visual experience. Instead of physical advertisements painted on billboards, we'll get customized advertisements superimposed on our surroundings. The thought of Google adding things to our visual perception scares me, but not nearly as much as the thought of Google removing things from our perception. I'm sure this will seem quite enticing. That stupid painting that your significant other insists on hanging on the wall? With advanced enough computer vision, Google+ could simply excise it from your perception. What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google glasses and have him removed from view? The temptations of such technology are endless. How many people in the world would rather simply block out the unpleasant stimulus than confront the cause of its unpleasantness - their own personal problems?
Google's continuous user feedback is one of the things that scares me most about its services. Take the search engine, for example. When you're typing something into the search bar, Google autocompletes - changing the way you construct your query. Its suggestions are often quite good, and they make the system run more smoothly - but they take away aspects of individuality and personal expression. The suggestions change the way you form queries, pushing them towards a common denominator, slowly sucking out the last drops of originality.
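To make the "common denominator" point concrete, here's a minimal sketch of how a purely popularity-ranked autocomplete steers everyone toward the same phrasing. The query log and function are hypothetical illustrations, not Google's actual system:

```python
from collections import Counter

# Hypothetical query log aggregated over many users (illustrative only).
query_log = [
    "how to tie a tie",
    "how to tie a tie",
    "how to tie a bowline knot",
    "how to tie a tie",
    "how to tie a scarf",
]

def autocomplete(prefix, log, k=3):
    """Suggest the k most frequent logged queries starting with `prefix`.

    Because ranking is purely by popularity, every user who accepts a
    suggestion reinforces the already-dominant phrasing.
    """
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(autocomplete("how to tie", query_log))
# ['how to tie a tie', 'how to tie a bowline knot', 'how to tie a scarf']
```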
And sure, this matters little in search engines, but can you see how readily it could be applied to things like automatic writing helpers? Imagine you're a high school student writing an essay. An online tool provides you with suggestions for better wordings of your sentences, based on other users' preferences. It will suggest similar wordings to everyone, and suddenly all essays become that much more canned. (Certainly, such a tool could add a bit of randomness to the choice of rewording, but one has to be careful - introduce too much randomness and the quality decreases rapidly.)
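That randomness trade-off can be sketched in a few lines. In this toy example (the candidate phrasings and their scores are made up for illustration), suggestions are sampled with a "temperature" knob: at temperature zero everyone gets the single top-scoring rewording, and as the temperature rises you get more variety but weaker phrasings slip through more often:

```python
import math
import random

# Hypothetical candidate rewordings with quality scores derived from other
# users' preferences (higher = preferred by more users). Illustrative only.
candidates = {
    "This result was unexpected.": 3.0,
    "The outcome surprised us.": 2.5,
    "It was a big surprise, honestly.": 1.0,
}

def sample_rewording(scored, temperature=0.5):
    """Pick a rewording by softmax sampling over quality scores.

    temperature -> 0: always the most popular phrasing (canned essays).
    large temperature: more variety, but low-quality phrasings appear more often.
    """
    if temperature <= 0:
        return max(scored, key=scored.get)
    weights = [math.exp(s / temperature) for s in scored.values()]
    return random.choices(list(scored), weights=weights, k=1)[0]

print(sample_rewording(candidates, temperature=0.0))  # deterministic, most popular
print(sample_rewording(candidates, temperature=2.0))  # noisier, sometimes low quality
```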
I guess I'm just afraid that autocomplete systems will change the way people speak, encouraging everyone to speak in a very standardized way, the way which least confuses the autocomplete system or the natural language understanding system. As computers become more omnipresent, people might switch to this way of speaking all the time, to make it easier for everyone's mobile devices to understand what they're saying. Changing the way we speak changes the way we think; what will this do to our thought processes, if original wording is discouraged because it's hard for the computer to understand?
I do realize that socializing with other humans already exerts this kind of pressure. You have to speak understandably, and this changes what words you'll use. I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It's automatic. In a CS crowd, I'll use CS metaphors; in a non-CS crowd I won't. So I'm not opposed to changing the way I speak based on the context. I'm just specifically worried about the sort of speaking patterns NLP systems will force us into. I'm afraid they'll require us to (1) speak more simply (easier to process), (2) speak less creatively (because the algorithm has only been trained on a limited set of expressions), and (3) speak the way the average user speaks (because that's what the system has gotten the most data on, and can respond best to).
Ok, I'm done ranting now. =) I realize this is probably not what you were asking about in the post. I just felt the need to bring this stuff up, because I don't think LW is as concerned about these things as we should be. People obsess constantly about existential risk and threats to our way of life, but often seem quite gung-ho about new technological advances like Google Glass and self-driving cars.
A post from the sequences that jumps to mind is Interpersonal Entanglement:
If people gain increased control of their reality, they might start simplifying it past the point where there are no more sufficiently complex situations to allow their minds to grow and for them to learn new things. People will start interacting more and more with things that are specifically t...