Comment author: Peter_Koller 23 September 2014 09:00:04AM *  7 points [-]

If one considers not only the current state of affairs on Earth when predicting when superintelligent AI will occur, but also the whole of the universe (or at least our galaxy), it raises the question of an AI-related Fermi paradox: Where are they?

I assume that extraterrestrial civilizations (given they exist) which have advanced to a technological society will undergo accelerating progress similar to ours and create a superintelligent AI. After the intelligence explosion, the AI would start consuming energy from planets and stars, convert matter to further its computational power, and send out Von Neumann probes (all of this with some probability), which would reach every star of the Milky Way in roughly a million years if travelling at just 10% of the speed of light -- and turn everything into computronium. It does not have to be a catastrophic event for life; a benign AI could spare worlds that harbor life. It would spread orders of magnitude faster than its biological creators, because it would copy itself faster and travel in many more directions simultaneously. Our galaxy could/should have been consumed by an ever-growing sphere of AI up to billions of years ago (and probably many times over, by competing AIs from various civilizations). But we don't see any traces of such a thing.
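As a rough sanity check on that timescale (the figures here are round assumptions, not precise astronomy -- a ~100,000 light-year stellar disk and a constant probe cruise speed):

```python
# Back-of-the-envelope check of the probe-expansion timescale.
# Assumptions: Milky Way stellar disk ~100,000 light-years across,
# probes cruising at a constant 10% of the speed of light.

GALAXY_DIAMETER_LY = 100_000   # approximate diameter, light-years
PROBE_SPEED_C = 0.10           # probe speed as a fraction of c

# A probe at 0.1c covers 0.1 light-years per year, so time = distance / speed.
crossing_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C        # edge to edge
from_center_yr = (GALAXY_DIAMETER_LY / 2) / PROBE_SPEED_C    # center outward

print(f"Edge-to-edge: {crossing_time_yr:,.0f} years")   # 1,000,000 years
print(f"From center:  {from_center_yr:,.0f} years")     # 500,000 years
```

So a single front crossing the whole disk takes about a million years; a civilization nearer the center, or one whose probes replicate and spread from many fronts at once, covers the galaxy in well under that. Either way the time is negligible on galactic timescales of billions of years.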

Are we alone? Did no one ever create a superintelligent AI? Did the AI and its creators go the other way (i.e., instead of expanding, they chose to retire into a simulated world with no interest in growing, going anywhere, or contacting anyone)? Did it already happen, and are we part or product of it (i.e., a simulation)? Is it happening right in front of us and we, dumb as a goldfish, can't see it?

Should these questions, which would certainly shift the probabilities, be part of AI predictions?

Comment author: mvp9 24 September 2014 05:10:49PM *  2 points [-]

Exploration is a very human activity; it's in our DNA, you might say. I don't think we should take for granted that an AI would be as obsessed with expanding into space for that purpose.

Nor is it obvious that it will want to continuously maximize its resources, at least on a galactic scale. This too is a very biological impulse -- why should an AI have it built in?

When we talk about AI this way, I think we commit something like Descartes' Error (see Damasio's book of that name): thinking that the rational mind can function on its own. But our higher cognitive abilities are primed and driven by emotions and impulses, and when these are absent, one is unable to make even simple instrumental decisions. In other words, before we assume anything about an AI's behavior, we should consider its built-in motivational structure.

I haven't read Bostrom's book, so perhaps he makes a strong argument for these assumptions that I am not aware of, in which case, could someone summarize them?

Comment author: shullak7 17 September 2014 03:47:36PM 2 points [-]

I think that "the role of language in human thought" is one of the ways that AI could be very different from us. There is research into the way that different languages affect cognitive abilities (e.g., https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English speaker, I may have more difficulty learning the base-10 structure in numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc.

I'm guessing that an AI's cognitive ability wouldn't change no matter what human language it's using, but I'd be interested to know what people doing AI research think about this.

Comment author: mvp9 17 September 2014 06:41:20PM 1 point [-]

Lera Boroditsky is one of the premier researchers on this topic. They've also done some excellent work on comparing spatial/time metaphors in English and Mandarin (?), showing that the dominant idioms in each language affect how people cognitively process motion.

But the question is broader -- whether some form of natural language is required ("natural," roughly meaning used by a group in day-to-day life, is key here). Differences between major natural languages are for the most part relatively superficial and translatable, because their speakers are generally dealing with a similar reality.

Comment author: devi 16 September 2014 03:23:38AM 5 points [-]

I think AI-completeness is a quite seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields, I haven't seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce another that proves theorems or philosophizes.

Spontaneously, it feels like everyone here should in principle be able to sketch the outline of such a program (at least for a base AI with perfect language comprehension that we want to reduce to), probably by some version of teaching the AI in natural language as we teach a child. I suspect that the details of some of these reductions might still be useful, especially the parts that don't quite seem to work. For while I don't think we'll see perfect machine translation before AGI, I'm much less convinced that there is a reduction from AGI to perfect translation AI. This illustrates what I suspect is an interesting difference between two problem classes we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily re-purpose to solve the general problem of human-level AI. These classes look the same in that we shouldn't expect to see problems from either of them solved without an imminent singularity, but they differ in that problems in the latter class could serve as motivating examples and test cases for AI work aimed at producing superintelligence.

I guess the core of what I'm trying to say is that arguments about AI-completeness have so far sounded like: "This problem is very, very hard; we don't really know how to solve it. AI in general is also very, very hard, and we don't know how to solve it. So they should be the same." Heuristically there's nothing wrong with this, except we should keep in mind that we could be very mistaken about what is actually hard. I'm just missing the part that goes: "This is very, very hard. But if we could solve it, this other thing would be really easy."

Comment author: mvp9 16 September 2014 05:37:40AM 3 points [-]

A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic is that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), so one cannot do one without the other. I believe that's the rationale behind the Turing test.

It's interesting that you mention machine translation, though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time be "perfect" (indistinguishable from competent native speakers), but they do this through pattern recognition and by leveraging a massive corpus of translation data - not through understanding.

Comment author: KatjaGrace 16 September 2014 04:10:06AM 2 points [-]

Are there foreseeable developments other than human-level AI which might produce much faster economic growth? (p2)

Comment author: mvp9 16 September 2014 05:19:23AM 4 points [-]

I think the best bets as of today would be truly cheap energy (whether through fusion, ubiquitous solar, etc.) and nano-fabrication. Though it may not happen, we could see these play out on a 20-30 year horizon.

The bumps from these, however, would be akin to the steam engine's: dwarfed by (or possibly a result of) AI.

Comment author: paulfchristiano 16 September 2014 03:59:07AM 2 points [-]

Here is another way of looking at things:

  1. From the inside it looks like automating the process of automation could lead to explosive growth.
  2. Many simple endogenous growth models, if taken seriously, predict explosive growth in finite time. (Including the simplest ones.)
  3. A straightforward extrapolation of historical growth suggests explosive growth in the 21st century (depending on whether you read the great stagnation as a permanent change or a temporary fluctuation).
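A minimal sketch of the kind of model point 2 refers to (the constants here are arbitrary, chosen for illustration, not calibrated to real growth data): the simplest super-exponential model, dP/dt = a·P², has an exact solution that diverges at a finite time.

```python
# Toy illustration of point 2: the simplest "hyperbolic" growth model,
# dP/dt = a * P**2, blows up in finite time. Constants are arbitrary.

a, p0 = 0.001, 1.0

# Exact solution: P(t) = p0 / (1 - a*p0*t), which diverges at t* = 1/(a*p0).
t_singularity = 1 / (a * p0)   # here: t* = 1000

def p(t):
    """Exact solution of dP/dt = a*P**2 with P(0) = p0."""
    return p0 / (1 - a * p0 * t)

for t in (0, 500, 900, 990, 999):
    print(f"t={t:4d}  P={p(t):8.1f}")
print(f"P diverges as t -> {t_singularity:.0f}")
```

Exponential growth (dP/dt = a·P) never does this; it is the feedback of the level of P into its own growth *rate* that produces a finite-time singularity, which is why taking such models literally yields "explosive growth" rather than merely fast growth.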

You might object to any one of those lines of argument on its own, but taken together the story seems compelling to me (at least if one wants to argue "We should take seriously the possibility of explosive growth.")

Comment author: mvp9 16 September 2014 05:13:05AM 3 points [-]

Oh, I completely agree with the prediction of explosive growth (or at least its strong likelihood), I just think (1), or something like it, is a much better argument than (2) or (3).

Comment author: paulfchristiano 16 September 2014 03:53:35AM *  3 points [-]

I object (mildly) to this characterization of quantum mechanics. What notion of "understand" do we mean? I can use quantum mechanics to make predictions, I can use it to design quantum mechanical machines and protocols, I can talk philosophically about what is "going on" in quantum mechanics to more or less the same extent that I can talk about what is going on in a classical theory.

I grant there are senses in which I don't understand this concept, but I think the argument would be more compelling if you could make the same point with a clearer operationalization of "understand."

Comment author: mvp9 16 September 2014 04:44:05AM 1 point [-]

I'll take a stab at it.

We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder it for a second, you'll see that it is so far out of the realm of human experience that one cannot "understand" that dual nature in the sense that one "understands" the motion of planets around the sun. "Understanding" in the way I mean is the basis for making accurate analogies and insight. Thus I would argue Kepler was able to use light as an analogy for gravity because he understood both (even though he didn't yet have the math for planetary motion).

Perhaps an even better example is quantum entanglement: theory may predict, and we may observe, particles "communicating" at a distance faster than light, but (for now at least) I don't think we have really incorporated it into our (pre-symbolic) conception of the world.

Comment author: VonBrownie 16 September 2014 02:05:51AM 3 points [-]

Do you think, then, that it's a dangerous strategy for an entity such as Google to use its enormous and growing accumulation of "the existing corpus of human knowledge" as a suitably large data set for pursuing the development of AGI?

Comment author: mvp9 16 September 2014 02:19:09AM 1 point [-]

I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling national-security interest to use as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.

Comment author: AshokGoel 16 September 2014 01:48:48AM *  3 points [-]

I haven't read the book yet, but based on the summary here (and for what it's worth), I found the jump from 1-5 under economic growth above to 6 a little unconvincing.

Comment author: mvp9 16 September 2014 02:15:07AM 4 points [-]

I find the whole idea of predicting AI-driven economic growth based on analysis of all of human history as a single data set really unconvincing. It is one thing to extrapolate uptake patterns of a particular technology based on similar technologies in the past. But "true AI" is so broad and, at least on many accounts, so transformative, that making such macro-predictions seems a fool's errand.

Comment author: KatjaGrace 16 September 2014 01:21:05AM 3 points [-]

Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?

Comment author: mvp9 16 September 2014 01:55:17AM 4 points [-]

Again, this is a famous one, but Watson seems really impressive to me. It's one thing to understand basic queries and run a database query in response, but its ability to handle indirect questions that would confuse many a person (guilty) was surprising.

On the other hand, its implementation (as described in The Second Machine Age) seems to be just as algorithmic, brittle, and narrow as Deep Blue's - basically, Watson was only as good as its programmers...

Comment author: KatjaGrace 16 September 2014 01:08:46AM 4 points [-]

How much smarter than a human could a thing be? (p4) How about the same question, but using no more energy than a human? What evidence do we have about this?

Comment author: mvp9 16 September 2014 01:48:44AM 2 points [-]

Another way to get at the same point, I think, is to ask (borrowing from a Quora post): are there things that we (contemporary humans) will never understand?

I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today - or by comparing the earliest recorded examples of reasoning in history to those of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I'm not sure it's a good one) that most people today, and certainly most people in the past, will never comprehend, at least without massive amounts of effort, and possibly not even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis.

As to the energy issue, I don't see any reason to think that such super-human cognition systems would necessarily require more energy - though they may at first.
