All of mvp9's Comments + Replies

Exploration is a very human activity; it's in our DNA, you might say. I don't think we should take for granted that an AI would be as obsessed with expanding into space for that purpose.

Nor is it obvious that it will want to continuously maximize its resources, at least on the galactic scale. This is also a very biological impulse - why should an AI have that built in?

When we talk about AI this way, I think we commit something like Descartes' Error (see Damasio's book of that name): thinking that the rational mind can function on its own. But our hig... (read more)

2KatjaGrace
Good question. The basic argument is that whatever an AI (or any creature) values, more resources are very likely to be useful for that goal. For instance, if it just wants to calculate whether large numbers are prime or not, it will do this much better if it has more resources to devote to calculation. This is elaborated somewhat in papers by Omohundro and Bostrom. That is, while exploration and resource acquisition are in our DNA, there is a very strong reason for them to be there, so they are likely to be in the DNA-analog of any successful general goal-seeking creature.
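As a toy illustration of that point (the code below is my sketch, not part of the original comment; the primality goal is just the example given above): even a goal that says nothing about resources is pursued better with more of them, because more compute simply means more numbers classified per unit of time.

```python
import time


def is_prime(n):
    """Trial-division primality test; cost grows roughly with sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def numbers_classified(budget_seconds):
    """Count how many integers get classified as prime or composite
    within a fixed compute budget."""
    deadline = time.time() + budget_seconds
    n, primes = 2, 0
    while time.time() < deadline:
        if is_prime(n):
            primes += 1
        n += 1
    return n - 2, primes


# Doubling the budget roughly doubles the work done toward the goal:
# the goal itself is indifferent to resources, but progress on it is not.
print(numbers_classified(0.5))
print(numbers_classified(1.0))
```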

Lera Boroditsky is one of the premier researchers on this topic. They've also done some excellent work on comparing spatial/time metaphors in English and Mandarin (?), showing that the dominant idioms in each language affect how people cognitively process motion.

But the question is broader -- is some form of natural language required? ("Natural," roughly meaning used by a group in day-to-day life, is the key qualifier here.) Differences between major natural languages are for the most part relatively superficial and translatable, because their speakers are generally dealing with a similar reality.

2shullak7
I think that is one of my questions; i.e., is some form of natural language required? Or maybe what I'm wondering is what intelligence would look like if it weren't constrained by language -- if that's even possible. I need to read/learn more on this topic. I find it really interesting.

A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic is that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so a system cannot master the one without mastering the other. I believe that's the rationale behind the Turing test.

It's interesting that you mention machine translation though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time b... (read more)

2shullak7
I think that "the role of language in human thought" is one of the ways that AI could be very different from us. There is research into the way that different languages affect cognitive abilities (e.g. -- https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English-speaker, I may have more difficulty learning the base-10 structure in numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc. I'm guessing that an AI's cognitive ability wouldn't change no matter what human language it's using, but I'd be interested to know what people doing AI research think about this.

I think the best bets as of today would be truly cheap energy (whether through fusion, ubiquitous solar, etc.) and nano-fabrication. Though it may not happen, we could see these play out over a 20-30 year term.

The bumps from this, however, would be akin to the steam engine: dwarfed by (or possibly a result of) the AI.

1Paul Crowley
The steam engine heralded the Industrial Revolution and a lasting large increase in the doubling rate. I would expect rapid economic growth after either of these inventions, followed by a return to the existing doubling rate.

Oh, I completely agree with the prediction of explosive growth (or at least its strong likelihood); I just think (1) or something like it is a much better argument than (2) or (3).

I'll take a stab at it.

We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder it for a second, you'll see that it is so far outside the realm of human experience that one cannot "understand" that dual nature in the sense that you "understand" the motion of planets around the sun. "Understanding" in the sense I mean is the basis for making accurate analogies and gaining insight. Thus I would argue Kepl... (read more)

4pragmatist
The apparent mystery in particle-wave dualism is simply an artifact of using bad categories. It is a misleading historical accident that we hear things like "light is both a particle and a wave" in quantum physics lectures. Really what teachers should be saying is that 'particle' and 'wave' are both bad ways of conceptualizing the nature of microscopic entities. It turns out that the correct representation of these entities is neither as particles nor as waves, traditionally construed, but as quantum states (which I think can be understood reasonably well, although there are of course huge questions regarding the probabilistic nature of observed outcomes). It turns out that in certain experiments quantum states produce outcomes similar to what we would expect from particles, and in other experiments they produce outcomes similar to what we would expect from waves, but that is surely not enough to declare that they are both particles and waves. I do agree with you that entanglement is a bigger conceptual hurdle.
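A minimal sketch of what "representing them as quantum states" buys you (my illustration of the standard two-slit analysis, not part of the original reply): the state assigns an amplitude to each path, and the observed statistics come from the squared magnitude of their sum.

```latex
\[
  \psi(x) = \psi_1(x) + \psi_2(x), \qquad
  P(x) = \lvert \psi(x) \rvert^{2}
       = \lvert \psi_1(x) \rvert^{2} + \lvert \psi_2(x) \rvert^{2}
       + 2\,\operatorname{Re}\!\bigl[\psi_1^{*}(x)\,\psi_2(x)\bigr]
\]
```

The cross term produces the "wave-like" interference pattern, while each individual detection registers a whole quantum, which reads as "particle-like"; neither label describes the state itself.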
8paulfchristiano
I grant that there is a sense in which we "understand" intuitive physics but will never understand quantum mechanics. But in a similar sense, I would say that we don't "understand" almost any of modern mathematics or computer science (or even calculus, or how to play the game of go). We reason about them using a new edifice of intuitions that we have built up over the years to deal with the situation at hand. These intuitions bear some relationship to what has come before, but not one as overt as applying intuitions about "waves" to light. As a computer scientist, I would be quick to characterize this as understanding! Moreover, even if a machine's understanding of quantum mechanics is closer to our idea of intuitive physics (in that they were built to reason about quantum mechanics in the same way we were built to reason about intuitive physics), I'm not sure this gives them more than a quantitative advantage in the efficiency with which they can think about the topic. I do expect them to have such advantages, but I don't expect them to be limited to topics that are at the edge of humans' conceptual grasp!

I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling national-security interest to serve as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.

0cameroncowan
That's already underway.
5VonBrownie
Which raises another issue... is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either by the entity itself (because we might consider pulling the plug) or by its creators who might see some strategic advantage lost (say, a financial institution that has gained a market trading advantage) by having their creation taken away?

I find the whole idea of predicting AI-driven economic growth based on analysis of all of human history as a single set of data really unconvincing. It is one thing to extrapolate up-take patterns of a particular technology based on similar technologies in the past. But "true AI" is so broad, and, at least on many accounts, so transformative, that making such macro-predictions seems a fool's errand.

2paulfchristiano
Here is another way of looking at things:

1. From the inside, it looks like automating the process of automation could lead to explosive growth.
2. Many simple endogenous growth models, if taken seriously, predict explosive growth in finite time (including the simplest ones; see the sketch below).
3. A straightforward extrapolation of historical growth suggests explosive growth in the 21st century (depending on whether you read the great stagnation as a permanent change or a temporary fluctuation).

You might object to any one of these lines of argument on its own, but taken together the story seems compelling to me (at least if one wants to argue "we should take seriously the possibility of explosive growth").
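To make point 2 concrete, here is a minimal sketch (mine, not part of the original comment) of the simplest such model: if output feeds back into its own growth rate with any positive exponent, the solution diverges at a finite date.

```latex
\[
  \dot{Y} = c\,Y^{1+\varepsilon}, \qquad \varepsilon > 0, \quad Y(0) = Y_0
  \;\Longrightarrow\;
  Y(t) = \bigl(Y_0^{-\varepsilon} - \varepsilon c\, t\bigr)^{-1/\varepsilon},
\]
\[
  \text{which diverges as } t \to t^{*} = \frac{Y_0^{-\varepsilon}}{\varepsilon c}.
\]
```

With ε = 0 this is ordinary exponential growth; any ε > 0, however small, turns it into a finite-time singularity, which is the sense in which even the simplest endogenous models predict explosive growth in finite time.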
6KatjaGrace
If you knew AI to be radically more transformative than other technologies, I agree that predictions based straightforwardly on history would be inaccurate. If you are unsure how transformative AI will be, though, it seems helpful to look at how often other technologies have made a big difference, and how much of a difference they have made. I suspect many technologies would have seemed transformative ahead of time - e.g. writing - yet seem to have made little difference to economic growth.

Again, this is a famous one, but Watson seems really impressive to me. It's one thing to understand basic queries and do a DB lookup in response, but its ability to handle indirect questions that would confuse many a person (guilty) was surprising.

On the other hand, its implementation (as described in The Second Machine Age) seems to be just as algorithmic, brittle, and narrow as Deep Blue's - basically, Watson was only as good as its programmers...

6SteveG
Along with self-driving cars, Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before. The capabilities of such a team have risen dramatically since I first studied AI. Charting and forecasting the capabilities of such a team is worthwhile. Having an estimate of what such a team will be able to accomplish in ten years is material to knowing when they will be able to do things we consider dangerous. After those two demonstrations, what narrow projects could we give a really solid AI team that would stump them? The answer is no longer at all clear. For example, the SAT or an IQ test seems fairly similar to Jeopardy, although the NLP tasks differ. The Jeopardy system also did not incorporate a wide variety of existing methods and solvers, because they were not needed to answer Jeopardy questions. In short order, an IBM team could incorporate systems that extract information from pictures and video, for example, into a Watson application.

Another way to get at the same point, I think, is to ask: are there things that we (contemporary humans) will never understand? (The question comes from a Quora post.)

I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today - or comparing the earliest recorded examples of reasoning in history to those of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I'm not sure it's a good one) that most people today, and certainly most people in the past, will never comprehend, at least with... (read more)

3paulfchristiano
I am generally quite hesitant about using the differences between humans as evidence about the difficulty of AI progress (see here for some explanation). But I think this comparison is a fair one in this case, because we are talking about what is possible rather than what will be achieved soon. The exponentially improbable tails of the human intelligence distribution are a lower bound for what is possible in the long run, even without using any more resources than humans use. I do expect the gap between the smartest machines and the smartest humans to eventually be much larger than the gap between the smartest human and the average human (on most sensible measures).
1KatjaGrace
If there are insights that some humans can't 'comprehend', does this mean that society would never discover certain facts had the most brilliant people not existed, or just that they would never be able to understand them in an intuitive sense?
3billdesmedt
Actually, wrt quantum mechanics, the situation is even worse. It's not simply that "most people ... will never comprehend" it. Rather, per Richard Feynman (inventor of Feynman diagrams, and arguably one of the 20th century's greatest physicists), nobody will ever comprehend it. Or as he put it, "If you think you understand quantum mechanics, you don't understand quantum mechanics." (http://en.wikiquote.org/wiki/Talk:Richard_Feynman#.22If_you_think_you_understand_quantum_mechanics.2C_you_don.27t_understand_quantum_mechanics..22)

Depends on the criteria we place on "understanding." Certainly an AI may act in ways that invite us to attribute 'common sense' to it in some situations, without solving the 'whole problem.' Watson would seem to be a case in point - apparently demonstrating true language understanding within a broad but still strongly circumscribed domain.

Even if we take "language understanding" in the strong sense (i.e. meaning native fluency, including ability for semantic innovation, things like irony, etc), there is still the question of ph... (read more)

3KatjaGrace
Whether we are concerned about the internal experiences of machines seems to depend largely on whether we are trying to judge the intrinsic value of the machines, or judge their consequences for human society. Both seem important.