cousin_it comments on Q&A with Michael Littman on risks from AI - Less Wrong Discussion

15 points | Post author: XiXiDu 19 December 2011 09:51AM

Comment author: cousin_it 19 December 2011 12:43:00PM *  20 points [-]

I think this expert is anthropomorphizing too much. To pose an extinction risk, a machine doesn't even need to talk, much less replicate all the accidental complexity of human minds. It just has to be good at physics and engineering.

These tasks seem easier to formalize than many other things humans do: in particular, you could probably figure out the physics of our universe from very little observational data, given a simplicity prior and lots of computing power (or a good enough algorithm). Some engineering tasks are limited by computing power too, e.g. protein folding is an already formalized problem, and a machine that could solve it efficiently could develop nanotech faster than humans do.

We humans probably suck at physics and engineering on an absolute scale, just like we suck at multiplying 32-bit numbers (see Moravec's paradox). And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

We now know that playing chess doesn't require human-level intelligence as Littman understands it. It may turn out that destroying the world doesn't require human-level intelligence either. A narrow AI could do just fine.

Comment author: Nymogenous 19 December 2011 01:00:58PM 4 points [-]

I wouldn't take Moravec's paradox too seriously; all it seems to indicate is that we're better at programming a system we've spent thousands of years formalizing (e.g., math) than a system that's built into our brains so that we never really think about it... hardly surprising to me.

Comment author: cousin_it 19 December 2011 01:13:35PM *  7 points [-]

I think Moravec's paradox is more than a selection effect. Face recognition requires more computing power than multiplying two 32-bit numbers, and it's not just because we've learned to formalize one but not the other. We will never get so good at programming computers that our face-recognition programs get faster than our number-multiplication programs.

Comment author: Vladimir_Nesov 19 December 2011 01:23:26PM 8 points [-]

We now know that playing chess doesn't require human-level intelligence as Littman understands it. It may turn out that destroying the world doesn't require human-level intelligence either. A narrow AI could do just fine.

Interesting: this framing moved me more than your previous explanation.

Comment author: JoshuaZ 19 December 2011 04:25:11PM *  7 points [-]

And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

I don't think this follows. Humans spent thousands of years in near stagnation (the time before the dawn of agriculture is but one example). It isn't clear what caused technological civilization to take off, but the timing of new discoveries almost looks like some sort of nearly random process, except that the probability of a new discovery or invention increases as more discoveries occur.

I'd almost consider modeling it as a biased coin which starts off with an extreme bias towards tails, but each time it turns up heads, the bias shifts a bit in the heads direction: something like P(heads on the nth flip) = (1 + k)/(10^5 + k), where k is the number of previous flips that came up heads. If that's the case, then the timing doesn't by itself tell us much about where our capacity for civilization lies. It doesn't look that improbable that some other, now extinct species might even have had capabilities at about our level or higher but went extinct before they got those first few lucky coin flips.
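To make the takeoff behavior of this model concrete, here is a minimal simulation sketch in Python (the window size and total number of flips are arbitrary illustrative choices, not anything derived from the historical record):

```python
import random

def simulate_discoveries(num_flips=1_500_000, base=10**5, seed=0):
    """Return the number of "discoveries" in each block of 100,000 flips.

    The probability of heads (a discovery) on any flip is
    (1 + k) / (base + k), where k is the number of previous heads,
    so discoveries are rare at first and each one makes the next
    slightly more likely.
    """
    random.seed(seed)
    k = 0
    per_window, count = [], 0
    for n in range(1, num_flips + 1):
        if random.random() < (1 + k) / (base + k):
            k += 1
            count += 1
        if n % 100_000 == 0:
            per_window.append(count)
            count = 0
    return per_window

if __name__ == "__main__":
    for i, heads in enumerate(simulate_discoveries(), start=1):
        print(f"flips {(i - 1) * 100_000:>9,} to {i * 100_000:>9,}: {heads:>6,} discoveries")
```

Run as written, the early windows contain few or no discoveries while the later ones contain rapidly more: a long stretch of apparent stagnation followed by a fast takeoff, with no change in the underlying capability.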

Comment author: billswift 19 December 2011 06:31:20PM 3 points [-]

almost looks like some sort of nearly random process except that the probability of a new discovery or invention increases as more discoveries occur.

And as population increases, that would tend to increase the rate of discovery or invention as well. This is basically Julian Simon's argument in The Great Breakthrough and Its Causes: that a gradually increasing population hit a point where the rates of discovery and invention suddenly started increasing rapidly (and population then started increasing even more rapidly), resulting in the Renaissance and ultimately in the Industrial Revolution. He gives some thought and argument as to why they didn't happen earlier in India or China, but I think the specific arguments are a bit iffy.

Comment author: Vladimir_Nesov 19 December 2011 05:44:08PM *  0 points [-]

In a blink of evolution's eye.

Comment author: whpearson 19 December 2011 09:53:58PM 6 points [-]

It just has to be good at physics and engineering.

I would contend it would have to know what is in its current environment as well: what bacteria and other microorganisms it is likely to face (a question largely unexplored by humans), what chemicals it will have available (as potential feedstocks and poisons), and what radiation levels it will encounter.

To get these from first principles, it would have to recreate the evolution of Earth from scratch.

Some engineering tasks are limited by computing power too, e.g. protein folding is an already formalized problem,

What do you mean by a formalized problem in this context? I'm interested in links on the subject.

Comment author: cousin_it 19 December 2011 11:12:23PM *  2 points [-]

Sorry for speaking so confidently. I don't really know much about protein folding; that was just the impression I got from Wikipedia: 1, 2.

Comment author: JoshuaZ 19 December 2011 10:06:59PM *  3 points [-]

There are a variety of formalized versions of protein folding; see for example this paper (pdf). There are, however, questions about whether these models are completely accurate. Computing how a protein will fold based on a model is often so difficult that testing the actual limits of the models is tricky. The model given in the paper I linked to is known to be too simplistic in many practical cases.
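For readers wondering what a formalized version looks like, here is a minimal sketch of one widely studied formalization, the two-dimensional HP lattice model (offered purely as an illustration; it is not necessarily the exact model from the linked paper): each residue is either hydrophobic (H) or polar (P), a fold is a self-avoiding walk on a grid, and the folding problem is to find the walk that minimizes a simple contact energy.

```python
from itertools import combinations

def hp_energy(sequence, fold):
    """Contact energy of a fold in the 2D HP lattice model.

    `sequence` is a string of 'H' (hydrophobic) and 'P' (polar) residues.
    `fold` is a list of (x, y) lattice coordinates, one per residue,
    forming a self-avoiding walk. Each pair of H residues that are
    adjacent on the lattice but not consecutive in the chain contributes
    -1; the folding problem is to minimize this energy over all folds.
    """
    assert len(sequence) == len(fold)
    assert len(set(fold)) == len(fold), "fold must be self-avoiding"
    energy = 0
    for i, j in combinations(range(len(sequence)), 2):
        if j - i > 1 and sequence[i] == sequence[j] == 'H':
            (x1, y1), (x2, y2) = fold[i], fold[j]
            if abs(x1 - x2) + abs(y1 - y2) == 1:  # lattice neighbours
                energy -= 1
    return energy

# Toy example: a four-residue chain folded into a unit square,
# bringing the two H residues at the ends next to each other.
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> -1
```

Even in this stripped-down model, finding the minimum-energy fold is known to be NP-hard, which is one concrete sense in which the task is limited by computing power rather than by a missing formalization.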

Comment author: wedrifid 19 December 2011 01:57:19PM 3 points [-]

And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

<Expression of extravagant agreement and emphasis/>

Comment author: cousin_it 19 December 2011 02:13:08PM 1 point [-]

This is a well-known argument. I got it from Eliezer somewhere, don't remember where.

Comment author: wedrifid 19 December 2011 02:38:02PM *  7 points [-]

This is a well-known argument. I got it from Eliezer somewhere, don't remember where.

Yes, and I'm sick of trying to explain to people why "we have no evidence that it is possible to have higher than human intelligence" is trivially absurd for approximately this reason. Hence the encouragement of others saying the same thing.

Comment author: XiXiDu 19 December 2011 04:00:00PM 1 point [-]

I'm sick of trying to explain to people why "we have no evidence that it is possible to have higher than human intelligence" is trivially absurd for approximately this reason.

You wrote a reference post where you explain why you would deem anyone who wants to play quantum roulette crazy. If the argument mentioned by cousin_it is that good and you have to explain it to people that often, I want you to consider writing a post on it where you outline the argument in full detail ;-)

You could start by showing how most evolutionary designs are far short of their maximum efficiency and that we therefore have every reason to believe that human intelligence barely reached the minimum threshold necessary to build a technological civilization.

Comment author: wedrifid 19 December 2011 04:03:11PM *  1 point [-]

You could start by showing how most evolutionary designs are far short of their maximum efficiency and that we therefore have every reason to believe that human intelligence barely reached the minimum threshold necessary to build a technological civilization.

The therefore is pointing the wrong direction. That's the point!

Comment author: XiXiDu 19 December 2011 04:29:48PM *  0 points [-]

The therefore is pointing the wrong direction. That's the point!

Human intelligence barely reached the minimum threshold necessary to build a technological civilization, and therefore we have every reason to believe that most evolutionary designs are far short of their maximum efficiency? That seems like a pretty bold claim based on the fact that some of our expert systems are better at narrow tasks that were never optimized for by natural selection.

If you really want to convince people that human intelligence is the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

Comment author: FAWS 19 December 2011 10:45:30PM *  4 points [-]

If you really want to convince people that human intelligence is the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

E.g., trying to estimate how fast the first animal that walked on land could run and comparing that with how fast the currently fastest animal on land can run, how fast the fastest robot with legs can run, and how fast the fastest car can "run"?

Comment author: wedrifid 19 December 2011 04:39:06PM -2 points [-]

Cousin_it, this is why I am glad to see people other than myself explaining the concept. I just don't have the patience to deal with this kind of thinking.

Comment author: XiXiDu 19 December 2011 05:18:48PM *  3 points [-]

I don't see how our not having built a technological civilization earlier in our history constitutes evidence that we only have the minimum intelligence necessary to do so. I don't think that intelligence makes as much of a difference to how quickly discoveries are made as you seem to think.

...this is why I am glad to see people other than myself explaining the concept.

I have never seen you explain the concept, nor have I seen you refer to an explanation. I must have missed that, but I also haven't read all of your comments.

Comment author: jsteinhardt 20 December 2011 03:13:35PM 3 points [-]

I don't understand. XiXiDu's thinking was "if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way, why don't we actually do that so that we can see if we're right or not?" That seems to me like the cornerstone of critical thought, or am I missing what you found objectionable?

Comment author: XiXiDu 20 December 2011 04:46:54PM *  0 points [-]

..."if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way,..."

That is a good suggestion and I endorse it. I have however been thinking about something else.

I suspected that people like cousin_it and wedrifid must base their assumption that human intelligence is close to the minimum level of efficiency (optimization power per resources used) on other evidence, e.g. that expert systems can factor numbers 10^180 times faster than humans can. I didn't think that the whole argument rested on the fact that humans didn't start to build a technological civilization right after they became mentally equipped to do so.

Don't get the wrong impression here: I agree that it is very unlikely that human intelligence is close to the optimum. But I also don't see that we have much reason to believe that it is close to the minimum. Further, I believe that intelligence is largely overrated by some people on Less Wrong, and that conceptual revolutions, e.g. the place-value method of encoding numbers, wouldn't have been discovered much more quickly by much more intelligent beings, other than due to lucky circumstances. In other words, I think that the speed of discovery is not proportional to intelligence but rather that intelligence quickly hits diminishing returns. (Anecdotal evidence here includes that real-world success doesn't really seem to scale with IQ points. Are people like Steve Jobs that smart? Could Terence Tao become the richest person if he wanted to? Are high-karma people on Less Wrong unusually successful?)

But I digress. My suggestion was to compare technological designs with evolutionary designs: for example, animal echolocation with modern sonar, ant colony optimization algorithms with the actual success rate of ant behavior, or the energy efficiency and maneuverability of artificial flight with insect or bird flight...

If intelligence is a vastly superior optimization process compared to evolution, then I suspect that any technological replications of evolutionary designs that have been around for some time should have been optimized to the point where their efficiency vastly outperforms that of their natural counterparts. From this we could then conclude that intelligence itself is unlikely to be an outlier, but is, just like those other evolutionary designs, very inefficient and subject to strong artificial amplification.

ETA: I believe that even sub-human narrow AI is an existential risk. So the fact that I believe lots of people here are hugely overconfident about a possible intelligence explosion doesn't really change much with respect to risks from AI.