
wedrifid comments on Q&A with Michael Littman on risks from AI - Less Wrong Discussion

Post author: XiXiDu 19 December 2011 09:51AM




Comment author: wedrifid 19 December 2011 04:03:11PM 1 point

You could start by showing how most evolutionary designs are far short of their maximum efficiency and that we therefore have every reason to believe that human intelligence barely reached the minimum threshold necessary to build a technological civilization.

The "therefore" is pointing in the wrong direction. That's the point!

Comment author: XiXiDu 19 December 2011 04:29:48PM 0 points

The "therefore" is pointing in the wrong direction. That's the point!

Human intelligence barely reached the minimum threshold necessary to build a technological civilization, and therefore we have every reason to believe that most evolutionary designs are far short of their maximum efficiency? That seems like a pretty bold claim to base on the fact that some of our expert systems are better at narrow tasks that were never optimized for by natural selection.

If you really want to convince people that human intelligence is the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

Comment author: FAWS 19 December 2011 10:45:30PM 4 points

If you really want to convince people that human intelligence is the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

E.g., trying to estimate how fast the first animal that walked on land could run and comparing that with how fast the currently fastest land animal can run, how fast the fastest legged robot can run, and how fast the fastest car can "run"?
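
A back-of-the-envelope version of that comparison could be sketched as follows; every figure below is approximate, and the speed of the first land animal in particular is nothing but a placeholder guess, so the ratios only illustrate the shape of the comparison rather than settle it.

```python
# Rough back-of-the-envelope comparison of land "running" speeds.
# All figures are approximate; the early-tetrapod value is purely a guess.
speeds_m_per_s = {
    "first land animal (guess)": 1.0,
    "cheetah (fastest land animal)": 29.0,   # roughly 105 km/h
    "fast legged robot (c. 2011)": 8.0,      # rough order of magnitude
    "land speed record car": 340.0,          # ThrustSSC, roughly 1228 km/h
}

baseline = speeds_m_per_s["first land animal (guess)"]
for name, speed in speeds_m_per_s.items():
    ratio = speed / baseline
    print(f"{name:32s} {speed:6.1f} m/s  ({ratio:5.0f}x the guessed baseline)")
```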

Comment author: wedrifid 19 December 2011 04:39:06PM -2 points

Cousin_it, this is why I am glad to see people other than myself explaining the concept. I just don't have the patience to deal with this kind of thinking.

Comment author: XiXiDu 19 December 2011 05:18:48PM 3 points

I don't see how our not having built a technological civilization earlier in our history constitutes evidence that we only have the minimum intelligence necessary to do so. I don't think that intelligence makes as much of a difference to how quickly discoveries are made as you seem to think.

...this is why I am glad to see people other than myself explaining the concept.

I have never seen you explain the concept nor have I seen you refer to an explanation. I must have missed that, but I also haven't read all of your comments.

Comment author: jsteinhardt 20 December 2011 03:13:35PM 3 points

I don't understand. XiXiDu's thinking was "if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way, so why don't we actually do that and see if we're right or not?" That seems to me like the cornerstone of critical thought, or am I missing what you found objectionable?

Comment author: XiXiDu 20 December 2011 04:46:54PM 0 points

..."if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way,..."

That is a good suggestion and I endorse it. I have, however, been thinking about something else.

I suspected that people like cousin_it and wedrifid must base their assumption that human intelligence is close to the minimum level of efficiency (optimization power per resources used) on other evidence, e.g. that expert systems can factor numbers 10^180 times faster than humans can. I didn't think that the whole argument rested on the fact that humans didn't start to build a technological civilization right after they became mentally equipped to do so.

Don't get the wrong impression here: I agree that it is very unlikely that human intelligence is close to the optimum. But I also don't see that we have much reason to believe that it is close to the minimum. Further, I believe that intelligence is largely overrated by some people on lesswrong and that conceptual revolutions, e.g. the place-value notation method of encoding numbers, wouldn't have been discovered much more quickly by much more intelligent beings, except through lucky circumstances. In other words, I think that the speed of discovery is not proportional to intelligence but rather that intelligence quickly hits diminishing returns (anecdotal evidence here includes that real world success doesn't really seem to scale with IQ points. Are people like Steve Jobs that smart? Could Terence Tao become the richest person if he wanted to? Are high karma people on lesswrong unusually successful?).

But I digress. My suggestion was to compare technological designs with evolutionary designs. For example, compare animal echolocation with modern sonar, the ant colony optimization algorithm with the actual success rate of ant behavior, or the energy efficiency and maneuverability of artificial flight with insect or bird flight...

If intelligence is a vastly superior optimization process compared to evolution, then I suspect that any technological replications of evolutionary designs that have been around for some time should have been optimized to an extent that their efficiency vastly outperforms that of their natural counterparts. And from this we could then draw the conclusion that intelligence itself is unlikely to be an outlier but is, just like those other evolutionary designs, very inefficient and subject to strong artificial amplification.
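
(As an aside, the "ant colony optimization algorithm" mentioned above is a family of stochastic search methods inspired by how ants reinforce pheromone trails. A minimal, self-contained sketch of one such algorithm on a toy travelling-salesman instance might look like the following; the city coordinates and all parameters are arbitrary illustrative choices, not tuned values.)

```python
import math
import random

# Minimal ant colony optimization (ACO) sketch for a small travelling-salesman
# instance. City coordinates and all parameters are arbitrary illustrative choices.
random.seed(0)
n = 12
cities = [(random.random(), random.random()) for _ in range(n)]

def dist(i, j):
    (x1, y1), (x2, y2) = cities[i], cities[j]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % n]) for k in range(n))

pheromone = [[1.0] * n for _ in range(n)]
alpha, beta, evaporation = 1.0, 3.0, 0.5   # pheromone weight, distance weight, decay
n_ants, n_iterations = 20, 100

def build_tour():
    """One 'ant' builds a tour, preferring short edges with strong pheromone."""
    start = random.randrange(n)
    tour, unvisited = [start], [c for c in range(n) if c != start]
    while unvisited:
        i = tour[-1]
        weights = [pheromone[i][j] ** alpha * (1.0 / dist(i, j)) ** beta
                   for j in unvisited]
        j = random.choices(unvisited, weights=weights)[0]
        tour.append(j)
        unvisited.remove(j)
    return tour

best_tour = None
for _ in range(n_iterations):
    tours = [build_tour() for _ in range(n_ants)]
    # Evaporate old pheromone, then let each ant deposit new pheromone on its
    # tour, with shorter tours depositing more.
    for row in pheromone:
        for j in range(n):
            row[j] *= (1.0 - evaporation)
    for tour in tours:
        deposit = 1.0 / tour_length(tour)
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
    candidate = min(tours, key=tour_length)
    if best_tour is None or tour_length(candidate) < tour_length(best_tour):
        best_tour = candidate

print("best tour length found:", round(tour_length(best_tour), 3))
```

Even this toy version shows what the proposed comparison would have to quantify: the artificial variant borrows the ants' trail-reinforcement trick but runs it with whatever parameters and hardware we choose.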

ETA: I believe that even sub-human narrow AI is an existential risk. So the fact that I believe lots of people here are hugely overconfident about a possible intelligence explosion doesn't really change that much with respect to risks from AI.

Comment author: JoshuaZ 23 December 2011 12:07:22PM 2 points

(anecdotal evidence here includes that real world success doesn't really seem to scale with IQ points. Are people like Steve Jobs that smart? Could Terence Tao become the richest person if he wanted to? Are high karma people on lesswrong unusually successful?).

There's evidence that at the upper end higher IQ is inversely correlated with income, but this may be due to people caring about non-monetary rewards (the correlation is positive at lower IQ levels and at lower education levels, but negative at higher education levels). I would not be surprised if there were an inverse correlation between high karma and success levels, since high karma may indicate procrastination and the like. Looking at someone's average karma per post might be a better metric for making this sort of point.

Comment author: XiXiDu 23 December 2011 03:09:52PM -1 points

There's evidence that at the upper end higher IQ is inversely correlated with income, but this may be due to people caring about non-monetary rewards...

That occurred to me as well and seems a reasonable guess. But let me restate the question. Would Marilyn vos Savant be proportionally more likely than a 115 IQ individual to destroy the world if she tried to? I just don't see that, and I still don't understand the hype about intelligence around here.

All it really takes is to speed up the rate of discovery immensely by having a vast number of sub-human narrow AI scientists research various dangerous technologies and stumble upon something unexpected or solve a problem that enables unfriendly humans to wreak havoc.

The main advantage of AI is that it can be applied in parallel to brute-force a solution. But this doesn't imply that you can brute-force problem solving and optimization power itself. Conceptual revolutions are not signposted so that one only has to follow a certain path or use certain heuristics to pick them up. They are scattered randomly across design space, and their frequency decreases disproportionately with each optimization step.

I might very well be wrong and recursive self-improvement is a real and economic possibility. But I don't see there being enough evidence to take that possibility as seriously as some here do. It doesn't look like intelligence is instrumentally useful beyond a certain point, a point that might well be close to human levels of intelligence. Which doesn't imply that another leap in intelligence, like the one that happened since chimpanzees and humans split, isn't possible. But there is not much reason to believe that it can be reached by recursive self-improvement. It might very well take sheer luck, because the level above ours is as hard for us to grasp as our level is for chimpanzees.

So even if it is economically useful to optimize our level of intelligence, there is no evidence that it is possible to reach the next level other than by stumbling upon it. Lots of "ifs": lots of conjunctive reasoning is required to get from simple algorithms via self-improvement to superhuman intelligence.

Comment author: wallowinmaya 26 December 2011 02:31:07PM 1 point

Would Marilyn vos Savant be proportionally more likely than a 115 IQ individual to destroy the world if she tried to?

I think that's the wrong question; it should read:

Is it more likely that a human, rather than a dog, would destroy the world if it tried to?

The difference in intelligence between Marilyn vos Savant and a human with IQ 100 is very small in absolute terms.

Comment author: XiXiDu 15 January 2012 05:50:07PM 0 points

Would Marilyn vos Savant be proportionally more likely than a 115 IQ individual to destroy the world if she tried to?

I think that's the wrong question; it should read:

Is it more likely that a human, rather than a dog, would destroy the world if it tried to?

I believe that the question is an important one. An AI has to be able to estimate the expected utility of improving its own intelligence, and I think it is unlikely that any level of intelligence is capable of estimating the expected utility of a whole level above its own, because 1) it can't possibly know where it is located in design space, 1b) how it can detect insights in solution space, 2) how many resources it takes, or 3) how long it takes. Therefore any AI has to calculate the expected utility of the next small step towards the next level, the expected utility of small amplifications of its intelligence similar to the difference between an average human and Marilyn vos Savant. It has to ask 1) whether the next step is instrumentally useful and 2) whether resources spent on amplification would be better spent on 2b) pursuing other ways to gain power or 2c) pursuing terminal goals directly given its current resources and intelligence.
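
A toy sketch of that kind of calculation might look like the following; the probabilities, payoffs and costs are entirely invented for illustration and do not model any real agent.

```python
# Toy expected-utility comparison for the decision described above: should an
# agent spend resources on a small intelligence amplification step, on other
# ways to gain power, or on pursuing its terminal goals directly?
# Every number here is made up purely for illustration.

def expected_utility(p_success, payoff, cost):
    """Expected payoff of an option minus the resources it consumes."""
    return p_success * payoff - cost

options = {
    # name: (estimated probability of success, payoff if it works, resource cost)
    "amplify own intelligence slightly": (0.3, 10.0, 4.0),
    "pursue other ways to gain power":   (0.6,  6.0, 3.0),
    "pursue terminal goals directly":    (0.9,  4.0, 1.0),
}

for name, (p, payoff, cost) in options.items():
    print(f"{name}: expected utility = {expected_utility(p, payoff, cost):.2f}")

best = max(options, key=lambda name: expected_utility(*options[name]))
print("best option under these made-up numbers:", best)
```

The point of the argument above is that the estimates in the first row are precisely the numbers the agent is least equipped to know.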

I believe that those questions shed light on the possibility of recursive self-improvement and its economic feasibility.

Comment author: wallowinmaya 15 January 2012 06:34:50PM 1 point

1) it can't possibly know where it is located in design space, 1b) how it can detect insights in solution space, 2) how many resources it takes, or 3) how long it takes.

I think it's possible that an AI could at least roughly estimate where it's located in design space, how many resources it would need and how long it would take to increase its intelligence, but I'm happy to hear counter-arguments.

And it seems to me that the point of diminishing returns in increasing intelligence is very far away. I would do almost anything to gain, say, 30 IQ points, and that's nothing in absolute terms.

But I agree with you that these questions are important and that trusting my intuitions too much would be foolish.