wedrifid comments on Q&A with Michael Littman on risks from AI - Less Wrong Discussion

Post author: XiXiDu 19 December 2011 09:51AM 15 points

Comment author: wedrifid 19 December 2011 01:57:19PM 3 points [-]

And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

<Expression of extravagant agreement and emphasis/>

Comment author: cousin_it 19 December 2011 02:13:08PM 1 point [-]

This is a well-known argument. I got it from Eliezer somewhere, don't remember where.

Comment author: wedrifid 19 December 2011 02:38:02PM *  7 points [-]

This is a well-known argument. I got it from Eliezer somewhere, don't remember where.

Yes, and I'm sick of trying to explain to people why "we have no evidence that it is possible to have higher than human intelligence" is trivially absurd for approximately this reason. Hence my encouragement of others saying the same thing.

Comment author: XiXiDu 19 December 2011 04:00:00PM 1 point [-]

I'm sick of trying to explain to people why "we have no evidence that it is possible to have higher than human intelligence" is trivially absurd for approximately this reason.

You wrote a reference post where you explain why you would deem anyone who wants to play quantum roulette crazy. If the argument mentioned by cousin_it is that good and you have to explain it to people that often, I want you to consider writing a post on it where you outline the argument in full detail ;-)

You could start by showing how most evolutionary designs are far short of their maximum efficiency and that we therefore have every reason to believe that human intelligence barely reached the minimum threshold necessary to build a technological civilization.

Comment author: wedrifid 19 December 2011 04:03:11PM *  1 point [-]

You could start by showing how most evolutionary designs are far short of their maximum efficiency and that we therefore have every reason to believe that human intelligence barely reached the minimum threshold necessary to build a technological civilization.

The "therefore" is pointing in the wrong direction. That's the point!

Comment author: XiXiDu 19 December 2011 04:29:48PM *  0 points [-]

The "therefore" is pointing in the wrong direction. That's the point!

Human intelligence barely reached the minimum threshold necessary to build a technological civilization, and therefore we have every reason to believe that most evolutionary designs are far short of their maximum efficiency? That seems like a pretty bold claim based on the fact that some of our expert systems are better than humans at narrow tasks that were never optimized for by natural selection.

If you really want to convince people that human intelligence is at the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

Comment author: FAWS 19 December 2011 10:45:30PM *  4 points [-]

If you really want to convince people that human intelligence is at the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

E.g. trying to estimate how fast the first animal that walked on land could run and comparing that with how fast the fastest land animal can run today, how fast the fastest legged robot can run, and how fast the fastest car can "run"?

Comment author: wedrifid 19 December 2011 04:39:06PM -2 points [-]

Cousin_it, this is why I am glad to see people other than myself explaining the concept. I just don't have the patience to deal with this kind of thinking.

Comment author: XiXiDu 19 December 2011 05:18:48PM *  3 points [-]

I don't see how our not having built a technological civilization earlier in our history constitutes evidence that we only have the minimum intelligence necessary to do so. I don't think that intelligence makes as much of a difference to how quickly discoveries are made as you seem to think.

...this is why I am glad to see people other than myself explaining the concept.

I have never seen you explain the concept nor have I seen you refer to an explanation. I must have missed that, but I also haven't read all of your comments.

Comment author: jsteinhardt 20 December 2011 03:13:35PM 3 points [-]

I don't understand. XiXiDu's thinking was "if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way, why don't we actually do that so that we can see if we're right or not?" That seems to me like the cornerstone of critical thought, or am I missing what you found objectionable?

Comment author: XiXiDu 20 December 2011 04:46:54PM *  0 points [-]

..."if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way,..."

That is a good suggestion and I endorse it. I have, however, been thinking about something else.

I suspected that people like cousin_it and wedrifid must base their assumption that human intelligence is close to the minimum level of efficiency (optimization power/resources used) on other evidence, e.g. that expert systems can factor numbers 10^180 times faster than humans can. I didn't think that the whole argument rested on the fact that humans didn't start to build a technological civilization right after they became mentally equipped to do so.

Don't get the wrong impression here: I agree that it is very unlikely that human intelligence is close to the optimum. But I also don't see that we have much reason to believe that it is close to the minimum. Further, I believe that intelligence is largely overrated by some people on lesswrong and that conceptual revolutions, e.g. place-value notation for encoding numbers, wouldn't have been discovered much more quickly by much more intelligent beings, except through lucky circumstances. In other words, I think that the speed of discovery is not proportional to intelligence but rather that intelligence quickly hits diminishing returns (anecdotal evidence here includes that real world success doesn't really seem to scale with IQ points. Are people like Steve Jobs that smart? Could Terence Tao become the richest person if he wanted to? Are high karma people on lesswrong unusually successful?).

But I digress. My suggestion was to compare technological designs with evolutionary designs: for example, animal echolocation with modern sonar, the ant colony optimization algorithm with the actual success rate of ant behavior, or the energy efficiency and maneuverability of artificial flight with insect or bird flight...

If intelligence is a vastly superior optimization process compared to evolution, then I suspect that any technological replication of an evolutionary design that has been around for some time should have been optimized to the point that its efficiency vastly outperforms that of its natural counterpart. From this we could then conclude that intelligence itself is unlikely to be an outlier but is, just like those other evolutionary designs, very inefficient and subject to strong artificial amplification.
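
A minimal sketch of the kind of comparison I have in mind is below; the design pairs and efficiency figures are purely hypothetical placeholders, not measured values:

```python
# Hypothetical sketch of the proposed comparison: technological replications
# of evolutionary designs vs. their natural counterparts, summarized as a
# simple efficiency ratio. All figures are made-up placeholders, not data.

# Each entry: (natural design, technological counterpart,
#              natural efficiency, technological efficiency).
# "Efficiency" stands in for whatever metric fits the pair
# (e.g. resolution per watt for sonar, lift per watt for flight).
design_pairs = [
    ("bat echolocation", "modern sonar", 1.0, 5.0),
    ("bird flight", "fixed-wing aircraft", 1.0, 3.0),
    ("ant foraging", "ant colony optimization", 1.0, 2.0),
]

for natural, tech, natural_eff, tech_eff in design_pairs:
    ratio = tech_eff / natural_eff
    print(f"{tech} vs. {natural}: technology/nature efficiency ratio = {ratio:.1f}")

# If ratios like these were consistently large across many pairs, it would
# suggest that evolved designs (perhaps including human intelligence) sit far
# below what deliberate engineering can reach.
```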

ETA: I believe that even sub-human narrow AI is an existential risk. So the fact that I believe lots of people here are hugely overconfident about a possible intelligence explosion doesn't really change that much with respect to risks from AI.

Comment author: JoshuaZ 23 December 2011 12:07:22PM *  2 points [-]

(anecdotal evidence here includes that real world success doesn't really seem to scale with IQ points. Are people like Steve Jobs that smart? Could Terence Tao become the richest person if he wanted to? Are high karma people on lesswrong unusually successful?).

There's evidence that at the upper end higher IQ is inversely correlated with income, but this may be due to people caring about non-monetary rewards (the correlation is positive at lower IQ levels and at lower education levels, but negative at higher education levels). I would not be surprised if there were an inverse correlation between high karma and success levels, since high karma may indicate procrastination and the like. Looking at someone's average karma per post might be a better metric for making this sort of point.