Cthulhoo comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM


Comment author: Cthulhoo 14 November 2011 12:49:23PM 1 point [-]

Even if we postulate an increasing difficulty or threshold of "contemplative productivity" per new "layer" of intelligence, the following remains true: any AGI designed to be more "intelligent" than the (A)GI that designed it is material evidence that general intelligence can be incremented upward through design, and that general intelligence itself can do the incrementing. It follows that any general intelligence capable of designing an intelligence superior to itself will likely do so in a way that produces a general intelligence which is also superior at designing superior intelligences, since this has already been demonstrated to be a characteristic of general intelligences of the original's magnitude.

There isn't the need to have infinite recursion. Even if there is some cap on intelligence (due to resource optimization or something else we have yet to discover), the risk is still there unless the cap is exceptionally near the human level. If the AI is to us what we are to chimps, that may very well be enough.

Comment author: Logos01 14 November 2011 01:42:59PM *  0 points [-]

There isn't the need to have infinite recursion.

Or, frankly, recursion at all. Say we can't make anything smarter than humans... but we can make them reliably smart, and smaller than humans. An AGI bot as smart as our average "brilliant" guy, with no morals and the ability to accelerate as only solid-state equipment can, is frankly pretty damned scary all on its own.

(You could also count, under some auspices, "intelligence explosion" as meaning "an explosion in the number of intelligences". Imagine if for every human being the AGIs had 10,000 minds. Exactly what impact would the average human's mental contributions have? What, then, of 'intellectual labor'? Or manual labor?)

Comment author: Cthulhoo 14 November 2011 01:59:32PM 2 points [-]

Good point.

In addition, supposing the AI is slightly smarter than humans and can easily replicate itself, Black Team effects could possibly be relevant (just a hypothesis, really, but still interesting to consider).

Comment author: TimS 14 November 2011 08:35:52PM *  0 points [-]

Could you expand on this a little further? I'm not afraid of amoral, fast-thinking, miniature Isaac Newtons unless they exist in a substantial EDIT: number (>1000 at the very least) or are not known about by the relevant human policy-makers.

ETA: what it used to say at the edit was "fraction of the human population (>1% at the very least)". TheOtherDave corrected my mis-estimate.

Comment author: RomeoStevens 15 November 2011 02:40:45AM 1 point [-]

Have you read That Alien Message? http://lesswrong.com/lw/qk/that_alien_message/

Comment author: TimS 15 November 2011 04:07:05AM 0 points [-]

TheOtherDave showed that I mis-estimated the critical number. That said, there are several differences between my hypothetical and the story.

1) Most importantly, the difference between the average human and Newton is smaller than the difference portrayed between the aliens and the humans.

2) There is a huge population of humans in the story, and I expressly limited my non-concern to much smaller populations.

3) The super-intelligences in the story do not appear to be known about by the relevant policy-makers (i.e. senior military officials). Not that it would matter in the story, but it seems likely to matter if the population of supers were much smaller.

Comment author: RomeoStevens 15 November 2011 04:42:56AM 0 points [-]

I'm not sure I see the point of the details you mention. The main thrust is that humans within the normal range, given a million-fold speedup (as silicon allows) and unlimited collaboration, would be a de facto superintelligence.

Comment author: TimS 15 November 2011 02:06:20PM *  0 points [-]

The humans were not within the current normal range; the average was explicitly higher. And I think the aliens' average intelligence was lower than the current human average, although the story is not explicit on that point. And there were billions of these super-humans.

Let me put it this way: Google is smarter, wealthier, and more knowledgeable than I am. But even if everyone at Google thought millions of times faster than everyone else, I still wouldn't worry about them taking over the world. Unless nobody else important knew about this capacity.

AI is a serious risk, but let's not underestimate how hard it is to be as capable as a Straumli Perversion.

Comment author: RomeoStevens 15 November 2011 08:38:36PM 0 points [-]

The higher average does not mean that they were not within the normal range; they are not individually superhuman.

Comment author: TheOtherDave 14 November 2011 09:18:49PM 1 point [-]

I don't have a clear sense of how dangerous a group of amoral, fast-thinking, miniature Isaac Newtons might be, but it would surprise me if a particularly important risk-evaluation threshold were crossed between 70 million of them and a mere, say, 700,000.

Admittedly, I may be being distracted by the image of hundreds of thousands of miniature Isaac Newtons descending on Washington DC or something. It's a far more entertaining idea than those interminable zombie stories.

Comment author: TimS 14 November 2011 09:58:27PM *  0 points [-]

You are right that 1% of the world population is likely too large; I probably should have said "substantial numbers in existence." I've adjusted my estimate, so amoral Newtons don't worry me unless they are secret or exist in substantial numbers (>1000 at the very least). And the minimum number gets bigger unless there is reason to think the amoral Newtons would cooperate amongst themselves to dominate humanity.

Comment author: Logos01 15 November 2011 03:27:28AM 2 points [-]

I don't think the numbers I was referencing quite came across to you.

I was postulating humans:AGIs :: 1:10,000

So not 70,000 Newtons or 70 million Newtons -- 70,000 billion Newtons.