NxGenSentience comments on Superintelligence 5: Forms of Superintelligence - Less Wrong

Post author: KatjaGrace 14 October 2014 01:00AM


Comment author: KatjaGrace 14 October 2014 04:05:29AM 7 points

I'm confused about Bostrom's definition of superintelligence for collectives. The following quotes suggest that it is not the same as the usual definition of superintelligence (greatly outperforming a human in virtually all domains), but instead means something like 'greatly outperforming current collective intelligences', which have been improving for a long time:

To obtain a collective superintelligence from any present-day collective intelligence would require a very great degree of enhancement. The resulting system would need to be capable of vastly outperforming any current collective intelligence or other cognitive system across many very general domains. (p54)

Note that the threshold for collective superintelligence is indexed to the performance levels of the present - that is, the early twenty-first century. Over the course of human prehistory, and again over the course of human history, humanity's collective intelligence has grown by very large factors. ... current levels of human collective intelligence could be regarded as approaching superintelligence relative to a Pleistocene baseline. (p55)

If so, this seems strange. It hasn't been quite clear why we should care about the threshold of superintelligence in particular, but if the term picks out different levels of capability for different kinds of entity, it is hard to see how the concept can play an interesting role in our reasoning. The same goes if it marks a moving, relative point.

If we want to claim that something special will happen when AI reaches a certain level of intelligence, it seems we should prima facie expect something similar to happen when organizations reach that level of intelligence. It has been unclear to me from the book so far whether Bostrom thinks organizations are currently superintelligent, by non-collective metrics of superintelligence, yet this seems an important point.

Comment author: RobbBB 15 October 2014 11:15:13PM 7 points

Present-day humanity is a collective intelligence that is clearly 'superintelligent' relative to individual humans; yet Bostrom expresses little to no interest in this power disparity, and he clearly doesn't think his book is about the 2014 human race.

So I think his definitions of 'superintelligence' are rough, and Bostrom is primarily interested in the invincible inhuman singleton scenario: the possibility of humans building something other than humanity itself that can vastly outperform the entire human race in arbitrary tasks. He's also mainly interested in sudden, short-term singletons (the prototype being seed AI). Things like AGI and ems mainly interest him because they might produce an invincible singleton of that sort.

Wal-Mart and South Korea have a lot more generality and optimization power than any living human, but they're not likely to become invincibly superior to rival collectives anytime soon, in the manner of a paperclipper, and they're also unlikely to explosively self-improve. That matters more to Bostrom than whether they technically count as 'superintelligences'. I get the impression that Bostrom sets this kind of optimizer aside more because it doesn't fit his prototype, and because its short-term risks and benefits prima facie seem much smaller, than because of any detailed analysis of the long-term effects of power-acquiring networks.

It's important (from Bostrom's perspective) that the invincible singleton scenario is defined relative to the humans alive when the technology arrives; if we build an AGI in 2100 that's superintelligent relative to 2014 humans, but stupid relative to 2100 humans, then Bostrom doesn't particularly care (unless that technology might lead to an AI that's superintelligent relative to its contemporaries).

It's also important for the invincible singleton scenario, at least in terms of selecting a prototype case, that the optimizer is extrinsic to humanity (or, in the case of ems and biologically super-enhanced humans, which I get the impression are edge cases in Bostrom's conceptual scheme, at least extrinsic to some privileged subset of humanity). That's why it's outside the scope of the book Superintelligence to devote much time to the risks of mundane totalitarianism, the promise of a world government, or the general class of cases where humanity just keeps gradually improving in intelligence without any (intragenerational) conflicts or value clashes. That remains true even though it's hard to define 'superintelligence' in a way that excludes governments, corporations, humanity-as-a-whole, etc.

(I get the vague feeling in Superintelligence that Bostrom finds 'merely human' collective superintelligence relatively boring, except in so far as it affects the likely invincible inhuman singleton scenarios. It's not obvious to me that Hansonian em-world scenarios deserve multiple chapters while 'Networks and organizations' deserves a fairly dismissive page-and-a-half mention; but if you're interested in invincible singletons extrinsic to humanity, and especially in near-term AI pathways to such, it makes sense to see ems as more strategically relevant.)

Bostrom's secondary interest is in the effects of enhancing humans' / machines' / institutions' general problem-solving abilities relative to ~2014 levels. So he does discuss things other than invincible singletons, and he does care about how human intelligence will change relative to today (much more so than he cares about superintelligence relative to, say, 900 BC). But I don't think this is the main focus.

Comment author: NxGenSentience 20 October 2014 11:23:12AM 0 points

Thanks for the very nice post.