NxGenSentience comments on Superintelligence 5: Forms of Superintelligence - Less Wrong

Post author: KatjaGrace 14 October 2014 01:00AM




Comment author: RobbBB 15 October 2014 11:15:13PM 7 points

Present-day humanity is a collective intelligence that is clearly 'superintelligent' relative to individual humans; yet Bostrom expresses little to no interest in this power disparity, and he clearly doesn't think his book is about the 2014 human race.

So I think his definitions of 'superintelligence' are rough, and Bostrom is primarily interested in the invincible inhuman singleton scenario: the possibility of humans building something other than humanity itself that can vastly outperform the entire human race in arbitrary tasks. He's also mainly interested in sudden, short-term singletons (the prototype being seed AI). Things like AGI and ems mainly interest him because they might produce an invincible singleton of that sort.

Wal-Mart and South Korea have a lot more generality and optimization power than any living human, but they're not likely to become invincibly superior to rival collectives anytime soon, in the manner of a paperclipper, and they're also unlikely to explosively self-improve. That matters more to Bostrom than whether they technically get defined as 'superintelligences'. I get the impression Bostrom ignores that kind of optimizer more because it doesn't fit his prototype, and because the short-term risks and benefits prima facie seem much smaller, than because of any detailed analysis of the long-term effects of power-acquiring networks.

It's important (from Bostrom's perspective) that the invincible singleton scenario is defined relative to humans at the time it's invented; if we build an AGI in 2100 that's superintelligent relative to 2014 humans, but stupid relative to 2100 humans, then Bostrom doesn't particularly care (unless that technology might lead to an AI that's superintelligent relative to its contemporaries).

It's also important for the invincible singleton scenario, at least in terms of selecting a prototype case, that it's some optimizer extrinsic to humanity (or, in the case of ems and biologically super-enhanced humans -- which I get the impression are edge cases in Bostrom's conceptual scheme -- the optimizer is at least extrinsic to some privileged subset of humanity). That's why it's outside the scope of the book Superintelligence to devote a lot of time to the risks of mundane totalitarianism, the promise of a world government, or the general class of cases where humanity just keeps gradually improving in intelligence without any (intragenerational) conflicts or value clashes -- even though it's hard to define 'superintelligence' in a way that excludes governments, corporations, humanity-as-a-whole, etc.

(I get the vague feeling in Superintelligence that Bostrom finds 'merely human' collective superintelligence relatively boring, except insofar as it affects the likely invincible inhuman singleton scenarios. It's not obvious to me that Hansonian em-world scenarios deserve multiple chapters while 'Networks and organizations' deserves a fairly dismissive page-and-a-half mention; but if you're interested in invincible singletons extrinsic to humanity, and especially in near-term AI pathways to such, it makes sense to see ems as more strategically relevant.)

Bostrom's secondary interest is the effects of enhancing humans' / machines' / institutions' general problem-solving abilities relative to ~2014 levels. So he does discuss things other than invincible singletons, and he does care about how human intelligence will change relative to today (much more so than he cares about superintelligence relative to, say, 900 BC). But I don't think this is the main focus.

Comment author: NxGenSentience 20 October 2014 11:23:12AM 0 points

Thanks for the very nice post.