I wrote a blog post responding to Kevin Kelly that I'm fairly happy with. It summarizes some of the reasons why I think superintelligence is likely to be a big deal. If you read it, please post your comments here.


I think one way that people get a little caught up when thinking about the possibility of superintelligence is via the typical mind fallacy. Someone says "a superintelligence could theoretically do [super hard thing]" and my intuitive response is: "Yeah, but how plausible is that? Why would it even bother? That seems like it would take a lot of effort, and it probably wouldn't even work." By default I anchor on my own mind's capabilities (and motivation, even), instead of trying to think in information-theoretic terms, or figuring out what does and doesn't violate the laws of physics.

This might be an artifact of what Kelly talks about, but I think the focus on immortality towards the beginning is too strong and not helpful. Speculating about that technology distracts from the more general idea of the power of superintelligence, and it isn't actually necessary, or even a first step, for how recursive AI will transform the world.

Other than that, I like the essay on the whole.

There's also a useful enhancement that isn't addressed: meta-research becomes hugely faster and more useful with massively increased speed and processing power, and it doesn't require experimentation. A hyperintelligence can aggregate far more of the data that already exists than we can, and apply it more usefully.

> A hyperintelligence can aggregate far more of the data that already exists than we can, and apply it more usefully.

People often seem to miss that difference. Machine intelligence isn't just us, but faster. The vanity of the Turing test struck me the other day: a machine is intelligent when it can pass for one of us. Are we supposed to be the be-all and end-all of intelligence?

A vastly greater working memory, attention span, etc., can make problems of search and integration much easier. At least at first, machine intelligence will be most effective when used in collaboration with people. We've already got a lot of human-style intelligence in people - I'd rather get new and complementary abilities.

Thanks. I address immortality early on because it's one of the main points Kelly makes throughout his short piece. I appreciate your point about meta-research, but my intuition says it might be even harder for many people to grasp than the points in the post. Can you name concrete instances where meta-research led to breakthroughs?

One point I would like mentioned (maybe it is already covered as part of other arguments) is that the world is getting a whole lot more legible, in James C. Scott's sense. There is a greater and greater tendency to build systems that increase the ability of any superintelligence to take over the world. This is not the strongest rebuttal to Kevin Kelly's argument, but it is one that can be made.

It's true that superintelligence is likely to be a big deal. It's interesting to see where intelligence works worst, though. As civilization accelerates, its slowest aspects will be the ones holding things back.

I think the LHC is our current best example. There's no "Moore's law" for particle accelerators. Another example involves understanding large, complex systems - such as predicting the weather or stock market crashes. Of course intelligence helps with such things - just not as much as in some other areas.

Kelly's argument seems silly, bordering on stupid. It's interesting to wonder what drove him to it.


I agree with both you and Kelly most of the time, you more than him. I did think this part required a nitpick:

> To me, at first impression, the notion that a ten million times speedup would have a negligible effect on scientific innovation or progress seems absurd. It appears obvious that it would have a world-transforming impact.

To me, it appears obvious that it would be capable of having a world-transforming impact. Just because it can doesn't mean it will, though I certainly wouldn't want to assume it won't.

If I became superintelligent tomorrow, I probably wouldn't significantly change the world. Not on a Singularity scale, not right away, and not just because I could. Would you? My point there is that you can't assume that because the first superintelligence can construct nanobots and take over the world, it therefore will.

A lot depends on what we mean by "superintelligent." But yes, there's a level of intelligence above which I'm fairly confident that I would change the world, as rapidly as practical, because I can. Why wouldn't you?


Not just because I can. Maybe for other reasons, like the fact that I still care about the punier humans and want to make things better for them. That depends on preferences that an AI might or might not have.

It's not really about what I would do; it's the fact that we don't know what an arbitrary superintelligence will or won't decide to do.

(I'm thinking of "superintelligence" as "smart enough to do more or less whatever it wants by sheer thinkism," which I've already said I agree is possible. Is this nonstandard?)

Sure, "because I have preferences which changing the world would more effectively maximize than leaving it as it is" is more accurate than "because I can". And, sure, maybe an arbitrary superintelligence would have no such preferences, but I'm not confident of that.

(Nope, it's standard (locally).)
