TheOtherDave comments on Think Twice: A Response to Kevin Kelly on ‘Thinkism’ - Less Wrong

6 Post author: MichaelAnissimov 07 November 2012 06:07AM




Comment author: A113 07 November 2012 09:13:36PM  -1 points

I agree with both you and Kelly most of the time, you more than him. I did think this part required a nitpick:

> To me, at first impression, the notion that a ten million times speedup would have a negligible effect on scientific innovation or progress seems absurd. It appears obvious that it would have a world-transforming impact.

To me, it appears obvious that it would be capable of having a world-transforming impact. Just because it can doesn't mean it will, though I certainly wouldn't want to assume it won't.

If I became superintelligent tomorrow, I probably wouldn't significantly change the world. Not on a Singularity scale, not right away, and not just because I could. Would you? My point there is that you can't assume that because the first superintelligence can construct nanobots and take over the world, it therefore will.

Comment author: TheOtherDave 07 November 2012 09:15:45PM  1 point

A lot depends on what we mean by "superintelligent." But yes, there's a level of intelligence above which I'm fairly confident that I would change the world, as rapidly as practical, because I can. Why wouldn't you?

Comment author: A113 07 November 2012 09:34:11PM  1 point

Not just because I can. Maybe for other reasons, like the fact that I still care about the punier humans and want to make it better for them. That depends on preferences that an AI might or might not have.

It's not really about what I would do; it's that we don't know what an arbitrary superintelligence will or won't decide to do.

(I'm thinking of "superintelligence" as "smart enough to do more or less whatever it wants by sheer thinkism," which I've already said I agree is possible. Is this nonstandard?)

Comment author: TheOtherDave 07 November 2012 11:38:23PM  2 points

Sure, "because I have preferences which changing the world would more effectively maximize than leaving it as it is" is more accurate than "because I can". And, sure, maybe an arbitrary superintelligence would have no such preferences, but I'm not confident of that.

(Nope, it's standard (locally).)