
private_messaging comments on Superintelligence 6: Intelligence explosion kinetics - Less Wrong Discussion

Post author: KatjaGrace, 21 October 2014 01:00AM




Comment author: private_messaging, 27 October 2014 06:03:55PM, 3 points

It may also be worth noting that there's no particular reason to expect a full-blown AI that wants to do real-world things to also be the first good algorithmic optimizer (or hardware optimizer), for example. The first good algorithmic optimizer can be run on its own source code, performing an entirely abstract task, without having to do any of the calculations relating to its hardware substrate, the real world, and so on; those calculations are an enormous extra hurdle.

It seems to me that the issue is that the only way some people can imagine this "explosion" happening is by imagining fairly anthropomorphic software performing a task monstrously more complicated than a mere algorithmic-optimization "explosion" (in the sense that algorithms are replaced with their theoretically ideal counterparts, or something close to them; for every task there's an optimal algorithm, and you can't do better than that algorithm).