JoshuaFox comments on Superintelligence 8: Cognitive superpowers - Less Wrong Discussion

7 Post author: KatjaGrace 04 November 2014 02:01AM

Comments (95)

Comment author: JoshuaFox 07 November 2014 08:48:22AM 0 points

To make it a bit clearer: A financial AI that somehow never developed the ability to do anything beyond place buy and sell orders could still have catastrophic effects, if it hyperoptimized its trading to the point that it gained some very large share of the world's assets. This would have disruptive effects on the economy, and depending on the AI's goals, that disruption would not stop the AI from hoovering up every asset.

Comment author: KatjaGrace 09 November 2014 05:30:46AM 2 points

Note that this relies on the one AI being much better than the competition, so similar considerations apply as in the usual case of a more general AI suddenly becoming very powerful. One difference is that an intelligence explosion in this case would proceed via investing money in hiring more labor, rather than via the AI itself laboring.