
timtyler comments on What is the best compact formalization of the argument for AI risk from fast takeoff? - Less Wrong Discussion

11 Post author: utilitymonster 13 March 2012 01:44AM


Comment author: timtyler 13 March 2012 08:10:47PM *  -1 points
  1. At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to a level of superintelligence. (Fast takeoff)

...unless people want it to go slowly. It isn't a law of nature that things will go quickly. It seems likely that a more unified society will be able to progress as slowly as it wants to. There are plenty of proposals to throttle development - via "nannies" or other kinds of safety valve.

Insistence on a rapid takeoff arises from a position of technological determinism. It ignores sociological factors.

IMO, the "rapid takeoff" idea should probably be seen as a fundraising ploy. It's big, scary, and it could conceivably happen - just the kind of thing for stimulating donations.

Comment author: utilitymonster 14 March 2012 12:04:08AM 1 point

IMO, the "rapid takeoff" idea should probably be seen as a fundraising ploy. It's big, scary, and it could conceivably happen - just the kind of thing for stimulating donations.

If fundraising were the goal, it seems SIAI would have more effective methods available, e.g. simply capitalizing on "Rah Singularity!". I therefore find this objection somewhat implausible.