
ikrase comments on What bothers you about Less Wrong? - Less Wrong Discussion

Post author: Will_Newsome, 19 May 2011 10:23AM




Comment author: ikrase, 04 January 2013 10:37:57AM, 0 points

On the subject of powerful self-improving AI, there does not seem to be enough discussion of real-world limitations, or of chances for manual override, on (1) the AI acquiring computational power and, more importantly, (2) the AI manipulating the outside world with limited information and no dedicated or trustworthy manipulators, or with manipulators weaker than a Nanotech God. I no longer believe that (1) is a major (or trustworthy!) limit on FOOM, since an AI could run on rented supercomputers, eat the Internet, etc., but (2) does not seem to get much consideration. I've seen claims that an AI without too many communication restrictions might anonymously order DNA sequences and have some idiot mix them, bootstrapping self-improving biology all the way up to nanobots, but I haven't seen anything I could really call a threat assessment.