David_Bolin comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM

Comment author: David_Bolin 23 July 2015 10:53:07AM 8 points

Ramez Naam discusses it here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/

I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but it seems to me a question of degree: they could become self-improving in more and more respects, and at no point would I expect a singularity or a world takeover.

I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html

Comment author: [deleted] 23 July 2015 05:40:47PM *  8 points

On the same note, but probably already widely known, Scott Aaronson on "The Singularity Is Far" (2008): http://www.scottaaronson.com/blog/?p=346

Comment author: TheAncientGeek 23 July 2015 03:13:07PM 1 point

Now, that's what I was looking for.