XiXiDu comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM




Comment author: XiXiDu 25 November 2011 07:56:41PM 0 points

You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.

From the very beginning I have wondered why nobody has written down what speaks against that possibility. That is one of the reasons I even bothered to start arguing against it myself -- the trigger was the deletion of a certain post, which made me realize that there is a lot more to it (socially and psychologically) than to the average research project -- even though I knew very well that I have neither the necessary background nor the patience to do so in a precise and elaborate manner.

Do people think that a skeptical inquiry into, and counterarguments against, an intelligence explosion are not valuable?

Comment author: JoshuaZ 28 November 2011 07:50:57PM 0 points

You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.

I don't know about that. The primary issue I've talked about as limiting an intelligence explosion is computational complexity. That's a necessarily technical area, and almost all the major boundaries are conjectural. If P = NP in a practical way, then an intelligence explosion may be quite easy. There's also a major danger that in thinking/arguing that this is relevant, I may be engaging in motivated cognition: there's an obvious bias toward thinking that things close to one's own field are somehow relevant.
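To make the complexity point concrete, here is a minimal sketch (an illustration of the general barrier, not anything from the original exchange): a brute-force satisfiability check for a CNF formula. Its running time grows as 2^n in the number of variables, which is the kind of scaling wall that would constrain a self-improving system -- unless SAT-like problems turn out to be easy in practice (e.g. if P = NP held in a practical way).

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by exhaustive search.

    clauses: list of clauses; each clause is a list of ints,
             where k means variable k is true and -k means false.
    Tries all 2**n_vars assignments -- exponential in n_vars.
    """
    for assignment in product([False, True], repeat=n_vars):
        # An assignment works if every clause has at least one true literal.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR x2) AND (NOT x1 OR x2) is satisfied by x2 = True
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # True
# x1 AND NOT x1 is unsatisfiable
print(brute_force_sat([[1], [-1]], 1))        # False
```

Known SAT solvers do far better than this on many practical instances, but no known algorithm escapes exponential worst-case behavior; whether that matters for an intelligence explosion is exactly the conjectural part.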