JoshuaZ comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

3 Post author: XiXiDu 14 November 2011 11:40AM


Comment author: JoshuaZ 25 November 2011 06:54:40PM 0 points

In practice, much is down to how fast scientific and technological progress can accelerate. It seems fairly clear that progress is autocatalytic - and that the rate of progress ramps up with the number of scientists, which does not have hard limits.

It might ramp up with an increasing number of scientists, but there are clear diminishing marginal returns. There are more scientists today at some major research universities than there were in the entire world at any point in the 19th century. Yet we don't have people constantly coming up with ideas as big as, say, evolution or Maxwell's equations. The low-hanging fruit gets picked quickly.

Algorithm limits seem to apply more to the question of how smart a computer program can become in an isolated virtual world.

Matt Mahoney has looked at that area - though his results so far do not seem terribly interesting to me.

I agree that Mahoney's work isn't so far very impressive. The models used are simplistic and weak.
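
To make "simplistic and weak" concrete: Mahoney's line of work treats text compression, i.e. next-symbol prediction, as a proxy for intelligence. A deliberately minimal model in that spirit might look like the toy order-1 character predictor below - the function names and example string are my own illustration, not Mahoney's actual code:

```python
from collections import defaultdict

def order1_predictor(text):
    """Toy order-1 context model: predict each character from the
    single character before it, by counting which successor was
    most common in the training text. Models of roughly this kind
    (with longer contexts and mixing) underlie statistical text
    compressors."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(text, text[1:]):
        counts[prev][cur] += 1

    def predict(context_char):
        options = counts.get(context_char)
        if not options:
            return None  # context never seen in training
        return max(options, key=options.get)

    return predict

# Train on a tiny sample; after 't' the model has only ever seen 'h'.
predict = order1_predictor("the theory then thews")
```

The weakness is obvious: the model sees one character of context and nothing else, which is roughly the sense in which such models are simplistic.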

I think one math problem is much more important to progress than all the other ones: inductive inference.

Many forms of induction are NP-hard and some versions are NP-complete, so these sorts of limits are clearly relevant. Other closely related forms can be modeled in terms of recognizing pseudorandom number generators. But it seems to me incorrect to identify this as the only issue, or even as necessarily the more important one. If, for example, one could factor large numbers more efficiently, an AI could do a lot with that given even minimal internet access.
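
To make the factoring point concrete: the best known classical factoring algorithms take super-polynomial time in the bit-length of the number, and widely deployed cryptosystems such as RSA depend on that. A naive trial-division sketch (illustrative only, not a serious factoring method) shows the brute-force baseline:

```python
def trial_division(n):
    """Factor n by trial division. The loop runs up to sqrt(n)
    candidate divisors, which is exponential in the bit-length
    of n - this is why factoring the large numbers used in
    cryptography is infeasible this way, and why an efficient
    factoring algorithm would be such a powerful capability."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining n is prime
    return factors
```

For a 2048-bit RSA modulus the loop bound is around 2**1024 candidates, which is the gap an "efficient factoring" breakthrough would close.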

Comment author: XiXiDu 25 November 2011 07:56:41PM 0 points

You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.

Since the very beginning I have wondered why nobody has written down what speaks against that possibility. That is one of the reasons I bothered to start arguing against it myself -- the trigger was the deletion of a certain post, which made me realize that there is a lot more to this (socially and psychologically) than to the average research project -- even though I knew very well that I don't have the necessary background, or the patience, to do so in a precise and elaborate manner.

Do people think that a skeptical inquiry into, and counterarguments against, an intelligence explosion are not valuable?

Comment author: JoshuaZ 28 November 2011 07:50:57PM 0 points

You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.

I don't know about that. The primary issue I've talked about as limiting an intelligence explosion is computational complexity. That's necessarily a technical area, and almost all the major boundaries are conjectural. If P=NP in a practical way, then an intelligence explosion may be quite easy. There's also a real danger that in thinking and arguing that this is relevant, I'm engaging in motivated cognition: there's an obvious bias toward thinking that things close to one's own field are somehow relevant.
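
The "P=NP in a practical way" caveat can be made concrete with the canonical NP-complete problem, Boolean satisfiability. A brute-force solver (my own minimal sketch, using the standard integer encoding of literals) works fine for tiny instances but hits the 2**n wall as the variable count grows - and that wall only matters if no radically better algorithm exists:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability by trying all 2**n_vars assignments.
    Each clause is a list of ints: i means variable i is true,
    -i means variable i is false; a formula is a list of clauses
    (conjunctive normal form). The exhaustive loop below is the
    exponential cost that a practical P=NP result would remove."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True  # found a satisfying assignment
    return False
```

At n_vars = 80 this loop already exceeds any feasible computation, which is the kind of boundary the complexity argument leans on - conjecturally, since P vs NP is open.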

Comment author: timtyler 26 November 2011 02:09:39AM 0 points

It might ramp up with increasing the number of scientists, but there are clear diminishing marginal returns.

Perhaps eventually - but much depends on how you measure it. In dollar terms, scientists are doing fairly well: there are a lot of them and they command reasonable salaries. They may not be Newtons or Einsteins, but society still seems prepared to pay them in considerable numbers at the moment. I figure that means there is still important stuff that needs discovering.

[re: inductive inference] it seems to me incorrect to identify this as the only issue, or even as necessarily the more important one. If, for example, one could factor large numbers more efficiently, an AI could do a lot with that given even minimal internet access.

As Eray Özkural once said: "Every algorithm encodes a bit of intelligence". However, some algorithms encode more of it than others. A powerful inductive inference engine could be used to solve factoring problems - but also a huge number of other problems.