timtyler comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM




Comment author: JoshuaZ 25 November 2011 06:54:40PM 0 points

In practice, much is down to how fast scientific and technological progress can accelerate. It seems fairly clear that progress is autocatalytic - and that the rate of progress ramps up with the number of scientists, which has no hard limits.

It might ramp up with an increasing number of scientists, but there are clear diminishing marginal returns. There are more scientists today at some major research universities than there were anywhere at any point in the 19th century. Yet we don't have people constantly coming up with ideas as big as, say, evolution or Maxwell's equations. The low-hanging fruit gets picked quickly.

Algorithmic limits seem to apply more to the question of how smart a computer program can become in an isolated virtual world.

Matt Mahoney has looked at that area - though his results so far do not seem terribly interesting to me.

I agree that Mahoney's work isn't so far very impressive. The models used are simplistic and weak.

I think one math problem is much more important to progress than all the other ones: inductive inference.

Many forms of induction are NP-hard and some versions are NP-complete, so these sorts of limits are clearly relevant. Other closely related forms can be modeled in terms of recognizing pseudorandom number generators. But it seems to me to be incorrect to identify this as the only issue, or even as necessarily the more important one. If, for example, one could factor large numbers more efficiently, an AI could do a lot with that if it got minimal internet access.
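To make the factoring point concrete: the naive approach, trial division, takes time roughly exponential in the bit length of the number, which is why large RSA-style moduli are safe from it. A sketch (the function name `trial_factor` is mine, not anything from the thread):

```python
def trial_factor(n):
    """Factor n by trial division - about sqrt(n) steps in the worst
    case, i.e. exponential in the bit length of n.  An AI with a
    genuinely efficient factoring algorithm would sidestep this wall,
    which is what makes the hypothetical above significant."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining cofactor is prime
    return factors

print(trial_factor(91))  # [7, 13]
```

This is fine for small numbers, but for a 2048-bit modulus the loop would need on the order of 2^1024 iterations - hence the interest in algorithms that do better.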

Comment author: timtyler 26 November 2011 02:09:39AM 0 points

It might ramp up with increasing the number of scientists, but there are clear diminishing marginal returns.

Perhaps eventually - but much depends on how you measure it. In dollar terms, scientists are doing fairly well - there are a lot of them and they command reasonable salaries. They may not be Newtons or Einsteins, but society still seems prepared to pay them in considerable numbers at the moment. I figure that means there is still important stuff that needs discovering.

[re: inductive inference] it seems to me to be incorrect to identify this as the only issue, or even as necessarily the more important one. If, for example, one could factor large numbers more efficiently, an AI could do a lot with that if it got minimal internet access.

As Eray Ă–zkural once said: "Every algorithm encodes a bit of intelligence". However, some algorithms encode more of it than others. A powerful inductive inference engine could be used to solve factoring problems - but also a huge number of other problems.