timtyler comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM

Comment author: timtyler 26 November 2011 02:09:39AM

> It might ramp up with an increasing number of scientists, but there are clear diminishing marginal returns.

Perhaps eventually - but much depends on how you measure it. In dollar terms, scientists are doing fairly well: there are a lot of them, and they command reasonable salaries. They may not be Newtons or Einsteins, but society still seems prepared to pay considerable numbers of them at the moment. I figure that means there is still important stuff that needs discovering.

> [re: inductive inference] It seems to me incorrect to identify this as the only issue, or even as necessarily the more important one. If, for example, one could factor large numbers more efficiently, an AI could do a lot with that given even minimal internet access.
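The standard illustration of the quoted claim is public-key cryptography: RSA's security rests on the difficulty of factoring the modulus. A rough Python sketch (toy numbers chosen purely for illustration, not anything from the thread) of how a factored modulus gives up the private key:

```python
# Toy illustration: once the RSA modulus n is factored, the private key
# follows immediately.  The numbers below are tiny and hypothetical; real
# moduli are thousands of bits and resist this kind of trial division.

def trial_division(n):
    """Return a nontrivial factor of n by brute force (infeasible for large n)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n is prime

# Hypothetical toy RSA public key (n, e).
p, q = 1009, 1013
n, e = p * q, 65537

# An attacker who can factor n recovers p and q ...
p_found = trial_division(n)
q_found = n // p_found

# ... and from them the private exponent d, which breaks the key.
phi = (p_found - 1) * (q_found - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message
print("recovered private exponent:", d)
```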

As Eray Özkural once said: "Every algorithm encodes a bit of intelligence". However, some algorithms encode more of it than others. A powerful inductive inference engine could be used to solve factoring problems - but also a huge number of other problems.
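To make "inductive inference engine" a little more concrete, here is a deliberately tiny sketch: enumerate hypotheses in order of simplicity and keep the shortest one that reproduces the observations. This is illustrative only - a real engine of the Solomonoff-style kind would search something like a universal program space, whereas this toy version only searches affine rules x[t+1] = a*x[t] + b.

```python
# Toy inductive inference by hypothesis enumeration: prefer the "simplest"
# rule (smallest |a| + |b|) that is consistent with the observed sequence.

from itertools import product

def fits(a, b, seq):
    """True if the rule x[t+1] = a*x[t] + b reproduces the observed sequence."""
    return all(a * x + b == y for x, y in zip(seq, seq[1:]))

def induce(seq, bound=10):
    """Return the simplest affine rule consistent with seq, or None."""
    hypotheses = sorted(product(range(-bound, bound + 1), repeat=2),
                        key=lambda ab: abs(ab[0]) + abs(ab[1]))
    for a, b in hypotheses:
        if fits(a, b, seq):
            return a, b
    return None

observed = [1, 3, 7, 15, 31]          # hidden rule: x -> 2*x + 1
rule = induce(observed)
if rule is not None:
    a, b = rule
    print("induced rule: x ->", a, "* x +", b)
    print("predicted next element:", a * observed[-1] + b)
```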