
Khyre comments on Cascades, Cycles, Insight... - Less Wrong

13 Post author: Eliezer_Yudkowsky 24 November 2008 09:33AM


Comment author: Khyre 25 November 2008 03:58:04AM 0 points [-]

Is the disagreement about the speed and scope of intelligence applied recursively to its own improvement (whether in ems or AIs)?

By "speed", I mean the equivalent of the neutron multiplication number in the Fermi analogy. Is Robin saying that, whatever it is, it won't be so large that a higher-than-estimated value would put improvement beyond human control? That is, improvement would still be on a timescale that allows intervention (as if Fermi had been off a bit, there would still have been time to shove in the control rods). In particular, the improvement rate won't be so large that it can't be modelled with traditional economic tools. As opposed to Eliezer, who thinks the situation is as if Fermi had actually put together an almost-bomb, and being off a bit would have resulted in a nuclear FOOM.
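A quick illustrative sketch of the analogy (the specific multiplication factors below are assumed for illustration, not taken from the comment): with growth multiplying by a factor k per generation, the number of generations needed to grow by a given amount scales as 1/ln(k), so a small absolute error in k matters little when k is barely above 1 but collapses the available reaction time when k is large.

```python
import math

def generations_to_grow(k, factor=1e6):
    """Generations for a quantity multiplying by k per generation to
    grow by `factor`: solve k**n == factor, so n = ln(factor)/ln(k)."""
    return math.log(factor) / math.log(k)

# Near-critical regime (k barely above 1, like a controlled pile):
# growth by a factor of a million takes ~1400 generations, so even a
# misestimated k leaves time to "shove in the control rods".
slow = generations_to_grow(1.01)

# Strongly supercritical regime (k = 2, like an almost-bomb):
# the same growth takes only ~20 generations.
fast = generations_to_grow(2.0)
```

The point of the sketch is the asymmetry: if the true k sits near 1, estimation errors shift the timescale only modestly; if it sits well above 1, the whole process is over before any estimate can be corrected.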

By "scope", I mean the eventual level of technology reachable once improvement has hit its limits. I guess in the Fermi analogy, this is the difference between the nuclear and electrochemical energy scales. Is there disagreement about what might eventually be achieved by very intelligent entities?
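The gap between those two energy scales can be made concrete with rough order-of-magnitude figures (assumed here for illustration, not stated in the comment): a typical chemical reaction releases on the order of 1 eV per event, while a single fission event releases on the order of 200 MeV.

```python
# Rough, order-of-magnitude energies per event (illustrative figures):
CHEMICAL_EV = 1.0       # typical electrochemical/chemical bond energy, ~1 eV
FISSION_EV = 200e6      # typical fission event, ~200 MeV

# The "scope" gap in the Fermi analogy: roughly eight orders of magnitude
# separate the two regimes, independent of how fast either one runs.
ratio = FISSION_EV / CHEMICAL_EV
```

This is the sense in which "scope" is a separate question from "speed": even granting slow growth, the ceiling of the new regime may sit many orders of magnitude above the old one.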

My intuition is that the hard takeoff is unlikely, but the size of the potential catastrophe is so huge that Friendliness is a worthwhile study.