John_Maxwell_IV comments on Preface to a Proposal for a New Mode of Inquiry - Less Wrong

Post author: Daniel_Burfoot 17 May 2010 02:11AM


Comment author: nhamann 18 May 2010 06:44:40AM  4 points

Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI? That the OP shouldn't work on AI at all, and should dedicate his efforts to advocating friendly AI discussion and research instead? If a major current barrier to FAI is understanding how intelligence even works to begin with, then this preliminary work (if it is useful) is going to be a necessary component of both regular AGI and FAI. Is the only problem you see, then, that it's going to be made publicly available? Perhaps we should establish a private section of LW for Top Secret AI discussion?

I apologize for being snarky, but I can't help but find it absurd that we should be worrying about the effects of LW articles on an unfriendly singularity, especially given that the hard takeoff model, to my knowledge, is still rather fuzzy. (Last I checked, Robin Hanson put the probability of hard takeoff at less than 1%. An unfriendly singularity is so bad an outcome that research and discussion about hard takeoff are warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)

Comment author: John_Maxwell_IV 18 May 2010 07:30:12PM  1 point

(Last I checked, Robin Hanson put the probability of hard takeoff at less than 1%. An unfriendly singularity is so bad an outcome that research and discussion about hard takeoff are warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)

Even if the probability of hard takeoff were only 0.1%, it would still be too high for me to want there to be public discussion of how one might build an AI.

http://www.nickbostrom.com/astronomical/waste.html

Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
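A minimal back-of-the-envelope sketch of the arithmetic behind that last sentence, assuming (as the quoted passage does) that achievable value is roughly proportional to the remaining usable lifespan of the accessible universe; the figure T = 10^9 years is an assumed conservative round number for "billions of years", not a value taken from the paper:

```python
# Back-of-the-envelope version of the quoted trade-off (assumed round numbers).
# Model: total achievable value is proportional to the usable lifespan T of the
# accessible universe, so a delay of D years forfeits roughly the fraction D/T
# of that value, while reducing existential risk by dp gains the fraction dp
# in expectation.

T = 10**9              # assumed lower bound on "billions of years", in years
risk_reduction = 0.01  # one percentage point of existential-risk reduction

# The delay is worth accepting whenever D/T < risk_reduction,
# i.e. whenever D < risk_reduction * T.
break_even_delay = risk_reduction * T
print(f"Break-even delay: {break_even_delay:,.0f} years")  # 10,000,000 years
```

Under these assumptions, any delay shorter than about ten million years is a price worth paying for a one-percentage-point reduction in existential risk, which is the quoted conclusion.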