Vladimir_Nesov comments on Preface to a Proposal for a New Mode of Inquiry - Less Wrong

Post author: Daniel_Burfoot 17 May 2010 02:11AM




Comment author: rhollerith_dot_com 18 May 2010 09:35:43PM  2 points

Last I checked, Robin Hanson put the probability of a hard takeoff at less than 1%.

And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., who have not already invested years of their lives in becoming AGI researchers), Robin Hanson is at one extreme end of the continuum of opinion on the subject.

Okay, but what exactly is the suggestion here? That the OP should not publish his work on AI?

Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?

If the OP wishes to make a career of AGI research, he can do so responsibly by affiliating himself with SIAI, the Future of Humanity Institute, or some other group with a responsible approach to AGI. They will probably share their insights with him only after a lengthy probationary period during which they vigorously check him for signs that he might do something irresponsible once they have taken him into their confidence. (ADDED. If it were me, I would look mainly for signs that the candidate might make a choice that tends to have a bad effect on the global situation but a positive effect on his or her scientific reputation or on some other personal agenda that humans typically care about.) And they will probably share their insights with him only after he has made a commitment to stay with the group for life.

Comment author: Vladimir_Nesov 18 May 2010 09:47:20PM  0 points

Lots of guesswork.