RobinHanson comments on Preface to a Proposal for a New Mode of Inquiry - Less Wrong

Post author: Daniel_Burfoot 17 May 2010 02:11AM




Comment author: rhollerith_dot_com 18 May 2010 09:35:43PM

Last I checked, Robin Hanson put probability of hard takeoff at less than 1%.

And among writers actually skilled at general rationality who do not have a large personal vested interest in one particular answer (e.g., from having already invested years of their lives in becoming AGI researchers), Robin Hanson is at one extreme end of the continuum of opinion on the subject.

Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI?

Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?

If the OP wishes to make a career in AGI research, he can do so responsibly by affiliating himself with SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI. They will probably share their insights with him only after a lengthy probationary period during which they vigorously check him for signs that he might do something irresponsible once they have taken him into their confidence. (ADDED. If it were me, I would look mainly for signs that the candidate might make a choice which tends to have a bad effect on the global situation, but a positive effect on his or her scientific reputation or on some other personal agenda that humans typically care about.) And they will probably share their insights with him only after he has made a commitment to stay with the group for life.

Comment author: RobinHanson 21 May 2010 10:52:56AM

For rational people skeptical about hard takeoff, consider the Interim Report from the Panel Chairs, AAAI Presidential Panel on Long-Term AI Futures. Most economists I've talked to are also quite skeptical, much more so than I. Dismissing such folks because they haven't read enough of your writings or attended your events seems a bit biased to me.

Comment author: rhollerith_dot_com 21 May 2010 11:19:26AM

Hi Robin!

If a professional philosopher or an economist gives his probability that AGI researchers will destroy the world, I think a curious inquirer should check for evidence that the philosopher or economist has actually learned the basics of the skills and domains of knowledge the AGI researchers are likely to use.

I am pretty sure that you have, but I do not know that, e.g., Daniel Dennett has, excellent rationalist though he is. All I was saying is that my interlocutor should check that before deciding how much weight to give Dennett's probability.

Comment author: RobinHanson 21 May 2010 05:37:22PM

But in the above you explicitly choose to exclude AGI researchers. Now you also want to exclude those who haven't read a lot about AGI? Seems like you are trying to exclude as irrelevant everyone who isn't an AGI amateur like you.

Comment author: jimrandomh 21 May 2010 06:30:58PM

I guess it depends on where exactly you set the threshold. Require too much knowledge and the pool of opinions, and the diversity of the sources of those opinions, will be too small (i.e., just "AGI amateurs"). On the other hand, the minimum amount of research required to properly understand the AGI issue is substantial, and if someone demonstrates a serious lack of understanding, such as claiming that AI will never be able to do something that narrow AIs can do already, then I have no problem excluding their opinion.

Comment author: CarlShulman 21 May 2010 06:27:45PM

Most economists I've talked to are also quite skeptical, much more so than I.

About advanced AI being developed, extremely rapid economic growth upon development, or local gains?