JoshuaZ comments on Existential Risk and Public Relations - Less Wrong

Post author: multifoliaterose 15 August 2010 07:16AM

Comment author: JoshuaZ 15 August 2010 06:37:37PM * 2 points

If you estimate a high chance of this action destroying humanity, then trying to get through that bottleneck with a slightly better than 75% chance of surviving is almost certainly better than trying to stamp out such research and buying a few years at the cost of replacing that 75% with a near certainty of destruction. The only argument against that I can see, if one accepts the 75% number, is that forced delay until we have uploads might help matters, since uploads would have moral systems close to those of their original humans, and uploads will have a better chance at solving the FAI problem or, failing that, at counteracting any unFriendly or unfriendly AI.

Comment author: Jordan 15 August 2010 08:18:16PM 2 points

AI research is hard. It's not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years. Communication, collaboration, recruitment, funding... all of these would be much more difficult. What's more, since current AI researchers are open about their work, they would be the easiest to track after a ban, so any new AI research would have to come from green researchers.

That aside, I agree that a ban whose goal is simply indefinite postponement of AGI is unlikely to work (and I'm dubious of any ban in general). Still, it isn't hard for me to imagine that a ban could buy us 10 years, and that a similar amount of political might could also greatly accelerate an upload project.

The biggest argument against, in my opinion, is that the only way the political will could be formed is if the threat of AGI were already so imminent that a ban really would be worse than worthless.

Comment author: wedrifid 16 August 2010 03:31:59AM 1 point

AI research is hard. It's not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years.

The other thing to consider is just what the ban would achieve. I would expect it to lower the 75% chance by giving us the opportunity to go extinct in some other way before making a mistake with AI. When I say 'extinct', I include (d)evolving to an equilibrium of the sort Robin Hanson describes from time to time.

Comment author: NancyLebovitz 20 August 2010 04:04:19AM 0 points

How well defined is AI research? My assumption is that if AI is reasonably possible for humans to create, then it's going to become much easier as computers become more powerful and human minds and brains become better understood.

Comment author: timtyler 15 August 2010 08:24:25PM 0 points

A ban seems highly implausible to me. What is the case for considering it? Do you really think that enough people will become convinced that there is a significant danger?

Comment author: Jordan 15 August 2010 09:46:01PM 0 points

I agree; it seems highly implausible to me as well. However, the subject at hand (AI, AGI, FAI, uploads, etc.) is riddled with extremes, so I'm hesitant to throw out any possibility simply because it would be incredibly difficult.

Do you really think that enough people will become convinced that there is a significant danger?

See the last line of the comment you responded to.