NancyLebovitz comments on Open Thread: March 2010, part 3 - Less Wrong

3 Post author: RobinZ 19 March 2010 03:14AM


Comment author: NancyLebovitz 19 March 2010 02:30:49PM 3 points [-]

I've been thinking about that, and I believe you're right that laws typically don't get passed against hypothetical harms, and also that AI research isn't the kind of thing that's enough fun to think about to set off a moral panic.

However, I'm not sure whether real harm that society can recover from is a possibility.

I'm basing the possibility on two premises-- that a lot of people thinking about AI aren't as concerned about the risks as SIAI is, and that computer programs frequently get developed to the point where they work somewhat.

Suppose that a self-improving AI breaks the financial markets-- the response might just be efforts to protect the markets, or AI itself might become an issue.

Comment author: cousin_it 22 March 2010 02:35:18PM *  1 point [-]

laws typically don't get passed against hypothetical harms

Witchcraft? Labeling of GM food?

Comment author: NancyLebovitz 22 March 2010 03:04:01PM 2 points [-]

Those are legitimate examples. I think overreaction to rare events (like the difficulties added to travel and the damage to the rights of suspects after 9/11) is more common, but I can't prove it.

Comment author: RobinZ 22 March 2010 02:49:32PM *  0 points [-]

Some kinds of GM food cause different allergic reactions than their ancestral cultivars. I think you can justifiably care to a similar extent as you care about the difference between a Gala apple and a Golden Delicious apple.

Edit: Granted, most of the reaction is very much overblown.