CarlShulman comments on After critical event W happens, they still won't believe you - Less Wrong

Post author: Eliezer_Yudkowsky 13 June 2013 09:59PM




Comment author: Qiaochu_Yuan 14 June 2013 12:06:59AM 9 points

My version of Example 2 sounds more like "at some point, Watson might badly misdiagnose a human patient, or a bunch of self-driving cars might cause a terrible accident, or more inscrutable algorithms will do more inscrutable things, and this sort of thing might cause public opinion to turn against AI entirely in the same way that it turned against nuclear power."

Comment author: CarlShulman 14 June 2013 03:00:25AM 1 point

I think that people will react more negatively to harms than they react positively to benefits, but I would still expect the impacts of broadly infrahuman AI to be strongly skewed towards the positive. Accidents might lead to more investment in safety, but a "turn against AI entirely" situation seems unlikely to me.

Comment author: Eliezer_Yudkowsky 14 June 2013 04:12:40PM 3 points

You could say the same about nuclear power. It's conceivable that with enough noise about "AI is costing jobs," the broadly positive impacts could be viewed as ritually contaminated, à la nuclear power. Hm, now I wonder if I should actually publish my "Why AI isn't the cause of modern unemployment" writeup.

Comment author: Yosarian2 17 June 2013 04:10:19PM 0 points

I don't know about that; I think that a lot of the people who think that AI is "costing jobs" view that as a positive thing.