timtyler comments on How can I reduce existential risk from AI? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (92)
Interesting point. I'm worried that, while FAI math will help us understand what is dangerous or outsourceable from our particular path, many, many other paths to AGI are possible, and FAI math won't tell us which of those other paths are dangerous or likely.
I feel like one clear winning strategy is safety promotion. It seems that almost no harm can come from promoting safety ideas among AI researchers and investors. It also seems relatively easy, in that it requires only ordinary human skills of networking, persuasion, et cetera.
Looking at many existing risky technologies, the consumers and governments are the safety regulators, and manufacturers mostly cater to their demands. Consider the automobile industry, the aeronautical industry and the computer industry for examples.
Unfortunately, AGI isn't a "risky technology" where "mostly" is going to cut it in any sense, including adhering to expectations for safety regulation.
All the more reason to use resources effectively. Relatively few safety campaigns have attempted to influence manufacturers. What you tend to see instead are F.U.D. campaigns and negative marketing, where organisations attempt to smear their competitors by spreading negative rumours about their products. For example, here is Apple's negative marketing machine at work.
Are you suggesting that we encourage consumers to make safety demands? I'm not sure this will work. It's possible that consumers are too reactionary for this to be helpful. Also, I think AI projects will be dangerous before reaching the consumer level. We want AGI researchers to think about safety before they even develop theory.
It isn't clear that influencing consumer awareness of safety issues would have much effect. However, this suggests that influencing the designers may not be very effective either - they are often just giving users the safety level they are prepared to pay for.