timtyler comments on How can I reduce existential risk from AI? - Less Wrong

46 Post author: lukeprog 13 November 2012 09:56PM


Comment author: Alex_Altair 12 November 2012 06:54:16AM 14 points [-]

I expect significant strategic insights to come from the technical work (e.g. FAI math).

Interesting point. I'm worried that, while FAI math will help us understand what is dangerous or outsourceable from our particular path, many many other paths to AGI are possible, and we won't learn from FAI math which of those other paths are dangerous or likely.

I feel like one clear winning strategy is safety promotion. It seems that almost no harm can come from promoting safety ideas among AI researchers and investors. It also seems relatively easy, in that it requires only ordinary human skills of networking, persuasion, et cetera.

Comment author: timtyler 17 November 2012 06:13:32PM *  -1 points [-]

Looking at many existing risky technologies, the consumers and governments are the safety regulators, and manufacturers mostly cater to their demands. Consider the automobile industry, the aeronautical industry and the computer industry for examples.

Comment author: adamisom 17 November 2012 09:40:49PM 0 points [-]

Unfortunately, AGI isn't a "risky technology" where "mostly" is going to cut it in any sense, including adhering to expectations for safety regulation.

Comment author: timtyler 17 November 2012 10:54:09PM -1 points [-]

All the more reason to use resources effectively. Relatively few safety campaigns have attempted to influence manufacturers. What you tend to see instead are F.U.D. campaigns and negative marketing - where organisations attempt to smear their competitors by spreading negative rumours about their products. For example, here is Apple's negative marketing machine at work.

Comment author: Alex_Altair 17 November 2012 09:36:57PM 0 points [-]

Are you suggesting that we encourage consumers to make safety demands? I'm not sure this will work. It's possible that consumers are too reactionary for this to be helpful. Also, I think AI projects will be dangerous before reaching the consumer level. We want AGI researchers to think about safety before they even develop the theory.

Comment author: timtyler 17 November 2012 10:37:58PM -1 points [-]

It isn't clear that influencing consumer awareness of safety issues would have much effect. However, this suggests that influencing the designers may not be very effective either - they are often just giving users the safety level they are prepared to pay for.