jimrandomh comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong

23 Post author: jimrandomh 09 June 2011 03:59AM




Comment author: jimrandomh 09 June 2011 06:42:57AM 12 points [-]

It's hard for me to imagine 100 good papers on the subject of AI safety (as opposed to, say, FAI design). Once you have 10 good papers with variations of "AGI is dangerous, please be careful!", what can you say in the 11th one that you haven't already said?

There's a lot to say at one remove - things like stability analyses of particular strategies for implementing goal systems, general safety measures such as fake network interfaces, friendliness analyses of hypothetical programs, and so on. A paper can impart the idea that safety is important without being directly about safety. (In fact, there's some reason to suspect that articles one layer removed may be better than articles that are directly about safety.)

Comment author: CarlShulman 09 June 2011 04:59:21PM 4 points [-]

This seems right. One additional thing to note, however, is that while it looks quite likely that good papers lead to improvements at the margin, high-publicity bad work can harm a developing field's prospects and reputation, and thus outsiders' desire to affiliate with it. Robin Hanson emphasizes this point a lot.

Comment author: khafra 09 June 2011 06:03:13PM 2 points [-]

Carl, are you saying that the non-SIAI-affiliated qualified academics among us should attempt to get high-publicity bad papers published advocating anything-goes AGI design, without regard for safety?

Comment author: CarlShulman 09 June 2011 09:14:20PM 6 points [-]

No, for many reasons, including the following:

  • Such things are very likely to backfire, and more so than they seem; we live in a world of substantial transparency, and dirty laundry gets found
  • Being the kind of people who would do such things would have bad effects and sabotage friendly cooperation with the very AI folk whose cooperation is so important
  • There is already a lot of stuff along these lines
  • Folk actually in a position to do such things would better use their limited time, reputation, and commitment on other projects
Comment author: timtyler 09 June 2011 09:29:30PM 4 points [-]

Being the kind of people who would do such things would have bad effects and sabotage friendly cooperation with the very AI folk whose cooperation is so important

My impression is that the bridges are mostly burned there. For years, the SIAI has been campaigning against other projects, in the hope of denying them mindshare and funding.

We have Yudkowsky saying: "And if Novamente should ever cross the finish line, we all die." and saying he will try to make various other AI projects "look merely stupid".

I expect the SIAI looks to most others in the field like a secretive competing organisation that likes to use negative marketing techniques. Implying that your rivals will destroy the world is an old marketing trick that takes us back to the Daisy Ad. This is not necessarily the kind of organisation one would want to affiliate with.