Vladimir_Nesov comments on Heading off a near-term AGI arms race - Less Wrong

Post author: lincolnquirk 22 August 2012 02:23PM


Comment author: Vladimir_Nesov 22 August 2012 11:49:40PM 0 points

> However, it's a very low probability that it'll ever get to that point.

What specifically do you consider low probability? That an uFAI will ever be launched, or that there will be an advance high credibility warning?

Comment author: gwern 22 August 2012 11:59:00PM 9 points

I'd argue the latter. It's hard to imagine how you could know in advance that a uFAI has a high chance of working, rather than being one of thousands of ambitious AGI projects that simply fail.

(Douglas Lenat comes to you, saying that he's finished a powerful fully general self-modifying AI program called Eurisko, which has done very impressive things in its early trials, so he's about to run it on some real-world problems on a supercomputer with Internet access; and by the way, he'll be alone all tomorrow fiddling with it, would you like to come over...)

Comment author: Xachariah 23 August 2012 01:47:25AM 1 point

Sorry, I was imprecise. I consider it likely that we'll eventually be able to build uFAI, but unlikely that any particular project will produce one. Moreover, we probably won't get appreciable advance warning of uFAI, because researchers who knew they were building a uFAI wouldn't build one.

Thus, we have to adopt a general strategy that doesn't target any specific research group. Sabotage does not scale well, and would only drive research underground while imposing social costs on us in the meantime. The best bet, then, is to promote awareness of uFAI risks and try to have friendliness theory completed by the time the first AGI goes online. Not surprisingly, this seems to be what SIAI is already doing. Discussion of sabotage just harms that strategy.