torekp comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong

Post author: Punoxysm 07 March 2014 07:24AM


Comment author: torekp 08 March 2014 07:11:43PM 0 points

I like this post, and I am glad you included the section on political issues. But I worry that you underestimate those issues. The developers of AGI will probably place much less emphasis on safety, and more on rapid progress, than you seem to anticipate. Militaries will have enemies, and even corporate research labs will have competitors. I don't see a bright future for the researcher who plans the slow, cautious approach you have outlined.

Right now the US military is developing autonomous infantry robots. The AI in them in no way counts as AGI, but nearly any step along the road to AGI, or at least any few steps taken together, would probably improve the performance of such devices. So I doubt we have much time to play in sandboxes.

Comment author: Punoxysm 08 March 2014 08:14:24PM 0 points

The Space Race was high-pressure, but it placed a relatively high emphasis on safety on the US side, and even the Russians were doing their best to make sure missions didn't fail too often. A government-sponsored effort would place a high emphasis on making sure the source and details of the project weren't leaked in a way that could be copied easily (which is already a degree of safety), and it would have the resources available to take any security measures that wouldn't slow things down tremendously.

Most DARPA/military research projects do receive extensive testing and attempt to ensure reliability. Even when that testing is done poorly or unsuccessfully, reliability is a huge consideration in development.

But yes, there would be a certain level of urgency, which might keep them from waiting for the best possible security. Which is why intermediate approaches matter: ones grounded in existing technologies, or in technologies we can extrapolate will exist.