Brian_Tomasik comments on International cooperation vs. AI arms race - Less Wrong

Post author: Brian_Tomasik 05 December 2013 01:09AM 15 points

Comment author: Brian_Tomasik 07 December 2013 05:47:37PM 1 point

Could be, although remember that everyone else would also prefer that only they, their friends, and their trusted figures be in the CEV. Including more people is done for reasons of compromise, not necessarily because of its intrinsic value.

Isaac_Davis made a good point that a true CEV might not depend that sensitively on which country it was seeded from. The bigger danger I had in mind is the (much more likely) case of an imperfect CEV, such as ordinary democracy. In that case, excluding the Chinese could lead to more parochial outcomes, and the Chinese would then also have more reason to worry about a US AI.

Comment author: DanArmak 07 December 2013 06:19:37PM 1 point

> Could be, although remember that everyone else would also prefer that only they, their friends, and their trusted figures be in the CEV. Including more people is done for reasons of compromise, not necessarily because of its intrinsic value.

That's my point. If you're funding a small-team, top-secret AGI project, you can keep your seed community small too; you don't need to compromise. Especially if you're consciously racing to finish your project before any rivals do, you won't want to include those rivals in your CEV.

Comment author: Brian_Tomasik 07 December 2013 11:57:21PM 0 points

Well, what does that imply your fellow prisoners in this one-shot prisoner's dilemma are deciding to do in the secrecy of their basements? Maybe our best bet is to change the payoffs so that we get a different game from a one-shot PD, via explicit coordination agreements and surveillance to enforce them.
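As a rough sketch of that payoff-change idea (the payoff numbers and the enforcement penalty below are illustrative assumptions, not anything from this thread), a verifiable penalty on defection can move the unique equilibrium of a one-shot PD from mutual defection to mutual cooperation:

```python
# Illustrative sketch: an enforced penalty for defecting changes the
# equilibrium of a one-shot prisoner's dilemma. All numbers are assumptions.
from itertools import product

C, D = "cooperate", "defect"

# Classic PD payoffs as (row player's payoff, column player's payoff).
base_payoffs = {
    (C, C): (3, 3),  # mutual cooperation
    (C, D): (0, 5),  # sucker's payoff vs. temptation
    (D, C): (5, 0),
    (D, D): (1, 1),  # mutual defection
}

def with_enforcement(payoffs, penalty):
    """Subtract a penalty from any player who defects, modeling a
    coordination agreement backed by surveillance."""
    return {
        (a, b): (pa - penalty * (a == D), pb - penalty * (b == D))
        for (a, b), (pa, pb) in payoffs.items()
    }

def pure_nash_equilibria(payoffs):
    """Strategy pairs where neither player gains by unilaterally switching."""
    return [
        (a, b)
        for a, b in product((C, D), repeat=2)
        if all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in (C, D))
        and all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in (C, D))
    ]

print(pure_nash_equilibria(base_payoffs))
# [('defect', 'defect')]
print(pure_nash_equilibria(with_enforcement(base_payoffs, penalty=3)))
# [('cooperate', 'cooperate')]
```

With the classic payoffs, defection strictly dominates. Once the enforced penalty exceeds the temptation gain (here 5 - 3 = 2), cooperating becomes each player's best response, and mutual cooperation is the unique pure-strategy equilibrium; the game is no longer a prisoner's dilemma at all.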

Comment author: DanArmak 08 December 2013 09:04:27PM 3 points

The surveillance would have to be good enough to prevent every attempt by the most powerful governments to develop, in secret, something that may (eventually) require nothing more than a few programmers in a few rooms running code.

Comment author: Brian_Tomasik 09 December 2013 01:14:34AM 1 point

This is a real issue. Verifying compliance with AI-limitation agreements is much harder than verifying compliance with nuclear agreements, and even the nuclear ones have problems. Carl's paper suggests lie detection and other advanced transparency measures as possibilities, but it's unclear whether governments will tolerate them even when the future of the galaxy is at stake.