paulfchristiano comments on Cryptographic Boxes for Unfriendly AI - Less Wrong

24 Post author: paulfchristiano 18 December 2010 08:28AM


Comment author: wedrifid 20 December 2010 06:07:31AM 0 points

If you think that an AI can manipulate our moral values without ever getting to say anything to us, then that is a different story.

A few seconds of thought makes it easy to see how this is possible, even without caring about imaginary people. This is a question of cooperation among humans.

This danger occurs even before putting an AI in a box though, and in fact even before the design of AI becomes possible. This scheme does nothing to exacerbate that danger.

This is a good point too, although I wouldn't go as far as to say it does nothing to exacerbate the danger. The increased tangibility matters.

Comment author: paulfchristiano 20 December 2010 06:02:40PM 0 points

I think that running an AI in this way is no worse than simply having the code of an AGI exist. I agree that just having the code sitting around is probably dangerous.

Comment author: wedrifid 21 December 2010 03:51:21AM 0 points

Nod; in terms of direct danger the two cases aren't much different. The difference in risk comes only from the psychological impact on our fellow humans: the Pascal's Commons becomes that much more salient to them. (Yes, I did just make that term up. The implications of the combination are clear, I hope.)