
diegocaleiro comments on Superintelligence 12: Malignant failure modes - Less Wrong Discussion

7 Post author: KatjaGrace 02 December 2014 02:02AM




Comment author: diegocaleiro 02 December 2014 04:23:47AM * 0 points

Constraining the AI by limiting how many compute cycles it can spend working out how to make paperclips, plus a spatial restriction (don't touch anything outside this area), plus an energy budget (use at most X energy to create 10 paperclips), would help. Allowing for levels of uncertainty, such as being 89 to 95% certain that something is the case, would also help.
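As a minimal sketch of the idea above (all names, numbers, and thresholds are illustrative, not taken from Bostrom or any real system), the proposed constraints could be written down as an explicit budget that every candidate action is checked against:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskConstraints:
    """Hypothetical bounds on a paperclip-making task (illustrative only)."""
    max_cycles: int       # compute budget for planning
    region: tuple         # allowed bounding box: (xmin, ymin, xmax, ymax)
    max_energy: float     # maximum energy the agent may expend
    target_confidence: float  # stop once this certain, e.g. 0.89-0.95

def within_constraints(cycles_used, position, energy_used, c):
    """True only if compute, spatial, and energy bounds are all respected."""
    x, y = position
    xmin, ymin, xmax, ymax = c.region
    return (cycles_used <= c.max_cycles
            and xmin <= x <= xmax and ymin <= y <= ymax
            and energy_used <= c.max_energy)

def certain_enough(p, c):
    """Accept a belief at the target confidence instead of demanding
    near-certainty, which would invite unbounded resource use."""
    return p >= c.target_confidence
```

For example, with `TaskConstraints(max_cycles=10**6, region=(0, 0, 10, 10), max_energy=500.0, target_confidence=0.9)`, an action at position `(11, 5)` fails the spatial check and would be rejected. The point of the sketch is only to make the constraint set concrete; it says nothing about whether a superintelligence would respect such a check, which is exactly the difficulty Bostrom raises.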

However, very similar suggestions are dealt with at length by Bostrom, who concludes that it would still be extremely difficult to constrain the AI.