HungryHobo comments on What can go wrong with the following protocol for AI containment? - Less Wrong Discussion

Post author: ZoltanBerrigomo, 11 January 2016 11:03PM

Comment author: Slider, 14 January 2016 02:27:21PM (0 points)

I don't mean the writing of this post, but in general the principle of trying to gain utility by minimising self-awareness.

Usually you don't make processes as opaque as possible to increase their chances of going right. On the contrary, transparency is seen as pretty important, at least for social and political processes.

If we are going to create mini-life just to calculate 42, watching it get calculated should not be some extra-special temptation. Preventing the "interrupt/tamper" decision by limiting the available options is rather backwards; it would be better to argue why that option should not be chosen even when it is available.