Luke_A_Somers comments on Thought Crimes - Less Wrong Discussion

5 points | Post author: Coscott | 15 January 2014 05:23AM

Comment author: Luke_A_Somers 15 January 2014 04:26:24PM 2 points

1) In the first case, thinking about AI may raise the general level of risk, but there's a perfectly good Schelling point at 'don't implement the AI'; if you could detect the thoughts, you ought to be able to detect the implementation. In the second case, you don't need a rule specifically about thinking; you just need a rule against torture. If someone's torture method involves only thinking, then that line of thought ends up illegal as a consequence, without anyone having to write laws about thinking as such.

2) In general, one reason thought crimes are bad is that we don't have strong control over what we think of. If good enough mind-reading is implemented, I suspect that people will have a greater degree of control over what they think.

3) Another reason thought crimes are bad is that we would like a degree of privacy, and enforcement of thought laws would necessarily infringe on that a great deal. But if your thoughts are computed, the laws can be made into a function to be called on your mental state. That function could be arranged to return only 'OK/Not OK', or 'OK/Not OK/You are getting uncomfortably close on topic X', with no side effects. That would seem to me much less privacy-invading.
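
To make that concrete, here's a minimal sketch in Python. Everything in it is an invented placeholder (the representation of a mental state as a set of topics, the particular prohibited and sensitive topics), not a claim about how minds would actually be encoded; the point is only the shape of the interface: a pure function of the mental state whose sole output is the verdict.

```python
from enum import Enum
from typing import FrozenSet

class Verdict(Enum):
    OK = "OK"
    NOT_OK = "Not OK"
    CLOSE = "You are getting uncomfortably close on topic X"

# Hypothetical stand-in for a mental state: just the set of topics
# currently being entertained. A real system would be far richer.
MentalState = FrozenSet[str]

PROHIBITED = frozenset({"torture-by-thought"})  # hypothetical prohibited content
SENSITIVE = frozenset({"topic X"})              # hypothetical near-the-line topic

def check_thought_law(state: MentalState) -> Verdict:
    """Pure function of the mental state: returns only a verdict.

    No logging, no stored copy of the state, no other side effects --
    the single Verdict value is the only information that leaves the call.
    """
    if state & PROHIBITED:
        return Verdict.NOT_OK
    if state & SENSITIVE:
        return Verdict.CLOSE
    return Verdict.OK

# The enforcement mechanism sees only the verdict, never the state itself.
print(check_thought_law(frozenset({"lunch", "topic X"})).value)
# -> "You are getting uncomfortably close on topic X"
```

The design choice doing the work is that the function is side-effect-free and its return type is deliberately tiny: whoever runs the check learns at most a few bits, rather than the contents of the thoughts being examined.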