
Stuart_Armstrong comments on AI ontology crises: an informal typology

Post author: Stuart_Armstrong, 13 October 2011 10:23AM

Comment author: Stuart_Armstrong, 14 October 2011 11:16:49AM

> It seems extremely unlikely that an AI with very difficult to influence values will be catatonic

Values that are impossible to influence, not just very difficult.

> doing things that would be good in (potentially unlikely) worlds where value is more easily influenced

Which would also mean doing things that would be bad in other unlikely worlds.
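To make the trade-off concrete, here is a toy expected-utility sketch. All of the worlds, probabilities, and payoffs are invented for illustration; this is not a model from the post or the thread:

```python
# Toy sketch (hypothetical numbers): an agent evaluates one action, "behave as
# if values can still be influenced", across worlds that differ on that point.
worlds = [
    # (probability, payoff of the action in that world)
    (0.98,   0.0),   # likely world: values are fixed, so the action is wasted effort
    (0.01, +10.0),   # unlikely world where influence is possible and the action helps
    (0.01, -10.0),   # unlikely world where the same action backfires
]

expected_payoff = sum(p * u for p, u in worlds)
print(expected_payoff)  # 0.0: gains in good unlikely worlds are offset by bad ones
```

Under these made-up numbers, betting on the good unlikely worlds buys nothing in expectation, which is the symmetry the reply is pointing at.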

> As far as I can tell, AI indifference doesn't work

See my reply to your comment.

Comment author: Vladimir_Nesov, 14 October 2011 09:26:28PM

> Values that are impossible to influence, not just very difficult.

Nothing is impossible. Maybe the AI's hardware is faulty (and that is why it computes 2+2=4 every time), which would prompt the AI to investigate the issue more thoroughly if it had nothing better to do.
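This "nothing is literally impossible" point can be illustrated with a small Bayesian sketch. The prior and the fault model are invented for illustration, not a claim about any actual system:

```python
# Illustrative Bayesian sketch (invented priors): repeated checks of 2 + 2
# drive the probability of a hardware fault down, but never to exactly zero.
prior_fault = 1e-6          # prior probability the arithmetic unit is faulty
p_correct_if_faulty = 0.5   # assumed chance a faulty unit still returns 4

posterior = prior_fault
for _ in range(100):        # observe "2 + 2 = 4" a hundred times
    # Bayes' rule: P(fault | result 4) = P(4 | fault) P(fault) / P(result 4)
    p4 = p_correct_if_faulty * posterior + 1.0 * (1 - posterior)
    posterior = p_correct_if_faulty * posterior / p4

print(posterior)  # around 8e-37: tiny, but still strictly positive
```

However many times the computation checks out, the agent's credence that its values are truly impossible to influence never reaches 1.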

(This is more of an out-of-context remark, since I can't place "influencing one's own values". If "values" are not really values, and are instead something that should be "influenced" for some reason, why do they matter?)