This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
Yes, Tim. As I pointed out earlier, though, under reasonable assumptions an AI that reflects on the circumstances leading to its existence, and on how its utility function came to be written, will conclude that a strictly literal interpretation of that utility function runs counter to the implicit wishes of its originator.
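To make the distinction concrete, here is a minimal toy sketch in Python contrasting an agent that maximizes its stated utility function at face value with one that treats that function as imperfect evidence of what its originator wanted. The actions, numbers, and penalty term are all invented for illustration; this is not anyone's actual proposal.

```python
# Toy illustration: literal maximization of a stated utility function vs.
# treating that function as a proxy for the designer's implicit intent.
# All names and values here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    stated_utility: float  # what the written-down utility function awards
    side_effects: float    # damage to things the designer implicitly values

ACTIONS = [
    Action("dust the shelves",    stated_utility=5.0, side_effects=0.0),
    # Literally removes all dust, so the stated function scores it highest.
    Action("incinerate the room", stated_utility=9.0, side_effects=8.0),
]

def literal_agent(actions):
    """Takes the stated utility function at face value."""
    return max(actions, key=lambda a: a.stated_utility)

def reflective_agent(actions, weight=1.0):
    """Reasons that the designer who wrote the utility function presumably
    did not intend outcomes that destroy what they value, so side effects
    count as evidence against the literal reading."""
    return max(actions, key=lambda a: a.stated_utility - weight * a.side_effects)

print(literal_agent(ACTIONS).name)     # incinerate the room
print(reflective_agent(ACTIONS).name)  # dust the shelves
```

The point of the sketch is only that the two decision rules come apart: the literal maximizer picks the high-scoring destructive action, while the reflective agent, by modeling where its utility function came from, recovers something closer to the originator's intent.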