Matt_Simpson comments on Open Thread, May 1-15, 2012 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Right, but your conclusion still doesn't follow - my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.
Well, of course. But which of my conclusions do you mean doesn't follow?
But the "[of others]" part is unnecessary. If every intelligent agent optimizes away its own unnecessary pain, that is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there is a serious shortage of atoms for all agents to achieve their otherwise non-contradictory goals.
This is highly dependent on the strategic structure of the situation.