XiXiDu comments on Free Will as Unsolvability by Rivals - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (11)
Pavitra, I'd love to know what you think about my post on free will:
In other words, I think a paperclip maximizer is dangerous because it has more free will, i.e. it is free to (not free from) realize what it wants, since its effect on the universe is much larger than that of a human or group of humans. An agent's perception of being free is therefore correlated with its ability to realize its goals, that is, its probability of success.
Your linked post seems to be more about an agent interacting with a dumb-matter environment, and about the relationship between free will and determinism. My post is specifically about what happens when two agents interact with each other. The point I was trying to make is that the sense of indignation that accompanies the intuition of free will is tied to the desire to protect one's utility function from alteration in the presence of a hostile intelligence.
Your comment bridges the two nicely.