Mass_Driver comments on UDT agents as deontologists - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Previously, I attempted to disagree with this comment. My disagreement was tersely dismissed, and, when I protested, my protests were strongly downvoted. This suggests two possibilities:
(1) I misunderstand this topic in ways that I cannot identify, or (2) I lack the status in this community for my disagreement with Vladimir_Nesov on this topic to be welcomed or taken seriously.
If I were certain that the problem were (2), then I would continue to press my point, karma loss be damned. However, I am still uncertain about what the problem is, and so I am deleting all my posts in the thread underneath this comment.
One commenter suggested that I was being combative myself; he may be right. If so, I apologize for my tone.
Saying that this decision is "right" has no explanatory power and gives no guidelines for the design of decision-making algorithms.
I am nowhere purporting to give guidelines for the design of a decision-making algorithm. As I said, I am not suggesting any alteration of the UDT formalism. I was also explicit in the OP that there is no problem understanding, at an intuitive level, what the agent's builders were thinking when they decided to use UDT.
If all you care about is designing an agent that you can set loose to harvest utility for you, then my post is not meant to be interesting to you.
Beliefs should pay rent, not fly in the ether, unattached to what they are supposed to be about.
The whole Eliezer quote is that beliefs should "pay rent in future anticipations". Beliefs about which once-possible world is actual do this.
The beliefs in question are yours, and anticipation is about the agent's design or behavior.