Vladimir_Nesov comments on UDT agents as deontologists - Less Wrong

8 Post author: Tyrrell_McAllister 10 June 2010 05:01AM




Comment author: Vladimir_Nesov 09 June 2010 11:55:35PM 4 points

The act-evaluating function is just a particular computation which, for the agent, constitutes the essence of rightness.

This sounds almost like saying that the agent is running its own algorithm because running this particular algorithm constitutes the essence of rightness. This perspective doesn't improve our understanding of the process of decision-making; it just wraps the whole agent in an opaque box and labels it an officially approved way to compute. The "rightness" and "actual world" properties you ascribe to this opaque box don't seem to be actually present.

Comment author: Tyrrell_McAllister 10 June 2010 12:33:40AM *  0 points

The "rightness" and "actual world" properties you ascribe to this opaque box don't seem to be actually present.

They aren't present as part of what we must know to predict the agent's actions. They are part of a "stance" (like Dennett's intentional stance) that we can use to give a narrative framework within which to understand the agent's motivation. What you are calling a black box isn't supposed to be part of the "view" at all. Instead of a black box, there is a socket where a particular program vector <P1, P2, ...> and "preference vector" <E1, E2, ...>, together with the UDT formalism, can be plugged in.

ETA: The reference to a "'preference vector' <E1, E2, ...>" was a misreading of Wei Dai's post on my part. What I (should have) meant was the utility function U over world-evolution vectors <E1, E2, ...>.
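To make the "socket" picture concrete, the structure being discussed — world programs <P1, P2, ...> whose evolutions depend on the agent's chosen input-output mapping, plus a utility function U over the resulting world-evolution vector <E1, E2, ...> — can be sketched roughly as follows. This is an illustrative toy, not Wei Dai's actual formalism; all names (`udt_choose`, the callables `P` and `U`) are hypothetical:

```python
# Illustrative sketch of a UDT-style choice: pick the input->output
# mapping S that maximizes U applied to the vector of world
# evolutions <E1, E2, ...>. All names are hypothetical.
from itertools import product

def udt_choose(inputs, outputs, world_programs, U):
    """Return the policy S (a dict mapping each input to an output)
    that maximizes U(<E1, E2, ...>), where Ei = world_programs[i](S)."""
    best_mapping, best_utility = None, float("-inf")
    # Enumerate every candidate policy: one output per possible input.
    for assignment in product(outputs, repeat=len(inputs)):
        S = dict(zip(inputs, assignment))
        # Each world program's evolution depends on the agent's policy.
        evolutions = tuple(P(S) for P in world_programs)
        u = U(evolutions)
        if u > best_utility:
            best_mapping, best_utility = S, u
    return best_mapping
```

On this sketch, "rightness" would just be the claim that `udt_choose` (the act-evaluating computation) is the officially approved way to compute — which is the move Vladimir_Nesov is objecting to above.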

Comment author: Vladimir_Nesov 10 June 2010 01:06:54AM 0 points

I don't understand this.

Comment author: Mass_Driver 10 June 2010 12:30:35AM *  0 points

Edited

Previously, I attempted to disagree with this comment. My disagreement was tersely dismissed, and, when I protested, my protests were strongly downvoted. This suggests two possibilities:

(1) I fail to understand this topic in ways that I fail to understand, or (2) I lack the status in this community for my disagreement with Vladimir_Nesov on this topic to be welcomed or taken seriously.

If I were certain that the problem were (2), then I would continue to press my point, and the karma loss be damned. However, I am still uncertain about what the problem is, and so I am deleting all my posts on the thread underneath this comment.

One commenter suggested that I was being combative myself; he may be right. If so, I apologize for my tone.

Comment author: Vladimir_Nesov 10 June 2010 12:47:23AM 0 points

Saying that this decision is "right" has no explanatory power and gives no guidelines on the design of decision-making algorithms.

Comment author: Tyrrell_McAllister 10 June 2010 01:32:25AM *  0 points

gives no guidelines on the design of decision-making algorithms.

I am nowhere purporting to give guidelines for the design of a decision-making algorithm. As I said, I am not suggesting any alteration of the UDT formalism. I was also explicit in the OP that there is no problem understanding at an intuitive level what the agent's builders were thinking when they decided to use UDT.

If all you care about is designing an agent that you can set loose to harvest utility for you, then my post is not meant to be interesting to you.

Comment author: Vladimir_Nesov 10 June 2010 01:40:23AM 2 points

Beliefs should pay rent, not fly in the ether, unattached to what they are supposed to be about.

Comment author: Tyrrell_McAllister 10 June 2010 04:03:43AM *  0 points

Beliefs should pay rent ...

The whole Eliezer quote is that beliefs should "pay rent in future anticipations". Beliefs about which once-possible world is actual do this.

Comment author: Vladimir_Nesov 10 June 2010 10:33:42AM 0 points

The beliefs in question are yours, and anticipation is about the agent's design or behavior.