Tyrrell_McAllister comments on UDT agents as deontologists - Less Wrong

8 Post author: Tyrrell_McAllister 10 June 2010 05:01AM




Comment author: Vladimir_Nesov 09 June 2010 11:55:35PM 4 points

The act-evaluating function is just a particular computation which, for the agent, constitutes the essence of rightness.

This sounds almost like saying that the agent is running its own algorithm because running this particular algorithm constitutes the essence of rightness. This perspective doesn't improve our understanding of the process of decision-making; it just rounds up the whole agent into an opaque box and labels it an officially approved way to compute. The "rightness" and "actual world" properties you ascribe to this opaque box don't seem to be actually present.

Comment author: Tyrrell_McAllister 10 June 2010 12:33:40AM 0 points

The "rightness" and "actual world" properties you ascribe to this opaque box don't seem to be actually present.

They aren't present as part of what we must know to predict the agent's actions. They are part of a "stance" (like Dennett's intentional stance) that we can use to give a narrative framework within which to understand the agent's motivation. What you are calling a black box isn't supposed to be part of the "view" at all. Instead of a black box, there is a socket where a particular program vector <P1, P2, . . .>, a "preference vector" <E1, E2, . . .>, and the UDT formalism can be plugged in.

ETA: The reference to a "'preference vector' <E1, E2, . . .>" was a misreading of Wei Dai's post on my part. What I (should have) meant was the utility function U over world-evolution vectors <E1, E2, . . .>.
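The "socket" picture above (with the ETA's correction applied) can be sketched as a toy program. This is purely illustrative and not Wei Dai's actual formalism: the names `udt_choose`, `P1`, `P2`, and `U` are hypothetical, and the world-programs and utility function are stand-ins. The fixed schema is the argmax; what gets "plugged into the socket" are the world-programs P_i, whose outputs form a world-evolution vector <E1, E2, . . .>, and a utility function U over such vectors.

```python
def udt_choose(actions, world_programs, utility):
    """Return the action whose resulting world-evolution vector
    <E1, E2, ...> (with E_i = P_i(action)) maximizes U.
    This schema is fixed; the programs and utility are plugged in."""
    def evolutions(action):
        return tuple(P(action) for P in world_programs)
    return max(actions, key=lambda a: utility(evolutions(a)))

# Hypothetical plug-ins: two toy world-programs and a utility over
# the resulting evolution vector.
P1 = lambda a: ("world1", a)            # world 1's evolution given the output
P2 = lambda a: ("world2", 2 * a)        # world 2's evolution given the output
U = lambda evs: sum(payoff for _, payoff in evs)  # utility of <E1, E2>

print(udt_choose([0, 1, 2], [P1, P2], U))  # → 2
```

On this picture, nothing inside `udt_choose` is labeled "right" or "the actual world"; those readings belong to the stance an observer takes toward the whole plugged-in system.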

Comment author: Vladimir_Nesov 10 June 2010 01:06:54AM 0 points

I don't understand this.