Tyrrell_McAllister comments on UDT agents as deontologists - Less Wrong

Post author: Tyrrell_McAllister 10 June 2010 05:01AM




Comment author: Tyrrell_McAllister 10 June 2010 12:47:54AM  2 points

If you do that properly, you'll find that UDT acts correctly

Are you under the impression that I am saying that UDT acts incorrectly? I was explicit that I was suggesting no change to the UDT formalism. I was explicit that I was suggesting a way to anthropomorphize what the agent is thinking. Are you familiar with Dennett's notion of an intentional stance? This is like that. To suggest that we view the agent from a different stance is not to suggest that the agent acts differently.

ETA: I'm gathering that I should have been clearer that the so-called "true counterfactual mugging" is trivial or senseless when posed to a UDT agent. I'm a little surprised that I failed to make this clear, because it was the original thought that motivated the post. It's not immediately obvious to me how to make this clearer, so I will give it some thought.

Comment author: Vladimir_Nesov 10 June 2010 01:02:57AM  0 points

You've got this in the post:

In a True Counterfactual Mugging, Omega would ask the agent to give up utility. Here we see that the UDT agent cannot possibly satisfy this request.

I'm not sure what you intended to say by that, but it sounds like "the UDT agent will make the wrong decision", together with an opaque proposition that Omega offers "actual utility and not even expected utility", which it is not at all clear how to represent formally.

Comment author: Tyrrell_McAllister 10 June 2010 03:50:26AM  1 point

I'm not sure what you intended to say by that, but it sounds like "UDT agent will make the wrong decision",

No, that is not at all what I meant. That interpretation never occurred to me. I meant that the UDT agent cannot possibly give up the utility that Omega asks for in the previous sentence. Now that I understand how you misunderstood that part, I will edit it.

Comment author: Vladimir_Nesov 10 June 2010 10:08:26AM  0 points

Well, isn't it a good thing that UDT won't give up utility to Omega? You can't take away utility on one side of the coin and return it on the other; utility is global.

Comment author: Tyrrell_McAllister 10 June 2010 01:58:59PM  1 point

Well, isn't it a good thing that UDT won't give up utility to Omega?

Yes, of course it is. I'm afraid that I don't yet understand why you thought that I was suggesting otherwise.

You can't take away utility on one side of the coin, and return it on the other, utility is global.

Yes, that is why I said that the agent couldn't possibly satisfy Omega's request to give it utility.
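The "utility is global" point can be made concrete with a small sketch. The payoffs below are the usual illustrative counterfactual-mugging numbers ($100 asked on tails, $10,000 rewarded on heads), not figures from this thread: an updateless agent evaluates a whole policy by its expectation over both coin outcomes, so there is no separate "tails-branch utility" to hand over in isolation.

```python
def expected_utility(pay_when_tails: bool) -> float:
    """Global (updateless) expected utility of a policy, averaged
    over both outcomes of Omega's fair coin."""
    p_heads = 0.5
    # Heads: Omega pays 10,000 only if the agent *would* pay on tails.
    u_heads = 10_000 if pay_when_tails else 0
    # Tails: Omega asks for 100; the agent pays iff its policy says so.
    u_tails = -100 if pay_when_tails else 0
    return p_heads * u_heads + (1 - p_heads) * u_tails

# The policy of paying on tails maximizes the global expectation:
# expected_utility(True) = 4950.0, expected_utility(False) = 0.0
assert expected_utility(True) > expected_utility(False)
```

On this accounting the $100 handed over on tails is not utility "given up" in any global sense; it is one term in the single expectation the policy is chosen to maximize.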

You are attacking a position that I don't hold. But I'm not sure which position you have in mind, so I don't know how to address the misunderstanding. You haven't made any claim, in response to that paragraph, that I disagree with.