Alicorn comments on Open Thread: March 2010, part 3 - Less Wrong

Post author: RobinZ 19 March 2010 03:14AM




Comment author: Alicorn 31 March 2010 04:17:40AM 1 point

Interesting. Just to be contrary?

Comment author: JGWeissman 31 March 2010 04:56:54AM 4 points

Because, as near as I can calculate, UDT advises me to. Like what Wedrifid said.

And like Eliezer said here:

Or the Countess just decides not to pay, unconditional on anything the Baron does. Also, if the Baron ends up in an infinite loop or failing to resolve the way the Baron wants to, that is not really the Countess's problem.

And here:

As I always press the "Reset" button in situations like this, I will never find myself in such a situation.

EDIT: Just to be clear, the idea is not that I quickly shut off the AI before it can torture simulated Eliezers; it could have already done so in the past, as Wei Dai points out below. Rather, because in this situation I immediately perform an action detrimental to the AI (switching it off), any AI that knows me well enough to simulate me knows that there's no point in making or carrying out such a threat.

I am assuming that an agent powerful enough to put me in this situation can predict that I would behave this way.
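The reasoning above can be sketched as a toy model (not from the thread; all payoff numbers and function names are illustrative assumptions): an AI that can simulate the human only issues a threat if, given the human's predicted response, the threat is profitable. A policy of always pressing Reset makes it unprofitable, so the threat is never made.

```python
# Toy model of the anti-blackmail argument. The payoff values here are
# hypothetical and chosen only to illustrate the structure of the argument.

def human_policy(threatened: bool) -> str:
    """The commenter's stated policy: always press Reset when threatened."""
    return "reset" if threatened else "cooperate"

def ai_payoff(threaten: bool, human_action: str) -> int:
    # Assumed payoffs: successful extortion pays +10; issuing a threat
    # against a human who resets costs the AI -5; no threat is neutral (0).
    if not threaten:
        return 0
    return 10 if human_action == "pay" else -5

def ai_decision(policy) -> bool:
    """An AI that can simulate the human threatens only if it profits by it."""
    return ai_payoff(True, policy(True)) > ai_payoff(False, policy(False))

print(ai_decision(human_policy))  # False: the threat is never made
```

The point of the sketch is that the decision runs through the simulation: because `ai_decision` evaluates the human's policy before acting, a policy of unconditional refusal removes the incentive to threaten at all.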

Comment author: wedrifid 31 March 2010 04:32:13AM 2 points

It also potentially serves decision-theoretic purposes, much like a Duchess choosing not to pay off her blackmailer. If it is assumed that a cheesecake maximiser has a reason to force you into such a position (rather than doing it himself), then it is not unreasonable to expect that the universe may be better off if Cheesy had to take his second option.