FAWS comments on The Blackmail Equation - Less Wrong

13 Post author: Stuart_Armstrong 10 March 2010 02:46PM


Comment author: FAWS 10 March 2010 04:52:17PM *  8 points [-]

What's to stop the Countess from having precommitted to never respond to blackmail?

Or to have precommitted to act as though she had precommitted to whichever course of action it seems, in retrospect, most beneficial to have precommitted to (including meta-precommitments, meta-meta-precommitments, meta-meta-meta-precommitments, etc., up to the highest level she can model)?

Which would presumably include not being blackmailable to agents who would not try to blackmail if she absolutely committed to not be blackmailable, but being blackmailable to agents who would try blackmail even if she absolutely committed to not be blackmailable, except agents who would not have modified themselves into such agents were it not for such exceptions. Or in short: Being blackmailable only to irrationally blackmailing agents who were never deliberately modified into such by anyone.

Comment author: prase 10 March 2010 06:14:02PM 4 points [-]

Who precommits first wins. If the baron precommits to fulfil the threat unless he gets the money, later precommitment of the countess is worthless, since she expects the baron to fulfil the threat anyway. Her precommitment makes sense only if she makes it, and the baron knows about it, before his threat is announced. Assuming that all precommitments are public, the countess' precommitment to never respond to threats and the baron's precommitment to reveal the secret are mutually exclusive. Hence, if the baron actually threatens the countess, we can be sure that she hasn't precommitted to never respond.
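The backward-induction argument here can be sketched with a toy payoff model. This is only an illustration: the function and the payoff numbers are invented, not taken from the post, and "commitment" is modeled crudely as a boolean.

```python
# Toy model of the blackmail game, with invented payoffs.
# The baron moves first by choosing whether to threaten; if he threatens,
# the countess chooses whether to pay; if she refuses, the baron chooses
# whether to carry out the threat (revealing the secret costs him a little).

REVEAL_COST = 1      # baron's cost of carrying out the threat
PAYMENT = 10         # what the countess would pay
SECRET_VALUE = 100   # what the secret is worth to the countess

def play(countess_pays, baron_committed):
    """Return (baron_payoff, countess_payoff), given that the baron threatens."""
    if countess_pays:
        return PAYMENT, -PAYMENT
    # The countess refuses: a committed baron reveals anyway; an
    # uncommitted baron backs down, since revealing only costs him.
    if baron_committed:
        return -REVEAL_COST, -SECRET_VALUE
    return 0, 0

# If the countess is known to never pay, threatening earns the baron at
# best 0 (and -1 if he is committed), so a rational baron never threatens.
# But once a committed baron has threatened, the countess's best response
# is to pay -- hence "who precommits first wins".
print(play(countess_pays=False, baron_committed=True))   # (-1, -100)
print(play(countess_pays=True, baron_committed=True))    # (10, -10)
```

The ordering matters exactly as described above: each player's commitment only changes the other's behavior if it is fixed (and known) before the other moves.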

Comment author: Vladimir_Nesov 10 March 2010 06:40:22PM *  4 points [-]

There is no "first" in precommitting -- your source code precommits you to certain actions, and you can't influence your source code, only carry out what the code states. The notion of precommitting, as a modification, is bogus (not so for the signalling of being precommitted, or of being precommitted in the particular case). You could be precommitted to ignore certain signals of precommitment as well, and at some point signal such a precommitment. There seems to be no sense in distinguishing when the same signal of precommitment is made (but it should be about the same precommitment, not a conditional variant of the previous one).

Comment author: wedrifid 10 March 2010 08:39:45PM 1 point [-]

There is no "first" in precommitting -- your source code precommits you to certain actions, and you can't influence your source code, only carry out what the code states. The notion of precommitting, as a modification, is bogus

You can influence your source code. You change the words and symbols in the text file, hit recompile, load the new binary into memory and execute it. If your code is such that it considers making such modifications a suitable response to a situation, then that is what you will do.
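This picture of self-modification can be sketched as an agent whose "source code" is just a policy it can overwrite in place. A toy illustration only; the class and policy names are invented:

```python
# Toy agent whose "source code" is a policy function it can replace.
# Self-modification here is rebinding the policy attribute -- the analogue
# of editing the text file, recompiling, and running the new binary.

class Agent:
    def __init__(self, policy):
        self.policy = policy

    def act(self, situation):
        return self.policy(self, situation)

def flexible_policy(agent, situation):
    # If the situation calls for it, the agent rewrites itself into a
    # hard-line agent that always refuses, then acts as that new agent.
    if situation == "blackmail":
        agent.policy = lambda a, s: "refuse"
        return agent.act(situation)
    return "cooperate"

a = Agent(flexible_policy)
print(a.act("blackmail"))   # "refuse" -- the old policy has been replaced
print(a.act("blackmail"))   # still "refuse": the modification persists
```

In this sketch the original code does fully determine what happens, as Nesov says, but what it determines includes replacing itself, which is wedrifid's point.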

Comment author: prase 11 March 2010 09:21:40AM 2 points [-]

Common computer programs have a rather sharp boundary between their source code and their data. In brains (and hypothetical AIs) this distinction is (or would be) probably less explicit. Whenever the baron learns anything, his source code changes in some sense, involuntarily, without recompiling. Still, the original source code contains all the information. Precommitting, in order to have some importance, should mean learning about a particular output of your own source code, rather than recompiling.

Comment author: wedrifid 12 March 2010 12:17:48AM 0 points [-]

The use of 'source code' here is merely a metaphor.

Comment author: prase 12 March 2010 08:30:37AM 0 points [-]

Metaphor standing for what exactly?

Comment author: wedrifid 13 March 2010 04:01:28AM 0 points [-]

UTM tape, brain, clockwork mechanism... whatever.

Comment author: Vladimir_Nesov 10 March 2010 08:45:53PM 0 points [-]

Think functional program, or what was initially written on the tape of a UTM. We are interested in that particular fact, not what happened after.

Comment author: wedrifid 10 March 2010 09:02:21PM *  2 points [-]

But I am interested in what happened after. If a tape operating on a UTM is programmed to operate a peripheral device that takes the tape and modifies it, then it is able to do that, and the original tape is no longer running; the new one is. For any given agent in the universe it is possible to alter its state such that it behaves differently. Agents that are not implemented within this universe may not be changed in this way, and those are the agents that I am not interested in.

Think functional program

Functional programs can operate machines that alter code to produce new, different functional programs.

The baron can alter his source code. Once he does so he is a different agent. How a countess responds to the baron's decision to modify his source code is a different question. If the countess is wise she will not pay in such a situation, the baron will know this, and he will choose not to modify his source code. But it is a choice; the universe permits it.

Comment author: Vladimir_Nesov 10 March 2010 09:19:12PM 1 point [-]

If the countess is wise she will not pay in such a situation, the baron will know this and he will choose not to modify his source code. But it is a choice, the universe permits it.

Now this is a game of signalling -- to lie or not to lie, to trust or not to trust (or just how to interpret a given signal). The payoffs of the original game induce the payoffs for this game of signalling the facts useful for efficiently playing the original game.

You don't need to talk about "modified source code" to discuss this data as signalling the original source code. (The original source code is interesting, because it describes the strategy.) The modified code is only interesting to the extent that it signals the original code (which it probably doesn't).

(Incidentally, one can only change things in accordance with the laws of physics, and many-to-one mapping may not be an option, though reconstructing the past may be infeasible in practice.)

Comment author: wedrifid 10 March 2010 09:30:25PM 1 point [-]

to lie or not to lie, to trust or not to trust

But it isn't a lie. It is the truth.

You don't need to talk about "modified source code" to discuss this data as signalling the original source code.

I don't want to signal the original source code.

Comment author: Vladimir_Nesov 10 March 2010 09:47:31PM 0 points [-]

I don't want to signal the original source code.

But I want to know it, so whatever you do signals something about the original source code, possibly very little.

But it isn't a lie. It is the truth.

What's not a lie? (I'm confused.) I was just listing the possible moves in a new meta-game.

Comment author: FAWS 10 March 2010 07:01:17PM *  1 point [-]

Having precommitted first is equivalent to deterministically acting as if already precommitted in this instance; having precommitted too late is equivalent to only acting that way in future instances. I use "having precommitted" rather than "having source code such that..." because it's simpler, more intuitive, and more easily applicable to agents who don't have source code in the straightforward sense.

Comment author: Vladimir_Nesov 10 March 2010 07:11:06PM *  1 point [-]

When you say "precommitted", you mean "effectively signalled precommitment". When you say "can't precommit" (that is, can precommit only to certain other things), you mean "there is no way of effectively signalling this precommitment". Thus, you state that you can't signal that you'd uphold a counterfactual precommitment. But if it's possible to give your source code, you can.

(Or the game might have a notion of rational strategy, and so you won't need either source code or signalling of precommitment.)

Comment author: FAWS 10 March 2010 07:21:40PM *  4 points [-]

Please don't correct me on what I think. My use of precommitting has absolutely nothing to do with signaling. I first thought about these things (this explicitly) in the context of time travel, and you can't fool the universe with signaling, no matter how good your acting skills.

Comment author: Vladimir_Nesov 10 March 2010 08:53:35PM *  1 point [-]

I don't propose fooling anyone, signaling is most effective when it's truthful.

What could it mean to "make a precommitment", if not to signal the fact that your strategy is a certain way? Your strategy either is, or isn't, a certain way; this is a fixed fact about yourself, facts don't change. This being apparently the only resolution, I was not so much correcting as elucidating what you were saying (assuming you didn't think of this elucidation explicitly), in order to make the conclusion easier to see (that the problem is the inability to signal counterfactual aspects of the strategy).

Comment author: FAWS 10 March 2010 09:10:26PM *  1 point [-]

I don't propose fooling anyone, signaling is most effective when it's truthful.

Signaling is about perceptions, not the truth by necessity. That means that fooling is at least a hypothetical possibility, which is not the case for my use of precommitment.

What could it mean to "make a precommitment", if not to signal the fact that your strategy is a certain way?

Taking the decision not to change your mind later in a way you will stick to. If, as you seem to suggest, the question of whether the agent later acts a certain way is already implicit in its original source code, then this agent already comes into existence precommitted (or not, as the case may be).

Comment author: Vladimir_Nesov 10 March 2010 09:30:05PM *  2 points [-]

Taking the decision not to change your mind later in a way you will stick to.

That you've taken this decision is a fact about your strategy (as such, it's timeless: looking at it from ten years ago doesn't change it). There is a similar fact of what you'd do if the situation was different.

Did you read about counterfactual mugging, and do you agree that one should give up the money? No precommitment in this sense could help you there: there is no explicit decision in advance; it has to be a "passive" property of your strategy (the distinction between a decision that was "made" and one that wasn't is a superficial one -- that's my point).

If, as you seem to suggest, the question of whether the agent later acts a certain way is already implicit in its original source code, then this agent already comes into existence precommitted (or not, as the case may be).

How could it be otherwise? And if so, "deciding to precommit" (in the sense of fixing this fact at a certain moment) is impossible in principle. All you can do is tell the other player about this fact, maybe only after you yourself have discovered it (as being the way to win, and so the thing to do, etc.).
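The counterfactual-mugging comparison can be made concrete with a small expected-value sketch, using the standard numbers from that thought experiment (Omega flips a fair coin, asks you for $100 on tails, and on heads would have paid $10,000 only to the type of agent that pays). The function name is invented for illustration:

```python
# Expected value of the two fixed strategies in counterfactual mugging,
# evaluated before the coin flip (standard $100 / $10,000 numbers).

def expected_value(pays_when_asked):
    heads = 10000 if pays_when_asked else 0   # Omega rewards the paying type
    tails = -100 if pays_when_asked else 0    # the paying type hands over $100
    return 0.5 * heads + 0.5 * tails

print(expected_value(True))    # 4950.0
print(expected_value(False))   # 0.0
```

The paying strategy wins in expectation even though, in the tails branch, it looks like a pure loss -- which is the point that the winning move has to be a "passive" property of the fixed strategy rather than a decision made in the moment.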

Comment author: FAWS 10 March 2010 09:40:59PM *  1 point [-]

That you've taken this decision is a fact about your strategy (as such, it's timeless: looking at it from ten years ago doesn't change it). There is a similar fact of what you'd do if the situation was different.

Yes, it's a fact about your strategy, but this particular strategy would not have been your strategy before making that decision (it may have been a strategy you were considering, though). Unless you want to argue that there is no such thing as a decision, which would be a curious position in the context of a thought experiment about decision theory.

Did you read about counterfactual mugging, and do you agree that one should give up the money?

Yes, I considered myself precommitted to hand over the money when reading that. I would not have considered myself precommitted before my speculations about time travel a couple of years ago; and if I had read the scenario of the counterfactual mugging and nothing else here, and had been forced to say, without time to think it through, whether I would hand over the money, I would have said that I would not (I can't tell what I would have said given unlimited time).

Comment author: Vladimir_Nesov 10 March 2010 09:44:55PM 0 points [-]

Signaling is about perceptions, not the truth by necessity.

Any evidence, that is any way in which you may know facts about the world, is up to interpretation, and you may err in interpreting it. But it's also the only way to observe the truth.

Comment author: FAWS 10 March 2010 09:51:25PM 1 point [-]

You are talking about the relation between truth and your own perceptions. None of this is relevant to the relation between truth and what you want other people's perceptions to be, which is the context in which those words are used in the post you are replying to. Are you deliberately trying to misinterpret me? Do I need to make all of my posts lawyer-proof?

Comment author: wedrifid 10 March 2010 09:14:00PM 0 points [-]

this is a fixed fact about yourself, facts don't change.

What I was 10 years ago is a fixed fact about what I was 10 years ago. That doesn't change. But I have.

Comment author: Vladimir_Nesov 10 March 2010 09:22:18PM 0 points [-]

So? (Not a rhetorical question.)

Comment author: wedrifid 10 March 2010 09:33:29PM *  0 points [-]

The point is that it is not a fixed fact about yourself unless you have an esoteric definition of self that is "what I was, am or will be at one particular instant in time". Under the conventional meaning of 'yourself', you can change and do so constantly. Essentially the 'So?' is a fundamental rejection of the core premise of your comment.

(We disagree about a fundamental fact here. It is a fact that appears trivial and obvious to me, and I assume your view appears trivial and obvious to you. It doesn't seem likely that we will resolve this disagreement. Do you agree that it would be best for us if we just leave it at that? You can, of course, continue the discussion with FAWS, who on this point at least seems to have the same belief as I do.)

Comment author: wedrifid 10 March 2010 08:50:22PM 2 points [-]

When you say "precommitted", you mean "effectively signalled precommitment". When you say "can't precommit" (that is, can precommit only to certain other things), you mean "there is no way of effectively signalling this precommitment".

FAWS clearly does not mean that. He means what he says he means and you disagree with him.

Since the game stipulates that one of the two acts before the other, editing their source code is a viable option. If you happen to know that the other party is vulnerable to this kind of tactic, then this is the right decision to make.

(Or the game might have a notion of rational strategy, and so you won't need either source code or signalling of precommitment.)

On this I agree.

Comment author: Vladimir_Nesov 10 March 2010 08:58:43PM *  0 points [-]

FAWS clearly does not mean that. He means what he says he means and you disagree with him.

I don't disagree with him, because I don't see what else it could mean.

Since the game stipulates that one of the two acts before the other, editing their source code is a viable option.

See the other reply -- the edited code is not an interesting fact. The communicated code must be the original one -- if it's impossible to verify, this just means it can't be effectively communicated (signalled), which implies that you can't signal your counterfactual precommitment.

Comment author: wedrifid 10 March 2010 09:09:58PM 0 points [-]

See the other reply -- the edited code is not an interesting fact. The communicated code must be the original one

No, it need not be the original code. In fact, if the baron really wants to, he can destroy all copies of the original code. This is the actual universe, not a counterfactual one. The agent that is the baron is made up of quarks, which can be moved about using the normal laws of physics.

Comment author: Vladimir_Nesov 10 March 2010 09:40:42PM *  0 points [-]

It need not be the original code, but if we are interested in the original code, then we read the communicated data as evidence about the original code -- for what it's worth. It may well be in the baron's interest to give info about his code -- since otherwise, what distinguishes him from a random jumble of wires? In that case the outcome may not be appropriate for his skills.

Comment author: prase 11 March 2010 09:09:27AM 0 points [-]

By precommitting I understand becoming aware of the fact that my source code will do the particular thing with certainty. Nobody knows his source code completely, and even knowing the source code doesn't imply knowing all its outputs immediately. So, what I wanted to say is that when making the threat, the baron must know that he will certainly act the way he announces (this is the precommitment), and the countess has to know this fact about the baron (this is the signalling part).

Time matters because the baron has to calculate his counterfactual actions (i.e. partly simulate himself) before he can precommit in the sense I understand the word.

Comment author: FAWS 10 March 2010 06:31:51PM *  1 point [-]

Who precommits first wins. If the baron precommits to fulfil the threat unless he gets the money, later precommitment of the countess is worthless, since she expects the baron to fulfil the threat anyway.

Obviously. Hence my use of the perfect tense rather than the present tense. A world with agents who act and reflect the way the two players in the example do, but without prior commitments that make this precise behavior impossible, seems highly implausible to me. I personally would have considered myself precommitted not to respond to blackmail in the scenario given even before reading it, and that would have been obvious to anyone familiar enough with me to reasonably feel as confident about predicting my reaction as the scenario requires.

Comment author: Unknowns 10 March 2010 05:50:49PM 2 points [-]

"Being blackmailable only to irrationally blackmailing agents who were never deliberately modified into such by anyone"... i.e. being blackmailable by any old normal blackmailer.

Comment author: FAWS 10 March 2010 06:35:25PM 2 points [-]

Most normal blackmailers don't try to blackmail knowably unblackmailable agents and are therefore insufficiently irrational in the sense used.