Vladimir_Nesov comments on The Blackmail Equation - Less Wrong

13 Post author: Stuart_Armstrong 10 March 2010 02:46PM

You are viewing a comment permalink.

Comments (87)

You are viewing a single comment's thread.

Comment author: Vladimir_Nesov 10 March 2010 06:40:22PM *  4 points [-]

There is no "first" in precommitting -- your source code precommits you to certain actions, and you can't influence your source code, only carry out what the code states. The notion of precommitting, as a modification, is bogus (not so for the signalling of being precommitted, or of being precommitted in the particular case). You could be precommitted to ignore certain signals of precommitment as well, and at some point signal such a precommitment. There seems to be no sense in distinguishing between the times at which the same signal of precommitment is made (provided it is about the same precommitment, not a conditional variant of the previous one).

Comment author: wedrifid 10 March 2010 08:39:45PM 1 point [-]

There is no "first" in precommitting -- your source code precommits you to certain actions, and you can't influence your source code, only carry out what the code states. The notion of precommitting, as a modification, is bogus

You can influence your source code. You change the words and symbols in the text file, hit recompile, load the new binary into memory, and execute it. If your code is such that it considers making such modifications a suitable response to a situation, then that is what you will do.
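The edit-recompile-execute loop described above can be made concrete with a toy sketch. Everything here is invented for illustration (the `run` helper, the agents' policies, the situation labels); it only shows an agent whose "source code" is data that one of its own actions can replace:

```python
# Toy sketch: an agent's "source code" is just data, and one of its
# possible actions is emitting a modified successor to run in its place.

def run(source, situation):
    """Compile and execute an agent's source against a situation.
    The agent returns (action, new_source); new_source is None when
    the agent leaves its code unchanged."""
    namespace = {}
    exec(source, namespace)
    return namespace["act"](situation)

original = '''
def act(situation):
    if situation == "threatened":
        # Respond by rewriting oneself into an unconditional refuser.
        successor = 'def act(situation):\\n    return ("refuse", None)'
        return ("refuse", successor)
    return ("comply", None)
'''

action, successor = run(original, "threatened")
# The code now "loaded into memory" is the successor, not the original.
action2, _ = run(successor, "threatened")
```

On wedrifid's view the successor, not the original, is what subsequently acts; on Nesov's view the original code already fully determined that this rewrite would happen.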

Comment author: prase 11 March 2010 09:21:40AM 2 points [-]

Common computer programs have a rather sharp boundary between their source code and their data. In brains (and hypothetical AIs) this distinction is (or would be) probably less explicit. Whenever the baron learns anything, his source code changes in some sense, involuntarily, without recompiling. Still, the original source code contains all the information. Precommitting, in order to have some importance, should mean learning about a particular output of your own source code, rather than recompiling.

Comment author: wedrifid 12 March 2010 12:17:48AM 0 points [-]

The use of 'source code' here is merely a metaphor.

Comment author: prase 12 March 2010 08:30:37AM 0 points [-]

Metaphor standing for what exactly?

Comment author: wedrifid 13 March 2010 04:01:28AM 0 points [-]

UTM tape, brain, clockwork mechanism... whatever.

Comment author: Vladimir_Nesov 10 March 2010 08:45:53PM 0 points [-]

Think functional program, or what was initially written on the tape of a UTM. We are interested in that particular fact, not what happened after.

Comment author: wedrifid 10 March 2010 09:02:21PM *  2 points [-]

But I am interested in what happened after. If a tape operating on a UTM is programmed to operate a peripheral device that takes the tape and modifies it, then it is able to do that, and the original tape is no longer running; the new one is. For any given agent in the universe it is possible to alter its state such that it behaves differently. Agents that are not implemented within this universe cannot be changed in this way, and those are the agents that I am not interested in.

Think functional program

Functional programs can operate machines that alter code to produce new, different functional programs.

The baron can alter his source code. Once he does so he is a different agent. How a countess responds to the baron's decision to modify his source code is a different question. If the countess is wise she will not pay in such a situation, the baron will know this, and he will choose not to modify his source code. But it is a choice; the universe permits it.
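A minimal payoff sketch of that last claim, with invented numbers (the thread gives none). Assume the baron's original code already carries the threat, and suppose a wise countess pays only for commitments present in the original code, ignoring ones installed after the game began; then modifying gains the baron nothing, so he prefers not to modify:

```python
# Invented payoffs for illustration: blackmail nets the baron 100 if
# the countess pays, 0 otherwise.

def countess_pays(baron_modified_code):
    # A wise countess ignores commitments installed after the game
    # began: they signal nothing about the baron's original strategy.
    return not baron_modified_code

def baron_payoff(modify):
    return 100 if countess_pays(baron_modified_code=modify) else 0

# Foreseeing her response, the baron chooses not to modify.
best_choice = max([False, True], key=baron_payoff)
```

The choice exists, as wedrifid says; it is simply never the payoff-maximizing one against a wise countess.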

Comment author: Vladimir_Nesov 10 March 2010 09:19:12PM 1 point [-]

If the countess is wise she will not pay in such a situation, the baron will know this, and he will choose not to modify his source code. But it is a choice; the universe permits it.

Now this is a game of signalling -- to lie or not to lie, to trust or not to trust (or just how to interpret a given signal). The payoffs of the original game induce the payoff for this game of signalling the facts useful for efficiently playing the original game.

You don't need to talk about "modified source code" to discuss this data as signalling the original source code. (The original source code is interesting because it describes the strategy.) The modified code is only interesting to the extent that it signals the original code (which it probably doesn't).

(Incidentally, one can only change things in accordance with the laws of physics, and many-to-one mapping may not be an option, though reconstructing the past may be infeasible in practice.)

Comment author: wedrifid 10 March 2010 09:30:25PM 1 point [-]

to lie or not to lie, to trust or not to trust

But it isn't a lie. It is the truth.

You don't need to talk about "modified source code" to discuss this data as signalling the original source code.

I don't want to signal the original source code.

Comment author: Vladimir_Nesov 10 March 2010 09:47:31PM 0 points [-]

I don't want to signal the original source code.

But I want to know it, so whatever you do signals something about the original source code, possibly very little.

But it isn't a lie. It is the truth.

What's not a lie? (I'm confused.) I was just listing the possible moves in a new meta-game.

Comment author: FAWS 10 March 2010 07:01:17PM *  1 point [-]

Having precommitted first is equivalent to deterministically acting as if already precommitted in this instance; having precommitted too late is equivalent to only acting that way in future instances. I use "having precommitted" rather than "having source code such that..." because it's simpler, more intuitive, and more easily applicable to agents who don't have source code in the straightforward sense.

Comment author: Vladimir_Nesov 10 March 2010 07:11:06PM *  1 point [-]

When you say "precommitted", you mean "effectively signalled precommitment". When you say "can't precommit" (that is, can precommit only to certain other things), you mean "there is no way of effectively signalling this precommitment". Thus, you state that you can't signal that you'd uphold a counterfactual precommitment. But if it's possible to give your source code, you can.

(Or the game might have a notion of rational strategy, and so you won't need either source code or signalling of precommitment.)

Comment author: FAWS 10 March 2010 07:21:40PM *  4 points [-]

Please don't correct me on what I think. My use of precommitting has absolutely nothing to do with signaling. I first thought about these things (this explicitly) in the context of time travel, and you can't fool the universe with signaling, no matter how good your acting skills.

Comment author: Vladimir_Nesov 10 March 2010 08:53:35PM *  1 point [-]

I don't propose fooling anyone, signaling is most effective when it's truthful.

What could it mean to "make a precommitment", if not to signal the fact that your strategy is a certain way? Your strategy either is, or isn't, a certain way; this is a fixed fact about yourself, facts don't change. This being apparently the only resolution, I was not so much correcting as elucidating what you were saying (while assuming you didn't think of this elucidation explicitly), in order to make the conclusion easier to see (that the problem is with the inability to signal counterfactual aspects of the strategy).

Comment author: FAWS 10 March 2010 09:10:26PM *  1 point [-]

I don't propose fooling anyone, signaling is most effective when it's truthful.

Signaling is about perceptions, not the truth by necessity. That means that fooling is at least a hypothetical possibility, which is not the case for my use of precommitment.

What could it mean to "make a precommitment", if not to signal the fact that your strategy is a certain way?

Taking the decision not to change your mind later in a way you will stick to. If, as you seem to suggest, the question of whether the agent later acts a certain way is already implicit in its original source code, then this agent already comes into existence precommitted (or not, as the case may be).

Comment author: Vladimir_Nesov 10 March 2010 09:30:05PM *  2 points [-]

Taking the decision not to change your mind later in a way you will stick to.

That you've taken this decision is a fact about your strategy (as such, it's timeless: looking at it from ten years ago doesn't change it). There is a similar fact of what you'd do if the situation was different.

Did you read about counterfactual mugging, and do you agree that one should give up the money? No precommitment in this sense could help you there: there is no explicit decision in advance; it has to be a "passive" property of your strategy (the distinction between a decision that was "made" and one that wasn't is a superficial one -- that's my point).
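For reference, counterfactual mugging with its usual numbers (a fair coin; pay $100 on tails; receive $10,000 on heads if and only if your strategy would pay on tails). Evaluated as a timeless property of the strategy, over the coin flip, paying wins in expectation -- a sketch assuming those standard parameters:

```python
# Expected value of a strategy in counterfactual mugging, taken over
# the coin flip (the "timeless" view: the strategy is evaluated as a
# whole, not after seeing the outcome).

def expected_value(pays_on_tails, p_heads=0.5, reward=10_000, cost=100):
    heads_payoff = reward if pays_on_tails else 0  # Omega rewards payers
    tails_payoff = -cost if pays_on_tails else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

ev_pay = expected_value(True)      # 0.5 * 10000 + 0.5 * (-100)
ev_refuse = expected_value(False)
```

The calculation is indifferent to *when* the agent adopted the paying disposition, which is exactly the point at issue below.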

If, as you seem to suggest, the question of whether the agent later acts a certain way is already implicit in its original source code, then this agent already comes into existence precommitted (or not, as the case may be).

How could it be otherwise? And if so, "deciding to precommit" (in the sense of fixing this fact at a certain moment) is impossible in principle. All you can do is tell the other player about this fact, maybe only after you yourself discovered it (as being the way to win, and so the thing to do, etc.)

Comment author: FAWS 10 March 2010 09:40:59PM *  1 point [-]

That you've taken this decision is a fact about your strategy (as such, it's timeless: looking at it from ten years ago doesn't change it). There is a similar fact of what you'd do if the situation was different.

Yes, it's a fact about your strategy, but this particular strategy would not have been your strategy before making that decision (it may have been a strategy you were considering, though). Unless you want to argue that there is no such thing as a decision, which would be a curious position in the context of a thought experiment about decision theory.

Did you read about counterfactual mugging, and do you agree that one should give up the money?

Yes, I considered myself precommitted to hand over the money when reading that. I would not have considered myself precommitted before my speculations about time travel a couple of years ago, and if I had read the scenario of the counterfactual mugging and nothing else here, and if I had been forced to say whether I would hand over the money without time to think it through, I would have said that I would not (I can't tell what I would have said given unlimited time).

Comment author: Vladimir_Nesov 10 March 2010 09:56:34PM 0 points [-]

Yes, I considered myself precommitted to hand over the money when reading that. I would not have considered myself precommitted before my speculations about time travel a couple of years ago, and if I had read the scenario of the counterfactual mugging and nothing else here, and if I had been forced to say whether I would hand over the money without time to think it through, I would have said that I would not (I can't tell what I would have said given unlimited time).

Would it make a difference if Omega told you that it tossed the coin a thousand years ago (before you'd "precommitted"), but only came for the money now?

Comment author: FAWS 10 March 2010 10:03:05PM 2 points [-]

That would make no difference whatsoever, of course. Only the time I learn about the mugging matters.

Comment author: Vladimir_Nesov 10 March 2010 09:57:14PM *  -1 points [-]

Yes, it's a fact about your strategy, but this particular strategy would not have been your strategy before making that decision.

Determinism doesn't allow such magic. You need to read up on free will.

Comment author: FAWS 10 March 2010 10:11:02PM *  3 points [-]

Are you being deliberately obtuse?

I consider a strategy that involves killing myself in certain circumstances, but have not yet committed to it.

  • Before I can do so, these circumstances suddenly arise. I chicken out and don't kill myself, because I haven't committed yet (or psyched myself up, if you want to call it that). That strategy wasn't really my strategy yet.

  • Five minutes later I have committed myself to that strategy. The circumstances I would kill myself under arise, and I actually do it (or so I hope; I'm not completely sure I can make precommitments that strong). The strategy I previously considered is now my strategy.

How is any of that free will magic?

Comment author: Vladimir_Nesov 10 March 2010 09:44:55PM 0 points [-]

Signaling is about perceptions, not the truth by necessity.

Any evidence, that is any way in which you may know facts about the world, is up to interpretation, and you may err in interpreting it. But it's also the only way to observe the truth.

Comment author: FAWS 10 March 2010 09:51:25PM 1 point [-]

You are talking about the relation between truth and your own perceptions. None of this is relevant to the relation between truth and what you want other people's perceptions to be, which is the context in which those words are used in the post you reply to. Are you deliberately trying to misinterpret me? Do I need to make all of my posts lawyer-proof?

Comment author: Vladimir_Nesov 10 March 2010 10:04:24PM 0 points [-]

Are you deliberately trying to misinterpret me?

No.

You are talking about the relation between truth and your own perceptions. None of this is relevant to the relation between truth and what you want other people's perceptions to be, which is the context in which those words are used in the post you reply to.

The other people will interpret your words depending on whether they expect them to be in accordance with reality. Thus, I'm talking about the relation between the way your words will be interpreted by the people you talk to, and the truth of your words. If signaling (communication) bore no relation to the truth, it would be as useless as listening to white noise.

Comment author: FAWS 10 March 2010 10:22:56PM 1 point [-]

You're doing it again. I never said that signaling bore no relationship to the truth whatsoever; I said it was about perceptions and not, by necessity, about the truth. What I (obviously, it seemed to me) meant was that signaling means attempting to manipulate the perceptions of others in a certain way, and that this does not necessarily mean changing the reality of the thing those perceptions are about.

Comment author: wedrifid 10 March 2010 09:14:00PM 0 points [-]

this is a fixed fact about yourself, facts don't change.

What I was 10 years ago is a fixed fact about what I was 10 years ago. That doesn't change. But I have.

Comment author: Vladimir_Nesov 10 March 2010 09:22:18PM 0 points [-]

So? (Not a rhetorical question.)

Comment author: wedrifid 10 March 2010 09:33:29PM *  0 points [-]

The point is that it is not a fixed fact about yourself unless you have an esoteric definition of self, namely "what I was, am, or will be at one particular instant in time". Under the conventional meaning of 'yourself', you can change, and you do so constantly. Essentially, my answer to the 'So?' is a fundamental rejection of the core premise of your comment.

(We disagree about a fundamental fact here. It is a fact that appears trivial and obvious to me, and I assume your view appears trivial and obvious to you. It doesn't seem likely that we will resolve this disagreement. Do you agree that it would be best for us to just leave it at that? You can, of course, continue the discussion with FAWS, who on this point at least seems to hold the same belief as I do.)

Comment author: Vladimir_Nesov 10 March 2010 10:12:52PM *  0 points [-]

Also, you shouldn't agree with the statement I cited here. (At least, it seems to be more clear-cut than the rest of the discussion.) Do you?

Comment author: wedrifid 10 March 2010 10:26:23PM 0 points [-]

I agree with the statement of FAWS's that you quoted there, although I do note that it is ambiguous. I only agree with it to the extent that the meaning is this:

Yes, it's a fact about your strategy, but this particular strategy would not have been your strategy before making [the decision to precommit, which involved some change in the particles in the universe such that your new state is one that will take a certain action in a particular circumstance].

Comment author: Vladimir_Nesov 10 March 2010 09:50:16PM *  0 points [-]

What is the fact about which you see us disagreeing? I don't understand this discussion as having a point of disagreement. From my point of view, we are arguing relevance, not facts. (For example, I don't see why it's interesting to talk of "who this fact is about", and I've lost the connection of this point to the original discussion.)

Comment author: wedrifid 10 March 2010 10:01:03PM *  0 points [-]

What is the fact about which you see us disagreeing?

  1. You can modify your source code.
  2. You can make precommitments.
  3. The answer to "What could it mean to 'make a precommitment'?" is: 'make a precommitment'. That is a distinct thing from 'signalling that you have made a precommitment'. (If you make a precommitment and do not signal it effectively, then it sucks for you.)
  4. More simply, on the point on which you were disagreeing with FAWS, I assert that:
    • FAWS' position does have meaning.
    • FAWS' meaning is a different meaning to what you corrected it to.
    • FAWS is right.

I don't understand this discussion as having a point of disagreement. From my point of view, we are arguing definitions, not facts.

It is probably true that we would make the same predictions about what would happen in given interactions between agents.

Comment author: wedrifid 10 March 2010 08:50:22PM 2 points [-]

When you say "precommitted", you mean "effectively signalled precommitment". When you say "can't precommit" (that is, can precommit only to certain other things), you mean "there is no way of effectively signalling this precommitment".

FAWS clearly does not mean that. He means what he says he means and you disagree with him.

Since the game stipulates that one of the two acts before the other, editing their source code is a viable option. If you happen to know that the other party is vulnerable to this kind of tactic, then this is the right decision to make.

(Or the game might have a notion of rational strategy, and so you won't need either source code or signalling of precommitment.)

On this I agree.

Comment author: Vladimir_Nesov 10 March 2010 08:58:43PM *  0 points [-]

FAWS clearly does not mean that. He means what he says he means and you disagree with him.

I don't disagree with him, because I don't see what else it could mean.

Since the game stipulates that one of the two acts before the other, editing their source code is a viable option.

See the other reply -- the edited code is not an interesting fact. The communicated code must be the original one -- if it's impossible to verify, this just means it can't be effectively communicated (signalled), which implies that you can't signal your counterfactual precommitment.

Comment author: wedrifid 10 March 2010 09:09:58PM 0 points [-]

See the other reply -- the edited code is not an interesting fact. The communicated code must be the original one

No, it need not be the original code. In fact, if the Baron really wants to, he can destroy all copies of the original code. This is an actual universe, not a counterfactual one. The agent that is the baron is made up of quarks, which can be moved about using the normal laws of physics.

Comment author: Vladimir_Nesov 10 March 2010 09:40:42PM *  0 points [-]

It need not be the original code, but if we are interested in the original code, then we read the communicated data as evidence about the original code -- for what it's worth. It may well be in the Baron's interest to give info about his code -- since otherwise, what distinguishes him from a random jumble of wires? In that case the outcome may not be appropriate to his skills.

Comment author: prase 11 March 2010 09:09:27AM 0 points [-]

By precommitting I understand starting to be aware of the fact that my source code will do the particular thing with certainty. Nobody knows his source code completely, and even knowing the source code doesn't imply knowing all its outputs immediately. So, what I wanted to say is that when making the threat, the baron must know that he will certainly act the way he announces (this is the precommitment), and the countess has to know this fact about the baron (this is the signalling part).

Time matters because the baron has to calculate his counterfactual actions (i.e. partly simulate himself) before he can precommit in the sense I understand the word.