Will_Newsome comments on Open Thread September, Part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Are you implying that there is an irrational focus on cooperation? I could see how this claim could be made about Eliezer or Drescher, but less so about Nesov or Wei. It's less a fixation on the aesthetics of the shiny idea of cooperation than the realization that if cooperation yields the best results, our decision theory should probably cooperate. It's not so much accommodating cooperation or acausal interaction as capitalizing on them. If cooperation is impossible in practice, then the decision theory should reflect that. Currently it seems incredibly difficult to find or define isomorphisms between the computations an agent would consider to be itself, though people are working on it with interesting approaches. It's the ideal we'd like our decision theory to reach.
Also, I don't believe that a timeless ontology is necessary -- at least, I'm not sure it actually changes anything, decision-theoretically speaking. At any rate, Wei Dai's decision-theory work (and, I think, others') is being done under the assumption that the agent in question will be operating in a Tegmark multiverse (or, generally, some kind of ensemble universe), and the notion of time doesn't really make sense in that case, even if it does make sense in 'our' multiverse (though I don't know what postulating this 'time' thing gets you, really).
Acausal trade is just a way to capitalize on comparative advantage over vast distances... it's a brilliant and frighteningly logical idea. (I believe Carl Shulman thought it up? I'm rather jealous, at any rate.) Why do you think acausal trade wouldn't be a good idea, decision theoretically speaking? Or why is the concept confused, metaphysically speaking? Practically speaking, the combinatorial explosion of potential trading partners is difficult to work with, but if a human can choose between branches in the combinatorial explosion of a multiverse via basic planning on stupid faulty hardware like brains, an AGI might very well be able to run similar simulations of trading partners in an ensemble universe (or just limit the domain, of course). (I think Vladimir Nesov came up with this analogy, or something like it.)
I don't know what's going on, except that peculiar statements are being made, even about something as mundane as voting.
That's what ordinary decision theory does. The one example of a deficiency that I've seen is Newcomb's problem, which is not really a cooperation problem. Instead, I see people making magical statements about the consequences of an individual decision (Nesov, quoted above) or people wanting to explain mundane examples of coordination in exotic ways (Alan Crowe, in the other thread I linked).
Empirical adequacy? Talking about "time" strays a little from the real issue, which is the denial of change (or "becoming" or "flow"). It ends up being yet another aspect of reality filed under "subjectivity" and "how things feel". You postulate a timeless reality, and then attached to various parts of that are little illusions or feelings of time passing. This is not plausible as an ultimate picture. In fact, it's surely an inversion of reality: fundamentally, you do change; you are "becoming", you aren't just "being"; the timeless reality is the imagined thing, a way to spatialize or logicize temporal relations so that a whole history can be grasped at once by mental modalities which specialize in static gestalts.
We need a little more basic conceptual and ontological progress before we can re-integrate the true nature of time with our physical models.
To a first approximation, for every possible world where a simulation of you exists in an environment where your thought or action produces an outcome X, there is another possible world where it produces the opposite effect. Also, for every world where a simulation of you exists, there are many more worlds where the simulated entity differs from you in every way imaginable, minor and major. And what you do here has zero causal effect on any other possible world.
The fallacy may be to equate yourself with the equivalence class of isomorphic computations, rather than seeing yourself to be a member of that class (an instantiation of an abstract computation, if you like). By incorrectly identifying yourself with the schema rather than the instantiation, you imagine that your decision here is somehow responsible for your copy's decision there, and so on. But that's not how it is, and the fact that someone simulating you in another world can switch at any time to simulating a variant who is no longer you highlights the pragmatic error as well. The people running the simulation have all the power. If they don't like the deal you're offering them, they'll switch to another you who is more accommodating.
Another illusion which may be at work here is the desire to believe that the simulation is the thing itself - that your simulators in the other world really are looking at you, and vice versa. But I find it hard to refute the thinking here, because it's so fuzzy and the details are probably different for different individuals. I actually had ideas like this myself at various times in the distant past, so it may be a natural thing to think of, when you get into the idea of multiple worlds and simulations.
Do you know the expression, folie à deux? It means a shared madness. I can imagine acausal trade (or other acausal exchanges) working in that way. That is, there might be two entities in different worlds who really do have a mutually consistent relationship, in which they are simulating each other and acting on the basis of the simulation. But they would have to share the same eccentric value system or the same logical errors. Precisely because it's an acausal relationship, there is no way for either party to genuinely enforce anything, threaten anything, or guarantee anything, and if you dare to look into the possible worlds nearby the one you're fixated on, you will find variations of your partner in acausal trade doing many wacky things which break the contract, or getting rewarded for doing so, or getting punished for fulfilling it.
Many problems with your comment.
1) Why do you pull subjective experience into the discussion at all? I view decision theory as a math problem, like game theory. Unfeeling robots can use it.
2) How can an "instantiation" of a class of isomorphic computations tell "itself" from all the other instantiations?
3) The opposing effects in all possible worlds don't have to balance out, especially after we weigh them by our utility function on the worlds. (This is the idea of "probability as degree of caring", I'm a little skeptical about it but it does seem to work in toy problems.)
4) The most important part. We already have programs that cooperate with each other in the Prisoner's Dilemma while being impossible to cheat, and all sorts of other shiny little mathematical results. How can your philosophical objections break them?
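The kind of "uncheatable" cooperator mentioned here can be sketched in a few lines. This is a crude, hypothetical source-matching version -- cooperate only with an exact copy of yourself -- rather than the actual published results, which use provability logic to cooperate with provably cooperating non-identical programs:

```python
# Toy sketch of a cooperator that cannot be cheated: it cooperates
# only when the opponent's source code is identical to its own.
# (All names and payoff numbers here are illustrative, not from any
# particular paper.)
PAYOFFS = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
           ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def clique_strategy(my_src, their_src):
    # Mutual cooperation between copies; defection against everyone else.
    return "C" if my_src == their_src else "D"

def defect_strategy(my_src, their_src):
    return "D"

def play(strat1, src1, strat2, src2):
    # Each program reads both "source codes" before moving.
    return PAYOFFS[(strat1(src1, src2), strat2(src2, src1))]

# Two copies cooperate; a defector gains nothing against the clique bot.
print(play(clique_strategy, "clique", clique_strategy, "clique"))  # (2, 2)
print(play(clique_strategy, "clique", defect_strategy, "defect"))  # (1, 1)
```

The point of the toy version is just that mutual cooperation and immunity to exploitation can coexist; the "shiny little mathematical results" strengthen this to programs that are not literal copies.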
If you're referring to the discussion about time, that's a digression that doesn't involve decision theory.
It's a logical distinction, not an empirical one. Whoever you are, you are someone in particular, not someone in general.
I disagree with "probability as degree of caring", but your main point is correct independently of that. However, it is not enough just to say that the effects "don't have to balance out". The nearby possible worlds definitely do contain all sorts of variations on the trading agents for whom the logic of the trade does not work or is interpreted differently. But it seems like no-one has even thought about this aspect of the situation.
Are these programs and results in conflict with ordinary decision theory? That's the issue here - whether we need an alternative to "causal decision theory".
Can't parse.
Yes, UDT and CDT act differently in Newcomb's Problem, Parfit's Hitchhiker, symmetric PD and the like. (We currently formalize such problems along these lines.) But that seems to be obvious, maybe you were asking about something else?
Even if there are infinitely many subjective copies of you in the multiverse, it's a matter of logic that this particular you is just one of them. You don't get to say "I am all of them". You-in-this-world are only in this world, by definition, even if you don't know exactly which world this is.
Parfit's Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won't keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.
I also don't believe that a new decision theory will consistently do better than CDT on PD. If you cooperate "too much", if you have biases towards cooperation, you will be exploited in other settings. It's a sort of no-free-lunch principle.
It should, but it doesn't. If you get a ride to town, CDT tells you to break your promise and stiff the guy. So in order to sincerely commit yourself, you'd want to modify yourself to become an agent that follows CDT in all cases except when deciding whether to pay the guy in the end. So, strictly speaking, you aren't a CDT agent anymore. What we want is a decision theory that won't try to become something else.
CDT always defects in one-shot PD, right? But it's obvious that you should cooperate with an exact copy of yourself. So CDT plus cooperating with exact copies of yourself is strictly superior to CDT in PD.
I consider it debatable whether these amendments to naive CDT - CDT plus keeping a commitment, CDT plus cooperating with yourself - really constitute a new decision theory. They arise from reasoning about the situation just a little further, rather than importing a whole new method of thought. Do TDT or UDT have a fundamentally different starting point to CDT?
Well, I'm not sure what you're asking here. The problem that needs solving is this: We don't have a mathematical formalism that tells us what to do and which also satisfies a bunch of criteria (like one-boxing on Newcomb's problem, etc.) which attempt to capture the idea that "a good decision theory should win".
When we criticize classical CDT, we are actually criticizing the piece of math that can be translated as "do the thing that, if I-here-now did it, would cause the best possible situation to come about". There are lots of problems with this. "Reasoning about the situation" ought to go into formulating a new piece of math that has no problems. All we want is this new piece of math.
I'm only just learning that (apparently) the standard rival of causal decision theory is "evidential decision theory". So is that the original acausal decision theory, with TDT and UDT just latecomers local to LW? As you can see I am dangerously underinformed about the preexisting theoretical landscape, but I will nonetheless state my impressions.
If I think about a "decision theory" appropriate for real-world decisions, I think about something like expected-utility maximization. There are a number of problems specific to the adoption of an EUM framework. For example, you have to establish a total order on all possible states of the world, and so you want to be sure that the utility function you construct genuinely represents your preferences. But assuming that this has been accomplished, the problem of actually maximizing expected utility turns into a problem of computation, modeling an uncertain world, and so forth.
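The expected-utility framework being described fits in a few lines; all the probabilities and utilities below are made-up placeholders, just to show the shape of the computation:

```python
# Minimal expected-utility maximization: each action maps possible
# states to (probability, utility) pairs; pick the action whose
# probability-weighted utility is highest. Numbers are hypothetical.
outcomes = {
    "take_umbrella": {"rain": (0.3, 5.0), "sun": (0.7, 8.0)},
    "no_umbrella":   {"rain": (0.3, 0.0), "sun": (0.7, 10.0)},
}

def expected_utility(action):
    return sum(p * u for p, u in outcomes[action].values())

best = max(outcomes, key=expected_utility)
print(best)  # take_umbrella (EU of roughly 7.1 vs 7.0)
```

Everything contentious in the CDT-vs-EDT-vs-TDT debates lives in where the probabilities come from -- causal intervention, conditioning on your own action, or something else -- not in this maximization step itself.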
The problems showing up in these debates about causal vs evidential and causal vs acausal seem to have a very different character. If I am making a practical decision, I expect both to use causal thinking and to rely on evidence. CDT vs EDT then sounds like a debate about which indispensable thing I can dispense with.
Another thing I notice is that the thought experiments which supposedly create problems for CDT all involve extremes that don't actually happen. Newcomb's problem involves a superbeing with a perfect capacity to predict your choice, Parfit's Hitchhiker is picked up by a mind reader who absolutely knows whether you will keep a promise or not, PD against your copy assumes that you and your copy will knowably make exactly the same choice. (At least this last thought experiment is realizable, in miniature, with simple computer programs.) What happens to these problems if you remove the absolutism?
Suppose Omega or Parfit's mindreader is right only 99% of the time. Suppose your copy only makes the same choice as you do 99% of the time. It seems like a practically relevant decision theory (whether or not you call it CDT) should be able to deal with such situations, because they are only a variation on the usual situation in reality, where you don't have paranormally assured 100% knowledge of other agents, and where everything is a little inferential and a little uncertain. It seems that, if you want to think about these matters, first you should see how your decision theory deals with the "99% case", and then you should "take the limit" to the 100% case which defines the traditional thought experiment, and you should see if the recommended decisions vary continuously or discontinuously.
Err... the 'C'? 'Causal'.
Only settings that directly reward stupidity (a capricious Omega, etc). A sane DT will cooperate whenever that is most likely to give you the best result, but not a single time more.
It is even possible to construct (completely contrived) situations in which TDT will defect while CDT will cooperate. There isn't an inherent bias in TDT itself (just in some of its proponents).
Can you give an example? (situation where CDT cooperates but TDT defects)
Do you mean for PD variants?
I don't know what your method is for determining what cooperation maps to for the general case, but I believe this non-PD example works: costly punishment. Do you punish a wrongdoer in a case where the costs of administering the punishment exceed the benefits (including savings from future deterrence of others), and there is no other punishment option?
I claim the following:
1) Defection -> punish
2) Cooperation -> not punish
3) CDT reasons that punishing will cause lower utility on net, so it does not punish.
4) TDT reasons that "If this algorithm did not output 'punish', the probability of this crime having happened would be higher; thus, for the action 'not punish', the crime's badness carries a higher weighting than it does for the action 'punish'." (note: does not necessarily imply punish)
5) There exist values for the crime's badness, punishment costs, and criminal response to expected punishment for which TDT punishes, while CDT always doesn't.
6) In cases where TDT differs from CDT, the former has the higher EU.
Naturally, you can save CDT by positing a utility function that values punishing of wrongdoers ("sense of justice"), but we're assuming the UF is fixed -- changing it is cheating.
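Claim (5) only needs one set of numbers to go through. Here is a made-up parameterization (all values hypothetical) under which the punishing policy has the higher expected utility, even though punishing is a pure net loss once the crime has already happened:

```python
# Hypothetical numbers for claim (5): compare the policies "always
# punish" and "never punish", assuming would-be criminals can predict
# which policy is in force (that is the deterrence channel TDT uses).
D = 100.0        # badness of the crime
c = 10.0         # cost of administering the punishment
q_punish = 0.1   # crime probability when the policy is "punish"
q_lenient = 0.9  # crime probability when the policy is "not punish"

# Crime happens rarely under deterrence, but each crime also costs c.
eu_punish_policy = -q_punish * (D + c)
# Crime happens often, with no punishment cost.
eu_lenient_policy = -q_lenient * D

print(eu_punish_policy, eu_lenient_policy)  # roughly -11 vs -90
# CDT, deciding only after the crime has occurred, compares -c to 0 and
# never punishes; the policy-level comparison above favors punishing.
```

Claim (6) then just says that whenever the two theories disagree, it is because the deterrence term dominates, as it does with these numbers.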
What do you think of this example?
Not specifically. I'm just seeking general enlightenment.
It's bringing the features of TDT into better view for me. There's this Greg Egan story where you have people whose brains were forcibly modified so as to make them slaves to a cause, and they rediscover autonomy by first reasoning that, because of the superhuman loyalty to the cause which the brain modification gives them, they are more reliable adherents of the cause than the nominal masters who enslaved them, and from there they proceed to reestablish the ability to set their own goals. TDT reminds me of that.