Followup to: Newcomb's Problem and Regret of Rationality, Towards a New Decision Theory
Wei Dai asked:
"Why didn't you mention earlier that your timeless decision theory mainly had to do with logical uncertainty? It would have saved people a lot of time trying to guess what you were talking about."
...
All right, fine, here's a fast summary of the most important ingredients that go into my "timeless decision theory". This isn't so much an explanation of TDT, as a list of starting ideas that you could use to recreate TDT given sufficient background knowledge. It seems to me that this sort of thing really takes a mini-book, but perhaps I shall be proven wrong.
The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.
The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.
To obtain the background knowledge if you don't already have it, the two main things you'd need to study are the classical debates over Newcomblike problems, and the Judea Pearl synthesis of causality. Canonical sources would be "Paradoxes of Rationality and Cooperation" for Newcomblike problems and "Causality" for causality.
For those of you who don't condescend to buy physical books, Marion Ledwig's thesis on Newcomb's Problem is a good summary of the existing attempts at decision theories, evidential decision theory and causal decision theory. You need to know that causal decision theories two-box on Newcomb's Problem (which loses) and that evidential decision theories refrain from smoking on the smoking lesion problem (which is even crazier). You need to know that the expected utility formula is actually over a counterfactual on our actions, rather than an ordinary probability update on our actions.
I'm not sure what you'd use for online reading on causality. Mainly you need to know:
- That a causal graph factorizes a correlated probability distribution into a deterministic mechanism of chained functions plus a set of uncorrelated unknowns as background factors.
- Standard ideas about "screening off" variables (D-separation).
- The standard way of computing counterfactuals (through surgery on causal graphs). A toy sketch of these three points follows below.
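For readers who want something concrete, here is a minimal Python sketch (my own illustration, not part of the original post) of those points: a tiny causal model factored into deterministic mechanisms plus independent background factors, with a counterfactual computed by surgery on one node and contrasted with ordinary conditioning.

```python
import itertools

# A tiny structural causal model: each variable is a deterministic function
# of its parents plus an independent background factor.
#   rain      = u_rain
#   sprinkler = (not rain) and u_timer     # the timer only fires on dry days
#   wet       = rain or sprinkler

def model(u_rain, u_timer, do_sprinkler=None):
    rain = u_rain
    sprinkler = (not rain) and u_timer
    if do_sprinkler is not None:        # surgery: sever the incoming edges
        sprinkler = do_sprinkler        # and force the node to a fixed value
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Independent priors over the background factors (the uncorrelated unknowns).
P_U = {(u_rain, u_timer): (0.3 if u_rain else 0.7) * 0.5
       for u_rain, u_timer in itertools.product([True, False], repeat=2)}

def prob(event, do_sprinkler=None):
    return sum(p for u, p in P_U.items()
               if event(*model(*u, do_sprinkler=do_sprinkler)))

# Ordinary conditioning: seeing the sprinkler on is evidence it did not rain.
p_cond = prob(lambda r, s, w: r and s) / prob(lambda r, s, w: s)
# Counterfactual surgery: forcing the sprinkler on says nothing about rain.
p_do = prob(lambda r, s, w: r, do_sprinkler=True)

print(f"P(rain | sprinkler on)     = {p_cond:.2f}")  # 0.00
print(f"P(rain | do(sprinkler on)) = {p_do:.2f}")    # 0.30, the prior
```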
It will be helpful to have the standard Less Wrong background of defining rationality in terms of processes that systematically discover truths or achieve preferred outcomes, rather than processes that sound reasonable; understanding that you are embedded within physics; understanding that your philosophical intuitions are how some particular cognitive algorithm feels from inside; and so on.
The first lemma is that a factorized probability distribution which includes logical uncertainty - uncertainty about the unknown output of known computations - appears to need cause-like nodes corresponding to this uncertainty.
Suppose I have a calculator on Mars and a calculator on Venus. Both calculators are set to compute 123 * 456. Since you know their exact initial conditions - perhaps even their exact initial physical state - a standard reading of the causal graph would insist that any uncertainties we have about the output of the two calculators, should be uncorrelated. (By standard D-separation; if you have observed all the ancestors of two nodes, but have not observed any common descendants, the two nodes should be independent.) However, if I tell you that the calculator at Mars flashes "56,088" on its LED display screen, you will conclude that the Venus calculator's display is also flashing "56,088". (And you will conclude this before any ray of light could communicate between the two events, too.)
If I was giving a long exposition I would go on about how if you have two envelopes originating on Earth and one goes to Mars and one goes to Venus, your conclusion about the one on Venus from observing the one on Mars does not of course indicate a faster-than-light physical event, but standard ideas about D-separation indicate that completely observing the initial state of the calculators ought to screen off any remaining uncertainty we have about their causal descendants so that the descendant nodes are uncorrelated, and the fact that they're still correlated indicates that there is a common unobserved factor, and this is our logical uncertainty about the result of the abstract computation. I would also talk for a bit about how if there's a small random factor in the transistors, and we saw three calculators, and two showed 56,088 and one showed 56,086, we would probably treat these as likelihood messages going up from nodes descending from the "Platonic" node standing for the ideal result of the computation - in short, it looks like our uncertainty about the unknown logical results of known computations, really does behave like a standard causal node from which the physical results descend as child nodes.
But this is a short exposition, so you can fill in that sort of thing yourself, if you like.
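For readers who do want one piece filled in, here is a toy numerical version of the two-calculator example (my own numbers, chosen purely for illustration): a latent "Platonic" node for the ideal result of the computation, with each physical display a noisy child of that node.

```python
# Toy model of the Mars/Venus calculators: a latent "Platonic" node for the
# ideal result of 123 * 456, with each calculator's display as a noisy child.
# The candidate values and the 1% glitch rate are illustrative assumptions.

candidates = [56086, 56088, 56090]            # hypotheses about the ideal result
prior = {c: 1 / len(candidates) for c in candidates}
ERR = 0.01                                     # per-calculator transistor glitch rate

def p_display(display, ideal):
    """P(a calculator shows `display` | the ideal result is `ideal`)."""
    return (1 - ERR) if display == ideal else ERR / (len(candidates) - 1)

# Observe: the Mars calculator flashes 56088.  Update the Platonic node.
posterior = {c: prior[c] * p_display(56088, c) for c in candidates}
z = sum(posterior.values())
posterior = {c: p / z for c, p in posterior.items()}

# Predict the Venus display by propagating back down through the shared node.
p_venus = sum(posterior[c] * p_display(56088, c) for c in candidates)
print(f"P(Venus shows 56088 | Mars shows 56088) = {p_venus:.3f}")   # ~0.980

# Without the shared logical node, the two displays would be D-separated by
# their fully observed physical ancestors, and observing Mars would tell us
# nothing about Venus.
```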
Having realized that our causal graphs contain nodes corresponding to logical uncertainties / the ideal result of Platonic computations, we next construe the counterfactuals of our expected utility formula to be counterfactuals over the logical result of the abstract computation corresponding to the expected utility calculation, rather than counterfactuals over any particular physical node.
You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation.
Formally you'd use a Godelian diagonal to write:
Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe))
(where P(X=x []-> Y | Z) means the probability, computed by counterfactual surgery on the factored causal graph P, that surgically setting node X to x leads to Y, given Z)
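The same formula in more conventional notation (my transcription; the box arrow stands for the counterfactual conditional):

$$\operatorname*{argmax}_{A \,\in\, \text{Actions}} \;\sum_{O \,\in\, \text{Outcomes}} U(O)\cdot P\big(\text{this computation yields } A \;\square\!\rightarrow\; O \mid \text{rest of universe}\big)$$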
Setting this up correctly (in accordance with standard constraints on causal graphs, like noncircularity) will solve (yield reflectively consistent, epistemically intuitive, systematically winning answers to) 95% of the Newcomblike problems in the literature I've seen, including Newcomb's Problem and other problems causing CDT to lose, the Smoking Lesion and other problems causing EDT to fail, Parfit's Hitchhiker which causes both CDT and EDT to lose, etc.
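To make the first of those concrete, here is a toy sketch (my own illustration, not Eliezer's formalism; a perfectly accurate predictor and dollar payoffs standing in for utility are simplifying assumptions) of the difference between surgery on the logical output and surgery on the physical action in Newcomb's Problem:

```python
# Newcomb's Problem with surgery on the logical node "output of my decision
# computation".  Both the predictor's box-filling and my physical action are
# children of that node.  Box A always holds $1,000; box B holds $1,000,000
# iff the computation outputs "one-box".  Predictor accuracy is taken as 1.0
# and dollars as utility, for simplicity.

ACTIONS = ["one-box", "two-box"]

def tdt_outcome(logical_output):
    """Surgically set the logical node; everything downstream follows."""
    box_b = 1_000_000 if logical_output == "one-box" else 0   # predictor's move
    my_action = logical_output                                # my physical move
    return box_b + (1_000 if my_action == "two-box" else 0)

def cdt_outcome(action, box_b_contents):
    """CDT-style surgery on the physical action only: box B's contents are
    treated as fixed independently of the action."""
    return box_b_contents + (1_000 if action == "two-box" else 0)

# TDT-style choice: argmax over surgeries on the logical output.
tdt_choice = max(ACTIONS, key=tdt_outcome)
print("TDT chooses", tdt_choice, "and gets", tdt_outcome(tdt_choice))   # one-box, 1000000

# CDT-style choice: whatever box B holds, two-boxing gains an extra $1,000.
for box_b in (0, 1_000_000):
    cdt_choice = max(ACTIONS, key=lambda a: cdt_outcome(a, box_b))
    print("CDT (box B =", box_b, ") chooses", cdt_choice)               # two-box
```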
Note that this does not solve the remaining open problems in TDT (though Nesov and Dai may have solved one such problem with their updateless decision theory). Also, although this theory goes into much more detail about how to compute its counterfactuals than classical CDT, there are still some visible incompletenesses when it comes to generating causal graphs that include the uncertain results of computations, computations dependent on other computations, computations uncertainly correlated to other computations, computations that reason abstractly about other computations without simulating them exactly, and so on. On the other hand, CDT just has the entire counterfactual distribution rain down on the theory as mana from heaven (e.g. James Joyce, Foundations of Causal Decision Theory), so TDT is at least an improvement; and standard classical logic and standard causal graphs offer quite a lot of pre-existing structure here. (In general, understanding the causal structure of reality is an AI-complete problem, and so in philosophical dilemmas the causal structure of the problem is implicitly given in the story description.)
Among the many other things I am skipping over:
- Some actual examples of where CDT loses and TDT wins, EDT loses and TDT wins, both lose and TDT wins, what I mean by "setting up the causal graph correctly" and some potential pitfalls to avoid, etc.
- A rather huge amount of reasoning which defines reflective consistency on a problem class; explains why reflective consistency is a rather strong desideratum for self-modifying AI; why the need to make "precommitments" is an expensive retreat to second-best and shows a lack of reflective consistency; explains why it is desirable to win and get lots of money rather than just be "reasonable" (that is, conform to pre-existing intuitions generated by a pre-existing algorithm); which notes that, considering the many pleas from people who want, but can't find, any good intermediate stage between CDT and EDT, it's a fascinating little fact that if you were rewriting your own source code, you'd rewrite it to one-box on Newcomb's Problem and smoke on the smoking lesion problem...
- ...and so, having given many considerations of desirability in a decision theory, shows that the behavior of TDT corresponds to reflective consistency on a problem class in which your payoff is determined by the type of decision you make, but not sensitive to the exact algorithm you use apart from that - that TDT is the compact way of computing this desirable behavior we have previously defined in terms of reflectively consistent systematic winning.
- Showing that classical CDT, given self-modification ability, modifies into a crippled and inelegant form of TDT.
- Using TDT to fix the non-naturalistic behavior of Pearl's version of classical causality in which we're supposed to pretend that our actions are divorced from the rest of the universe - the counterfactual surgery, written out Pearl's way, will actually give poor predictions for some problems (like someone who two-boxes on Newcomb's Problem and believes that box B has a base-rate probability of containing a million dollars, because the counterfactual surgery says that box B's contents have to be independent of the action). TDT not only gives the correct prediction, but explains why the counterfactual surgery can have the form it does - if you condition on the initial state of the computation, this should screen off all the information you could get about outside things that affect your decision; then your actual output can be further determined only by the Godel-diagonal formula written out above, permitting the formula to contain a counterfactual surgery that assumes its own output, so that the formula does not need to infinitely recurse on calling itself.
- An account of some brief ad-hoc experiments I performed on IRC to show that a majority of respondents exhibited a decision pattern best explained by TDT rather than EDT or CDT.
- A rather huge amount of exposition of what TDT decision theory actually corresponds to in terms of philosophical intuitions, especially those about "free will". For example, this is the theory I was using as hidden background when I wrote in "Causality and Moral Responsibility" that factors like education and upbringing can be thought of as determining which person makes a decision - that you rather than someone else makes a decision - but that the decision made by that particular person is up to you. This corresponds to conditioning on the known initial state of the computation, and performing the counterfactual surgery over its output. I've actually done a lot of this exposition on OB/LW without explicitly mentioning TDT, like Timeless Control and Thou Art Physics for reconciling determinism with choice (actually, effective choice requires determinism, but this confuses humans for reasons given in Possibility and Could-ness). But if you read the other parts of the solution to "free will", and then furthermore explicitly formulate TDT, then this is what utterly, finally, completely, and without even a tiny trace of confusion or dissatisfaction or a sense of lingering questions, kills off entirely the question of "free will".
- Some concluding chiding of those philosophers who blithely decided that the "rational" course of action systematically loses; that rationalists defect on the Prisoner's Dilemma and hence we need a separate concept of "social rationality"; that the "reasonable" thing to do is determined by consulting pre-existing intuitions of reasonableness, rather than first looking at which agents walk away with huge heaps of money and then working out how to do it systematically; of people who take their intuitions about free will at face value; of those who assume that counterfactuals are fixed givens raining down from the sky rather than non-observable constructs which we can construe in whatever way generates a winning decision theory; et cetera. And celebrating the fact that rationalists can cooperate with each other, vote in elections, and do many other nice things that philosophers have claimed they can't. And suggesting that perhaps next time one should extend "rationality" a bit more credit before sighing and nodding wisely about its limitations.
- In conclusion, rational agents are not incapable of cooperation, rational agents are not constantly fighting their own source code, rational agents do not go around helplessly wishing they were less rational, and finally, rational agents win.
Those of you who've read the quantum mechanics sequence can extrapolate from past experience that I'm not bluffing. But it's not clear to me that writing this book would be my best possible expenditure of the required time.
First of all, congratulations, Eliezer! That's great work. When I read your 3-line description, I thought it would never be computable. I'm glad to see you can actually test it.
Eliezer_Yudkowsky wrote on 19 August 2009 03:05:15PM:
"Rock-paper-scissors?
Negotiating to buy a car?"
I would like to begin by saying that I don't believe my own statements are True, and I suggest you don't either. I do request that you try thinking WITH them before attacking them. It's really hard to think with an idea AFTER you've attacked it. I've been told my writing sounds preachy or even fanatical. I don't say "In My Opinion" enough. Please imagine "IMO" in front of every one of my statements. Thanks!
Having more information (not incorrect "information") on the opponent's decisions is beneficial.
Let's distinguish Secret Commit & Simultaneous Effect (SCSE) from Commit First & Simultaneous Effect (CFSE) and from Act & Effect First (AEF). Those are just a few categories from a coarse categorization of board war games.
The classic gunfight at high noon is AEF (to a first approximation, not counting watching his face & guessing when his reaction time will be lengthened). The fighter who draws first has a serious advantage, the fighter who hits first has a tremendous advantage, but not certain victory. (Hollywood notwithstanding, people sometimes keep fighting after taking handgun hits, even a dozen of them.) I contend that all AEFs give advantage to the first actor. Chess is AEF.
My understanding of the Prisoner's Dilemma is that it is SCSE as presented. On this thread, it seems to have mutated into a CFSE (otherwise, there just isn't any "first", in the ordinary, inside-the-Box-Universe, timeful sense). If Prisoner A has managed to get information on Prisoner B's commitment before he commits, this has to be useful. Even if PA is a near-Omega, it can be a reality check on his Visualization of the Cosmic All. In realistic July 2009 circumstances, it identifies PB as one of the 40% of humans who choose 'cooperate' in one-shot PD. PA now has a choice whether to be an economist or a friend.
And now we get down to something fundamental. Some humans are better people than the economic definition of rationality, which " ... assume[s] that each player cares only about minimizing his or her own time in jail". Under that assumption, " ... cooperating is strictly dominated by defecting ... " even with leaked information.
"I don't care what happens to my partner in crime. I don't and I won't. You can't make me care. On the advice of my economist... " That gets both prisoners a 5-year sentence when they could have had 6 months.
That is NOT wisdom! That will make us extinct. (In My Opinion)
Now try on "an injury to one is an injury to all". Or maybe "an injury to one is a (discounted) injury to ME". We just might be able to see that the big nuclear arsenals are a BAD IDEA!
Taking that on, the payoff matrix offered by Wei Dai's Omega (19 August 2009 07:08:23AM) is now transformed into PA's Internal Payoff Matrix (IPM).
In other words, his utility function has a term for the freedom of Prisoner B. (Economists be damned! Some of us do, sometimes.)
"I'll set κ=0.3 ," Says PA (well, he is a thief). Now PA's IPM is:
Lo and behold! 'cooperate' now strictly dominates!
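(The three payoff tables in this comment did not survive the formatting; the sketch below therefore uses hypothetical stand-in payoffs, chosen only to illustrate the κ transformation, and not the numbers from Wei Dai's comment.)

```python
# Hypothetical stand-in payoffs (NOT the lost numbers from Wei Dai's comment):
# a standard PD ordering T > R > P > S, read as "bigger is better".
T, R, P, S = 10, 9, 2, 1   # temptation, reward, punishment, sucker's payoff

# External payoff matrix from Prisoner A's point of view:
# external[(A's move, B's move)] = (A's payoff, B's payoff)
external = {
    ("C", "C"): (R, R), ("C", "D"): (S, T),
    ("D", "C"): (T, S), ("D", "D"): (P, P),
}

def internal_payoff_matrix(matrix, kappa):
    """PA's Internal Payoff Matrix: his utility gains a kappa-weighted term
    for what happens to Prisoner B."""
    return {moves: mine + kappa * theirs
            for moves, (mine, theirs) in matrix.items()}

ipm = internal_payoff_matrix(external, kappa=0.3)

# Does 'cooperate' strictly dominate in the IPM?
dominates = all(ipm[("C", b)] > ipm[("D", b)] for b in ("C", "D"))
print({k: round(v, 2) for k, v in ipm.items()})
print("Cooperate strictly dominates:", dominates)   # True for these numbers
```

Whether cooperation dominates depends on both the external matrix and κ, which is the case taken up again a few paragraphs below, where neither move dominates and knowledge of the other player's commitment starts to matter.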
When over 6 billion people are affected, it doesn't take much of a κ to swing my decisions around. If I'm not working to save humanity, I must have a very low κ for each distant person unknown to me.
People say, "Human life is precious!" Show it to me in results. Show it to me in how people budget their time and money. THAT is why Friendly AI is our only hope. We will 'defect' our way into thwarting any plan that requires a lot of people to change their beliefs or actions. That sub-microscopic κ for unknown strangers is evolved-in, it's not going away. We need a program that can be carried out by a tiny number of people.
IMO.
---
Maybe I missed the point. Maybe the whole point of TDT is to derive some sort of reduced-selfishness decision norm without an ad-hoc utility function adjustment (is that what "rained down from heaven" means?). I can derive the κ needed in order to save humanity, if there were a way to propagate it through the population. I cannot derive The One True κ from absolute principles, nor have I shown a derivation of "we should save humanity". I certainly fell short of " ... looking at which agents walk away with huge heaps of money and then working out how to do it systematically ... ". I would RATHER look at which agents get their species through their singularity alive. Then, and only then, can we look at something grander than survival. I don't grok in fullness "reflective consistency", but from extinction we won't be doing a lot of reflecting on what went wrong.
IMO.
Now, back to one-shot PD and "going first". For some values of κ and some external payoff matrices (not this one), neither move in the resulting IPM strictly dominates, and having knowledge of PB's commitment actually determines whether 'cooperate' or 'defect' produces a better world in PA's internal not-quite-so-selfish world-view. Is that a disadvantage? (That's a serious, non-rhetorical question. I'm a neophyte and I may not see some things in the depths where Eliezer & Wei think.)
Now let's look at that game of chicken. Was "throw out the steering wheel" in the definition of the thought experiment? If not, that player just changed the universe-under-consideration, which is a fairly impressive move, and it makes the game an AEF, not a CFSE.
If re-engineering was included, then Driver A may complete his wheel-throwing (while in motion!) only to look up and see Driver B's steering gear on a ballistic trajectory. Each will have a few moments to reflect on "always get away with it."
If Driver A successfully defenestrates first, is Driver B at a disadvantage? Among humans, the game may be determined more by autonomic systems than by conscious computation, and B now knows that A won't be flinching away. However, B now has information and choices. One that occurs to me is to stop the car and get out. "Your move, A." A truly intelligent player (in which category I do not, alas, qualify) would think up better, or funnier, choices.
Hmmm... to even play Chicken you have to either be irrational or have a damned strange IPM. We should establish that before proceeding further.
I challenge anyone to show me a CFSE game that gives a disadvantage to the second player.
I'm not too proud to beg: I request your votes. I've got an article I'd like to post, and I need the karma.
Thanks for your time and attention.
RickJS
Saving Humanity from Homo Sapiens
08/28/2009 ~20:10 Edit: formatting ... learning formatting ... grumble ... GDSOB tab-deleter ... Fine. I'll create the HTML for tables, but this is a LOT of work for 3 simple tables ... COMMENT TOO LONG!?!? ... one last try ... now I can't quit, I'm hooked! ... NAILED that sucker! ... ~22:40 : added one more example *YAWN*
It's incomprehensible. Try debugging individual ideas first, written up more carefully.