What Program Are You?
I've been trying for a while to make sense of the various alternate decision theories discussed here at LW, and have kept quiet until I thought I understood something well enough to make a clear contribution. Here goes.
You simply cannot reason about what to do by referring to what program you run, and considering the other instances of that program, for the simple reason that there is no unique program corresponding to any physical object.
Yes, you can think of many physical objects O as running a program P on data D, decomposing the object as O = <P,D>, but there are many, many ways to make this decomposition. At one extreme you can think of every physical object as running exactly the same program, i.e., the laws of physics, with its data being its particular arrangement of particles and fields. At the other extreme, one can think of each distinct physical state as a distinct program, with an empty, unused data structure. In between lies an astronomical range of other ways to break you into your program P and your data D.
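For concreteness, here is a minimal Python sketch of the point (all names are my own illustrations, not anything from the decision-theory literature): one observable behavior, two equally valid (program, data) decompositions.

```python
# A minimal sketch: the same observable behavior admits many
# (program, data) decompositions O = <P, D>, so "the program an
# object runs" is not uniquely defined by the object itself.

def behavior(x):
    # The physical object's observable input-output behavior.
    return x * x + 1

# Decomposition 1: a generic interpreter (think "the laws of physics")
# plus a data structure describing this object's particular arrangement.
def interpreter(data, x):
    a, b = data
    return a * x * x + b

arrangement = (1, 1)

# Decomposition 2: the whole behavior baked into the program itself,
# with an empty, unused data structure.
def specific_program(_data, x):
    return x * x + 1

for x in range(5):
    assert behavior(x) == interpreter(arrangement, x) == specific_program((), x)
```

Both decompositions reproduce the object exactly, yet they disagree about what "the program" is, and so about which other objects count as "instances of the same program."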
Eliezer's descriptions of his "Timeless Decision Theory", however, often refer to "the computation" as distinguished from "its input" in this "instantiation", as if there were some unique way to divide a physical state into these two components. For example:
The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.
The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.
Timeless decision theory, in which the (Gödelian diagonal) expected utility formula is written as follows: Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe)) ... which is why TDT one-boxes on Newcomb's Problem - both your current self's physical act, and Omega's physical act in the past, are logical-causal descendants of the computation, and are recalculated accordingly inside the counterfactual. ... Timeless decision theory can state very definitely how it treats the various facts, within the interior of its expected utility calculation. It does not update any physical or logical parent of the logical output - rather, it conditions on the initial state of the computation, in order to screen off outside influences; then no further inferences about them are made.
These summaries give the strong impression that one cannot use this decision theory to figure out what to decide until one has first decomposed one's physical state into one's "computation" as distinguished from one's "initial state" and the follow-up data structures eventually leading to an "output." And since there are many, many ways to make this decomposition, this decision theory can recommend many, many different decisions.
The advice to "choose as though controlling the logical output of the abstract computation you implement" might have you choose as if you controlled the actions of all physical objects (if you view the laws of physics as your program), or as if you controlled only the actions of the particular physical state that you are (if every distinct physical state is a different program).
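To make the quoted formula concrete, here is a hedged toy rendering in Python; it is my own construction, not Eliezer's implementation, and it simply takes the counterfactual probability P(this computation yields A []-> O) as a given input. Note that even this sketch must be handed "this computation" as a primitive, which is exactly the decomposition the theory leaves unspecified.

```python
# A toy rendering of the quoted formula (my construction, not TDT's
# actual machinery): argmax over actions A of the sum over outcomes O
# of Utility(O) * P(this computation yields A []-> O | rest of universe).

def tdt_choice(actions, outcomes, utility, p_counterfactual):
    # p_counterfactual(a, o): probability of outcome o in the
    # counterfactual where this computation's logical output is a.
    def expected_utility(a):
        return sum(utility(o) * p_counterfactual(a, o) for o in outcomes)
    return max(actions, key=expected_utility)

# Toy Newcomb's Problem: Omega's past prediction is a logical descendant
# of the same computation, so it is recomputed inside the counterfactual.
PAYOFF = {("one-box", "full"): 1_000_000, ("one-box", "empty"): 0,
          ("two-box", "full"): 1_001_000, ("two-box", "empty"): 1_000}

def p_cf(action, outcome):
    act, box = outcome
    if act != action:
        return 0.0
    # A perfect predictor fills the box iff the computation one-boxes.
    return 1.0 if (box == "full") == (action == "one-box") else 0.0

assert tdt_choice(["one-box", "two-box"], list(PAYOFF),
                  lambda o: PAYOFF[o], p_cf) == "one-box"
```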
Least Signaling Activities?
I take it as obvious that signaling is an important function in many human behaviors. That is, the details of many of our behaviors make sense as a package designed to persuade others to think well of us. While we may not be conscious of this design, it seems important nonetheless. In fact, in many areas we seem to be designed to not be conscious of this influence on our behavior.
But if signaling is not equally important to all behaviors, we can sensibly ask the question: for which behaviors does signaling least influence our detailed behavior patterns? That is, for what behaviors need we be the least concerned that our detailed behaviors are designed to achieve signaling functions? For what actions can we most reasonably believe that we do them for the non-signaling reasons we usually give?
You might suggest sleep, but others are often jealous of how much sleep we get, or impressed by how little sleep we can get by on. You might suggest watching TV, but people often go out of their way to mention what TV shows they watch. The best candidate I can think of so far is masturbation, though some folks seem to brag about it as a sign of their inexhaustible libido.
So I thought to ask the many thoughtful commenters at Less Wrong: what are good candidates for our least signaling activities?
Added: My interest in this question is to look for signs of when we can better trust our conscious reasoning about what to do, when, and how. The more signaling matters, the less I can trust such reasoning, as it usually does not acknowledge the signaling influences. If there is a distinctive mental mode we enter when reasoning about how exactly to defecate, nose-pick, sleep, masturbate, and so on, this is plausibly a more honest mental mode. It would be useful to know what our most honest mental modes look like.
Rationality Toughness Tests
(Epistemic) rationality has two major components:
- Smarts: An ability to, by attending, infer truth from info under ideal circumstances.
- Toughness: An ability to limit performance degradation as circumstances worsen.
Attending takes time, energy, quiet, etc. Circumstances where human rationality degrades include when:
- We expect the truth to long remain hidden.
- The stakes are very low, or very high, to us.
- Others see our opinions, and prefer certain ones.
- The topics are where humans oft self-deceive.
It seems relatively easy to test rationality smarts: repeatedly give folks info and time to work new problems, and measure their accuracy, calibration, etc. And I have an idea for testing rationality toughness: compare performance on info-similar pairs of good/bad-circumstance problems.
For example, assume people are better at evaluating whether a spouse is cheating when considering an acquaintance in their social circle, relative to a stranger or their own spouse. If so, we could pose them a pair of problems with very similar info structure, one about an easy case (the acquaintance) and one about a hard case (their own spouse). The closeness of their responses in these two cases would then be a measure of their rationality toughness.
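As a concrete (and entirely hypothetical) version of that measure, here is a minimal sketch; the pairing scheme and the simple mean-degradation score are my own assumptions, not anything validated.

```python
# A minimal sketch (my construction): measure rationality "toughness"
# as how little accuracy degrades from easy-circumstance problems to
# matched hard-circumstance problems with similar info structure.

def toughness(matched_pairs):
    # matched_pairs: list of (easy_accuracy, hard_accuracy) in [0, 1].
    # Returns mean degradation; values near 0 indicate more toughness.
    drops = [easy - hard for easy, hard in matched_pairs]
    return sum(drops) / len(drops)

# Example: this subject loses about 0.15 accuracy on average under
# hard circumstances (e.g., judging their own spouse, not an acquaintance).
print(toughness([(0.9, 0.7), (0.8, 0.7), (0.85, 0.70)]))  # ~0.15
```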
Of course this test may fail if the similarity is too obvious, or the pair are asked too closely in time. But maybe we don't even need to ask the same person the two questions; perhaps we could usefully compare someone's answer on a hard question to answers from a pool of similar people on matched easy questions.
While I haven't thought this through, it already suggests a training technique: consider matched hard/easy circumstance problems and compare your answers, separated by enough time that you forget most of your previous analysis.
Most Rationalists Are Elsewhere
Most healthy intellectual blogs/forums participate in conversations among larger communities of blogs and forums. Rather than just "preaching to a choir" of readers, such blogs often quote and respond to posts on other blogs. Such responses sometimes support, and sometimes criticize, but either way can contribute to a healthy conversation.
If folks at Less Wrong saw themselves as a part of a larger community of rationalists, they would realize that most rationalist authors and readers are not at Less Wrong. To participate in a healthy conversation among the wider community of rationalists, they would often respond to posts at other sites, and expect other sites to respond often to them. In contrast, an insular group defined by something other than its rationality would be internally focused, rarely participating in such larger conversations.
Today at Overcoming Bias I respond to a post by Eliezer here at Less Wrong. Though I post occasionally here at Less Wrong, I will continue to post primarily at Overcoming Bias. I consider myself part of a larger rationalist community, and will continue to riff off relevant posts here and elsewhere. I hope you will continue to see me as a part of your relevant world.
I worry a little that Less Wrong's karma incentives may encourage an inward focus, since karma is so far scored only for activity on this site.
Rational Me or We?
Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment. If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense. But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts. Similarly, while "survivalists" plan and train to stay warm, dry, and fed given worst case assumptions about the world around them, most people achieve these goals by participating in a modern economy.
The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info and assuming the worst about other folks. In this context, a good rationality test is a publicly-visible personal test, applied to your personal beliefs when you are isolated from others' assistance and info.
I'm much more interested in how we can join together to believe truth, and it actually seems easier to design institutions which achieve this end than to design institutions which test individuals' isolated general tendencies to discern truth. For example, with subsidized prediction markets, we can each specialize in the topics where we contribute best, relying on the market consensus on all other topics. We don't each need to train to identify and fix every possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it.
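To show what "subsidized" can mean here, below is a hedged sketch of one standard mechanism, a logarithmic market scoring rule market maker; the mechanism choice and parameter values are my own illustration, not something specified in the post.

```python
import math

# A hedged sketch: a subsidized prediction market via a logarithmic
# market scoring rule. The liquidity parameter b bounds the sponsor's
# worst-case subsidy at b * log(number of outcomes).

def lmsr_cost(quantities, b=100.0):
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

# A specialist who moves outstanding shares from q_old to q_new pays the
# cost difference, shifting the consensus probabilities for everyone else.
q_old, q_new = [0.0, 0.0], [50.0, 0.0]
print(lmsr_prices(q_old))                   # [0.5, 0.5]
print(lmsr_prices(q_new))                   # ~[0.62, 0.38]
print(lmsr_cost(q_new) - lmsr_cost(q_old))  # trader's payment, ~28.1
```

The design point: each trader only needs to be right where they specialize, while the posted prices serve as the shared consensus everyone else can rely on.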
Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage. But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design and field institutions which give each person better incentives to update a common consensus.
The Costs of Rationality
The word "rational" is overloaded with associations, so let me be clear: to me [here], more "rational" means better believing what is true, given one's limited info and analysis resources.
Rationality certainly can have instrumental advantages. There are plenty of situations where being more rational helps one achieve a wide range of goals. In those situations, "winners", i.e., those who better achieve their goals, should tend to be more rational. In such cases, we might even estimate someone's rationality by looking at his or her "residual" belief-mediated success, i.e., the success left over after explaining it via other observable factors.
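One crude way to make that "residual" estimate concrete: regress success on the other observable factors and treat what is left over as belief-mediated. A hedged numpy sketch, where the linear model and the choice of factors are my assumptions:

```python
import numpy as np

# A hedged sketch (my construction): regress success on observable
# non-belief factors and take the residual as a crude proxy for
# belief-mediated success, and hence for rationality.
def residual_success(success, observables):
    # success: (n,) achieved outcomes; observables: (n, k) factors
    # such as wealth or connections, with an intercept column added.
    X = np.column_stack([np.ones(len(success)), observables])
    coef, *_ = np.linalg.lstsq(X, success, rcond=None)
    return success - X @ coef

# People with large positive residuals succeed beyond what their
# observable circumstances would predict.
success = np.array([3.0, 5.0, 4.0, 8.0])
observables = np.array([[1.0], [2.0], [2.0], [3.0]])
print(residual_success(success, observables))
```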
But note: we humans were designed in many ways not to be rational, because believing the truth often got in the way of achieving goals evolution had for us. So it is important for everyone who intends to seek truth to clearly understand: rationality has costs, not only in time and effort to achieve it, but also in conflicts with other common goals.
Yes, rationality might help you win that game or argument, get promoted, or win her heart. Or more rationality for you might hinder those outcomes. If what you really want is love, respect, beauty, inspiration, meaning, satisfaction, or success, as commonly understood, we just cannot assure you that rationality is your best approach toward those ends. In fact we often know it is not.
The truth may well be messy, ugly, or dispiriting; knowing it may make you less popular, loved, or successful. These are actually pretty likely outcomes in many identifiable situations. You may think you want to know the truth no matter what, but how sure can you really be of that? Maybe you just like the heroic image of someone who wants the truth no matter what; or maybe you only really want to know the truth if it is the bright shining glory you hope for.
Be warned; the truth just is what it is. If just knowing the truth is not reward enough, perhaps you'd be better off not knowing. Before you join us in this quixotic quest, ask yourself: do you really want to be generally rational, on all topics? Or might you be better off limiting your rationality to the usual practical topics where rationality is respected and welcomed?
Test Your Rationality
So you think you want to be rational, to believe what is true even when sirens tempt you? Great, get to work; there's lots you can do. Do you want to justifiably believe that you are more rational than others, smugly knowing your beliefs are more accurate? Hold on; this is hard.
Humans nearly universally find excuses to believe that they are more correct than others, at least on the important things. They point to others' incredible beliefs, to biases afflicting others, and to estimation tasks where they are especially skilled. But they forget that almost everyone can point to such things.
But shouldn't you get more rationality credit if you spend more time studying common biases, statistical techniques, and the like? Well, this would be good evidence of your rationality if you were in fact pretty rational about your rationality, i.e., if you knew that when you read or discussed such issues your mind would then systematically, broadly, and reasonably incorporate those insights into your reasoning processes.
But what if your mind is far from rational? What if your mind is likely to just go through the motions of studying rationality to allow itself to smugly believe it is more accurate, or to bond you more closely to your social allies?
It seems to me that if you are serious about actually being rational, rather than just believing in your rationality or joining a group that thinks itself rational, you should try hard and often to test your rationality. But how can you do that?
To test the rationality of your beliefs, you could sometimes declare beliefs, and later score those declarations via tests where high-scoring beliefs tend to be more rational. Better tests are those whose scores are more tightly and reliably correlated with rationality. So, what are good rationality tests?
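One standard candidate, offered as a hedged example rather than as the post's answer: score declared probabilistic beliefs with a proper scoring rule such as the Brier score, under which honest and accurate reports earn the best scores.

```python
# A minimal sketch: the Brier score is a proper scoring rule, so
# honestly reporting your true probabilities minimizes your expected
# score, and lower scores tend to indicate more accurate beliefs.

def brier_score(forecasts, outcomes):
    # forecasts: declared probabilities in [0, 1]; outcomes: 0/1.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: reasonably accurate declared beliefs score well here.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ~0.047
```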