The Anthropic Trilemma
Speaking of problems I don't know how to solve, here's one that's been gnawing at me for years.
The operation of splitting a subjective worldline seems obvious enough - the skeptical initiate can consider the Ebborians, creatures whose brains come in flat sheets and who can symmetrically divide down their thickness. The more sophisticated need merely consider a sentient computer program: stop, copy, paste, start, and what was one person has now continued on in two places. If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red or green when you wake up with 50% probability. That is, it's a known fact that different versions of you will see red, or alternatively green, and you should weight the two anticipated possibilities equally. (Consider what happens when you're flipping a quantum coin: half your measure will continue into either branch, and subjective probability will follow quantum measure for unknown reasons.)
But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?
Let's suppose that three copies get three times as much experience. (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)
Just as computer programs or brains can split, they ought to be able to merge. If we imagine a version of the Ebborian species that computes digitally, so that the brains remain synchronized so long as they go on getting the same sensory inputs, then we ought to be able to put two brains back together along the thickness, after dividing them. In the case of computer programs, we should be able to perform an operation where we compare each two bits in the program, and if they are the same, copy them, and if they are different, delete the whole program. (This seems to establish an equal causal dependency of the final program on the two original programs that went into it. E.g., if you test the causal dependency via counterfactuals, then disturbing any bit of the two originals, results in the final program being completely different (namely deleted).)
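The merge operation described above can be sketched in a few lines of code. This is a hypothetical illustration only (the function name and representation of a "program" as a byte string are my own assumptions, not anything specified in the post): compare the two images, keep the common result if they agree everywhere, and "delete the whole program" if they differ anywhere.

```python
from typing import Optional

def merge_programs(a: bytes, b: bytes) -> Optional[bytes]:
    """Merge two supposedly-identical program images.

    If every bit matches, return the shared image; if any bit
    differs, delete the whole program (return None).
    """
    if len(a) != len(b):
        return None  # a length mismatch counts as a difference
    return a if a == b else None

# Note the counterfactual dependency: flipping any single bit of
# either original changes the outcome completely (deletion), so the
# merged result depends causally on both inputs.
```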
So here's a simple algorithm for winning the lottery:
Mundane Magic
Followup to: Joy in the Merely Real, Joy in Discovery, If You Demand Magic, Magic Won't Help
As you may recall from some months earlier, I think that part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe—a universe containing no ontologically basic mental things such as souls or magic—and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.
There's an old trick for combating dukkha where you make a list of things you're grateful for, like a roof over your head.
So why not make a list of abilities you have that would be amazingly cool if they were magic, or if only a few chosen individuals had them?
For example, suppose that instead of one eye, you possessed a magical second eye embedded in your forehead. And this second eye enabled you to see into the third dimension—so that you could somehow tell how far away things were—where an ordinary eye would see only a two-dimensional shadow of the true world. Only the possessors of this ability can accurately aim the legendary distance-weapons that kill at ranges far beyond a sword, or use to their fullest potential the shells of ultrafast machinery called "cars".
"Binocular vision" would be too light a term for this ability. We'll only appreciate it once it has a properly impressive name, like Mystic Eyes of Depth Perception.
So here's a list of some of my favorite magical powers:
Wrong Questions
Followup to: Dissolving the Question, Mysterious Answers to Mysterious Questions
Where the mind cuts against reality's grain, it generates wrong questions—questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.
One good cue that you're dealing with a "wrong question" is when you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question. When it doesn't even seem possible to answer the question.
Take the Standard Definitional Dispute, for example, about the tree falling in a deserted forest. Is there any way-the-world-could-be—any state of affairs—that corresponds to the word "sound" really meaning only acoustic vibrations, or really meaning only auditory experiences?
("Why, yes," says the one, "it is the state of affairs where 'sound' means acoustic vibrations." So Taboo the word 'means', and 'represents', and all similar synonyms, and describe again: How can the world be, what state of affairs, would make one side right, and the other side wrong?)
Or if that seems too easy, take free will: What concrete state of affairs, whether in deterministic physics, or in physics with a dice-rolling random component, could ever correspond to having free will?
And if that seems too easy, then ask "Why does anything exist at all?", and then tell me what a satisfactory answer to that question would even look like.
Celebrate Trivial Impetuses
There is a flipside to the trivial inconvenience: the trivial impetus. This is the objectively inconsequential factor that gets you off your rear and doing something you probably would have left undone. It doesn't have to be a major, crippling akrasia issue. I'm not talking so much about finishing your dissertation or remodeling your house, although a trivial impetus could probably get you to make some progress on either. I'm talking about little things that make your life a little better, like trying a new food or permitting a friend to drag you along to a gathering of people and pizza.
An illustrative anecdote: the first time I tried guacamole, I was out with my family at a restaurant and my parents decided to order some. The waiter came out with a little cart with decorative little bowls full of ingredients and a couple of avocados, and proceeded to make guacamole right there with all the finesse of one of those chefs at a hibachi restaurant. He then presented us with the dish of guacamole and a basket of chips.
If my prior reasons for avoiding guacamole had been related to concerns about its freshness or possible arsenic content, this would have been a non-trivial reason to try the new food, but they weren't - I was just twelve, and it was green goop. But on that day, it was green goop that someone had made right in front of me like performance art! I simply had to have some! It was delicious. I have enjoyed guacamole ever since. I would almost certainly have taken years longer to try it, if ever I did, had it not been for that restaurant's habit of making each batch of guacamole fresh in front of the customer.
Not all trivial impetuses have to be so random and fortuitous. Just as you can arrange trivial inconveniences to stand between you and things you should not be doing, you can often arrange trivial impetuses to push you towards things you should be doing. For instance, I often get my friends to instruct me to do things when I'm having trouble getting moving: sometimes all it takes to get me to stop dithering and start making the pasta salad I agreed to bring to a party is someone agreeing when I say, "I should make pasta salad now". Or "I should go to bed now", or "I should probably pay that bill now".
Does anyone have any other ideas for trivial impetuses that could be helpful in fighting small-scale akrasia (or large-scale)?
Every Cause Wants To Be A Cult
Followup to: Correspondence Bias, Affective Death Spirals, The Robbers Cave Experiment
Cade Metz at The Register recently alleged that a secret mailing list of Wikipedia's top administrators has become obsessed with banning all critics and possible critics of Wikipedia. Including banning a productive user when one administrator—solely because of the productivity—became convinced that the user was a spy sent by Wikipedia Review. And that the top people at Wikipedia closed ranks to defend their own. (I have not investigated these allegations myself, as yet. Hat tip to Eugen Leitl.)
Is there some deep moral flaw in seeking to systematize the world's knowledge, which would lead pursuers of that Cause into madness? Perhaps only people with innately totalitarian tendencies would try to become the world's authority on everything—
Correspondence bias alert! (Correspondence bias: making inferences about someone's unique disposition from behavior that can be entirely explained by the situation in which it occurs. When we see someone else kick a vending machine, we think they are "an angry person", but when we kick the vending machine, it's because the bus was late, the train was early and the machine ate our money.) If the allegations about Wikipedia are true, they're explained by ordinary human nature, not by extraordinary human nature.
The ingroup-outgroup dichotomy is part of ordinary human nature. So are happy death spirals and spirals of hate. A Noble Cause doesn't need a deep hidden flaw for its adherents to form a cultish in-group. It is sufficient that the adherents be human. Everything else follows naturally, decay by default, like food spoiling in a refrigerator after the electricity goes off.
Policy Debates Should Not Appear One-Sided
Robin Hanson recently proposed stores where banned products could be sold. There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals. But even so (I replied), some poor, honest, not overwhelmingly educated mother of 5 children is going to go into these stores and buy a "Dr. Snakeoil's Sulfuric Acid Drink" for her arthritis and die, leaving her orphans to weep on national television.
I was just making a simple factual observation. Why did some people think it was an argument in favor of regulation?
Taboo Your Words
Followup to: Empty Labels
In the game Taboo (by Hasbro), the objective is for a player to have their partner guess a word written on a card, without using that word or five additional words listed on the card. For example, you might have to get your partner to say "baseball" without using the words "sport", "bat", "hit", "pitch", "base" or of course "baseball".
As soon as I see a problem like that, I at once think, "An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions." It might not be the most efficient strategy to convey the word 'baseball' under the stated rules - that might be, "It's what the Yankees play" - but the general skill of blanking a word out of my mind was one I'd practiced for years, albeit with a different purpose.
Are Your Enemies Innately Evil?
Followup to: Correspondence Bias
As previously discussed, we see far too direct a correspondence between others' actions and their inherent dispositions. We see unusual dispositions that exactly match the unusual behavior, rather than asking after real situations or imagined situations that could explain the behavior. We hypothesize mutants.
When someone actually offends us—commits an action of which we (rightly or wrongly) disapprove—then, I observe, the correspondence bias redoubles. There seems to be a very strong tendency to blame evil deeds on the Enemy's mutant, evil disposition. Not as a moral point, but as a strict question of prior probability, we should ask what the Enemy might believe about their situation which would reduce the seeming bizarrity of their behavior. This would allow us to hypothesize a less exceptional disposition, and thereby shoulder a lesser burden of improbability.
On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?
Deciding on our rationality focus
I have a problem: I'm not sure what this community is about.
To illustrate, recently I've been experimenting with a number of tricks to overcome my akrasia. This morning, a succession of thoughts struck me:
- The readers of Less Wrong have been interested in the subject of akrasia; maybe I should make a top-level post about my experiences once I see what works and what doesn't.
- But wait, that would be straying into the territory of traditional self-help, and I'm sure there are already plenty of blogs and communities for that. It isn't about rationality anymore.
- But then, we have already discussed akrasia several times; doesn't that make this on-topic after all?
- (Even if this was topical, wouldn't a simple recount of "what worked for me" be too Kaj-optimized to work for very many others?)
Part of the problem seems to stem from the fact that we have a two-fold definition of rationality:
- Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.
- Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".
If this community was only about epistemic rationality, there would be no problem. Akrasia isn't related to epistemic rationality, and neither are most self-help tricks. Case closed.
However, by including instrumental rationality, we have expanded the sphere of potential topics to cover practically anything. Productivity tips, seduction techniques, the best ways for grooming your physical appearance, the most effective ways to relax (and by extension, listing the best movies / books / video games of all time), how you can most effectively combine different rebate coupons and where you can get them from... all of those can be useful in achieving your values.