The Lifespan Dilemma
One of our most controversial posts ever was "Torture vs. Dust Specks". Though I can't seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said "I'm a utilitarian consequentialist", and the professor said "No you're not" and told them about SPECKS vs. TORTURE, and then the student - to the professor's surprise - chose TORTURE. (Yay student!)
In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student - at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.
Working Mantras
While working with Marcello on AI this summer, I've noticed that I have some standard mantras that I invoke in my inner dialogue (though only on appropriate occasions, not as a literal repeated mantra). This says something about which tropes are most often useful - in my own working life, anyway!
Exterminating life is rational
Followup to: This Failing Earth; Our society lacks good self-preservation mechanisms; Is short term planning in humans due to a short life or due to bias?
I don't mean that deciding to exterminate life is rational. But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.
Ed Regis reports on p. 216 of “Great Mambo Chicken and the TransHuman Condition” (Penguin Books, London, 1992):
Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.
Was this a bad decision? Well, consider the expected value to the people involved. Without the bomb, there was a much, much greater than 3/1,000,000 chance that either (a) they would be killed in the war, or (b) they would be ruled by the Nazis or the Japanese. The loss to them if they ignited the atmosphere would be the 30 or so years of life each of them had left; the loss if they were killed by their enemies would be the same 30 or so years, and the loss from being conquered would also be large. Easy decision, really.
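A rough back-of-the-envelope version of that comparison (the 30-year figure is from the text above; the probability of losing the war is left symbolic, since the argument only requires that it be much larger than 3/1,000,000):

```latex
\underbrace{\tfrac{3}{1{,}000{,}000} \times 30\ \text{yr}}_{\text{expected loss from igniting the atmosphere}}
\;\approx\; 9 \times 10^{-5}\ \text{yr}
\;\ll\;
\underbrace{P(\text{lose war or die}) \times 30\ \text{yr}}_{\text{expected loss without the bomb}}
```

Nine hundred-thousandths of a year is under an hour of expected life, so on these numbers the gamble looks trivially cheap to the decision-makers, which is exactly the point.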
Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it. Then our expected survival time is 100 times the sum from n = 1 to infinity of n·p·(1−p)^(n−1). If I've done my math right, that's 100/p ≈ 33,333,000 years.
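That figure is just the mean of a geometric distribution; here is the derivation, with p = 3/1,000,000 as above:

```latex
E[T] \;=\; 100 \sum_{n=1}^{\infty} n\,p\,(1-p)^{n-1}
     \;=\; 100 \cdot \frac{1}{p}
     \;=\; \frac{100}{3 \times 10^{-6}}
     \;\approx\; 3.33 \times 10^{7}\ \text{years}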
Time to See If We Can Apply Anything We Have Learned
It seems to me that this blog has just reached its first real crisis.
Three people are announcing three apparently opposed beliefs with substantial real expected consequences, and yet no one has yet spoken, or, it seems to me, even implied, the key slogan: "LET'S USE SCIENCE!" Or, as hubristic Bayesian wannabes, not invoked Bayes as an idol to swear by, but rather said "LET'S USE HUMANE REFLECTIVE DECISION THEORY, THE QUANTITATIVELY UNKNOWN BUT QUALITATIVELY INTUITED POWER DEEPER THAN SCIENCE, FROM WHICH IT STEMS AND TO WHICH OUR COMMUNITY IS DEVOTED".
If RDT were applied to our current situation, people would be analyzing Yvain's, Davis's, and Eby's proposals, working out exactly what their implications are, and trying to propose, in the name of SCIENCE, hypotheses which would distinguish between them, and, in the name of BAYES, confidence estimates of their analyses and of how well the denotations of their words have cleaved reality at the joints, enabling an odds ratio of updating to be extracted from a single data point. People would be working out which features of the models used by Yvain, Davis, and Eby constitute evidence against which other features. They would be trying to evaluate non-verbally, through subjectively opaque but known-to-be-informative processes vulnerable to verbal overshadowing, what relative odds to place on those different features of the models. Finally, they would be examining the expected costs entailed by the proposed experiments and selecting, to be performed, those which promise the most information for the least cost. The cost estimate would include both the effort required to perform the experiments, probably best assessed with an outside view in cases like these, and the dangers to the minds of the participants from possible adverse outcomes, taking into account, as well as possible, the structural uncertainty of the models.
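For concreteness, the "odds ratio of updating" that paragraph gestures at is just Bayes' theorem in odds form (notation mine, not the post's): the odds between two models shift by exactly the likelihood ratio of a single data point D.

```latex
\frac{P(M_1 \mid D)}{P(M_2 \mid D)} \;=\; \frac{P(M_1)}{P(M_2)} \times \frac{P(D \mid M_1)}{P(D \mid M_2)}
```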
I sincerely hope to see some of that in the comments section soon, either under this post or the "Applied Picoeconomics" post.
The Laws of Magic
People are always telling you that "we have always done thus", and then you find that their "always" means a generation or two, or a century or two, at most a millennium or two. Cultural ways and habits are blips compared to the ways and habits of the body, of the race. There really is very little that human beings on our plane have always done, except find food and drink, sing, talk, procreate, nurture the children, and probably band together to some extent.
- Ursula K. Le Guin, "Seasons of the Ansarac", Changing Planes
Human cultures vary wildly and discursively, so it is worth noting which things all known human societies have in common. Several generations ago, anthropologists noted that cultures' beliefs about a suite of concepts crudely describable as 'magic' had certain principles in common.
Rationalists lose when others choose
At various times, we've argued over whether rationalists always win. I posed Augustine's paradox of optimal repentance to argue that, in some situations, rationalists lose. One criticism of that paradox is that its strongest forms posit a God who penalizes people for being rational. My response was, So what? Who ever said that nature, or people, don't penalize rationality?
There are instances where nature penalizes the rational. For example, revenge is irrational, but being thought of as someone who would take revenge gives advantages.[1]
How Not to be Stupid: Know What You Want, What You Really Really Want
Previously: Starting Up
So, you want to be rational, huh? You want to be Less Wrong than you were before, hrmmm? First you must pass through the posting titles of a thousand groans. Muhahahahaha!
Let's start with the idea of preference rankings. If you prefer A to B, well, given the choice between A and B, you'd choose A.
For example, if you face a choice between a random child being tortured to death vs them leading a happy and healthy life, all else being equal and the choice costing you nothing, which do you choose?
This isn't a trick question. If you're a perfectly ordinary human, you presumably prefer the latter to the former.
Therefore you choose it. That's what it means to prefer something: if you prefer A over B, you'd give up situation B to gain situation A. You want situation A more than you want situation B.
Now, if there're many possibilities, you may ask... "But, what if I prefer B to A, C to B, and A to C?"
The answer, of course, is that you're a bit confused about what you actually prefer. All that ranking would do is keep you switching among the three, looping around forever; and if each switch costs you so much as a penny, you'll bleed money without ever ending up anywhere, as the sketch below illustrates.
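To see why a cyclic ranking is incoherent rather than merely exotic, here is a minimal money-pump sketch in Python; the trader, the goods, and the one-cent fee are my illustration, not anything from the original post:

```python
# A money pump: an agent with the cyclic "preferences" A < B < C < A
# happily pays a small fee for every upgrade, and so cycles forever.

# Maps each held item to the item the agent prefers over it.
PREFERS_OVER = {"A": "B", "B": "C", "C": "A"}


def run_money_pump(start_item: str, fee: float, rounds: int) -> float:
    """Offer `rounds` upgrade trades at `fee` each; return the total paid."""
    item, paid = start_item, 0.0
    for _ in range(rounds):
        item = PREFERS_OVER[item]  # the agent trades up to the item it prefers...
        paid += fee                # ...and gladly pays the fee each time
    return paid


if __name__ == "__main__":
    total = run_money_pump("A", fee=0.01, rounds=300)
    # After 300 trades the agent holds "A" again, but is $3.00 poorer.
    print(f"Total paid: ${total:.2f}")
```

After any multiple of three trades the agent holds exactly what it started with, yet it accepted every single trade gladly; that is the sense in which a cyclic "preference" fails to be a preference at all.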
"Self-pretending" is not as useful as we think
A few weeks ago I made a draft of a post that was originally intended to be about the same issue addressed in MBlume's post regarding beneficial false beliefs. Coincidentally, my draft included the exact same hypothetical about entering a club believing you're the most attractive person in the room in order to increase your chances of attracting women. There seems to be general agreement with MBlume's "it's OK to pretend because it's not self-deception and produces similar results" conclusion. I was surprised to see so much agreement, considering that when I made my original draft I reached a completely different conclusion.
I do agree, however, that pretending may have some benefits, but those benefits are much more limited than MBlume makes them out to be. He brings up a time when pretending helped him fit better into his character in a play. Unfortunately, his anecdote is not an appropriate example of overcoming vestigial evolutionary impulses by pretending. His mind wasn't evolutionarily programmed to "be afraid" when pretending to be someone else; it was programmed to "be afraid" when hitting on attractive women. When I am alone in my room I can act like a real alpha male all day long, but put me in front of attractive women (or people in general) and I will retreat back to my stifled self.
The only way false beliefs can overcome your obsolete evolutionary impulses is to truly believe in those false beliefs, and we all know why that would be a bad idea. Furthermore, pretending can be dangerous, just as reading fiction can be dangerous. So the small benefit that pretending might give may not always be worth the cost.
But there is something we can learn from these (sometimes beneficial) false beliefs.
Obviously, there is no direct causal chain that goes from self-fulfilling beliefs to real-world success. Beliefs, per se, are not the key variables in causing success; instead, these beliefs give rise to whatever the key variables are. We should figure out what those key variables are and find a systematic way of getting them.
With the club example, we should instead figure out what behavior changes may result from believing that every girl is attracted to you. Then, figure out which of those behaviors attract women and find a way to perfect those behaviors. This is the approach the seduction community adopts for learning how to attract women—and it works.
Same goes with public speaking. If you have a fear of public speaking, you can’t expect to pretend your fear away. There are ways of reducing unnecessary emotions; the ways that work, however, don’t depend on pretending.
This Didn't Have To Happen
My girlfriend/SO's grandfather died last night, running on a treadmill when his heart gave out.
He wasn't signed up for cryonics, of course. She tried to convince him, and I tried myself a little the one time I met her grandparents.
"This didn't have to happen. Fucking religion."
That's what my girlfriend said.
I asked her if I could share that with you, and she said yes.
Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you.
Slow down a little... maybe?
I think that three posts a day, over and above Yudkowsky and/or Hanson posts, might be enough, where anything that gets voted to 0 or below doesn't count, nor do quick links.
Say you differently, readers? I'm just trying to space things out so we don't get overloaded with everything, all at once... if it turns out that people just have more to say than this, sustainably in the long term, then we can raise the posting speed.