Utilons vs. Hedons
Related to: Would Your Real Preferences Please Stand Up?
I have to admit, there are a lot of people I don't care about. Comfortably over six billion, I would bet. It's not that I'm a callous person; I simply don't know that many people, and even if I did I hardly have time to process that much information. Every day hundreds of millions of incredibly wonderful and terrible things happen to people out there, and if they didn't, I wouldn't even know it.
On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I'll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it's a good thing and I plan to work hard to figure out how to create more happiness.
This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew, in a place you never heard of, dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people's happiness (or at least your perception of such), but there's no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don't know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such things from happening. I'm going to use "utilons" to refer to value utility units and "hedons" to refer to experiential utility units. I'll demonstrate shortly that this is a meaningful distinction, and that the fact that we value utilons but experience hedons explains much of why our moral reasoning appears to fail.
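The coefficient talk above can be made concrete with a toy sketch. All the people and weights here are invented purely for illustration:

```python
# Toy model of the hedons/utilons split. Happiness values are on an
# arbitrary 0-1 scale; names and numbers are made up.
happiness = {"close_friend": 0.9, "acquaintance": 0.6, "stranger": 0.4}

# Hedons: each person's happiness enters your *experience* with a
# personal coefficient -- large for intimates, near zero for strangers.
hedonic_weights = {"close_friend": 1.0, "acquaintance": 0.1, "stranger": 0.0}
hedons = sum(hedonic_weights[p] * h for p, h in happiness.items())

# Utilons: what you *value* can weight everyone identically.
utilons = sum(happiness.values())
# The two sums come apart: hedons is about 0.96, utilons about 1.9.
```

The point of the sketch is only that no re-scaling of the hedonic weights makes the two sums track each other, unless the stranger's coefficient is nonzero and equal to everyone else's.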
Why You're Stuck in a Narrative
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.
Thomas C. Schelling's "Strategy of Conflict"
It's an old book, I know, and one that many of us have already read. But if you haven't, you should.
If there's anything in the world that deserves to be called a martial art of rationality, this book is the closest approximation yet. Forget rationalist Judo: this is rationalist eye-gouging, rationalist gang warfare, rationalist nuclear deterrence. Techniques that let you win, but you don't want to look in the mirror afterward.
Imagine you and I have been separately parachuted into an unknown mountainous area. We both have maps and radios, and we know our own positions, but don't know each other's positions. The task is to rendezvous. Normally we'd coordinate by radio and pick a suitable meeting point, but this time you got lucky. So lucky in fact that I want to strangle you: upon landing you discovered that your radio is broken. It can transmit but not receive.
Two days of rock-climbing and stream-crossing later, tired and dirty, I arrive at the hill where you've been sitting all this time smugly enjoying your lack of information.
And after we split the prize and cash our checks I learn that you broke the radio on purpose.
Information cascades in scientific practice
Here's an interesting recent paper in the British Medical Journal: "How citation distortions create unfounded authority: analysis of a citation network". (I don't know if this is freely accessible, but the abstract should be.)
From the paper:
"Objective To understand belief in a specific scientific claim by studying the pattern of citations among papers stating it."
"Conclusion Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation."
It also includes a list of specific ways in which citations were found to amplify or invent evidence.
It's all in your head-land
From David Foster Wallace's Infinite Jest:
He could do the dextral pain the same way: Abiding. No one single instant of it was unendurable. Here was a second right here: he endured it. What was undealable-with was the thought of all the instants all lined up and stretching ahead, glittering. And the projected future fear... It's too much to think about. To Abide there. But none of it's as of now real... He could just hunker down in the space between each heartbeat and make each heartbeat a wall and live in there. Not let his head look over. What's unendurable is what his own head could make of it all. What his head could report to him, looking over and ahead and reporting. But he could choose not to listen... He hadn't quite gotten this before now, how it wasn't just the matter of riding out cravings for a Substance: everything unendurable was in the head, was the head not Abiding in the Present but hopping the wall and doing a recon and then returning with unendurable news you then somehow believed.
I've come to draw, or at least to emphasize, a distinction separating two realms between which I divide my time: real-land and head-land. Real-land is the physical world, occupied by myself and billions of equally real others, in which my fingers strike a series of keys and a monitor displays strings of text corresponding to these keystrokes. Head-land is the world in which I construct an image of what this sentence will look like when complete, what this paragraph will look like when complete and what this entire post will look like when complete. And it doesn't stop there: in head-land, the finished post is already being read, readers are reacting, readers are (or aren't) responding and the resulting conversations are, for better or for worse, playing themselves out. In head-land, the thoughts I've translated into words and thus defined and developed in this post are already shaping the thoughts to be explored in future posts, the composition of which is going on there even now.
Of Exclusionary Speech and Gender Politics
I suspect that the ick reaction being labeled "objectification" actually has more to do with the sense that the speaker is addressing a closed group that doesn't include you.
Suppose I wrote a story about a man named Frank, whose twin brother (Frank has learned) is in the process of being framed for murder this very night. Frank is in the middle of a complicated plot to give his brother an alibi. He's already found the cabdriver and tricked him into waiting outside a certain apartment for an hour. Now all he needs is the last ingredient of his plan - a woman to go home with him (as he poses as his brother). Frank is, with increasing desperation, propositioning ladies at the bar - any girl will do for his plan, it doesn't matter who she is or what she's about...
I'd bet I could write that story without triggering the ick reaction, because Frank is an equal-opportunity manipulator - he manipulated the cabdriver, too. The story isn't about Frank regarding women as things on the way to implementing his plan, it's about Frank regarding various people, men and women alike, as means to the end of saving his brother.
If a woman reads that story, I think, she won't get a sense of being excluded from the intended audience.
Extreme Rationality: It's Not That Great
Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities
Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.
For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.
And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.
So, what are these "benefits" of "x-rationality"?
A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:
I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.
There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.
I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines, I can't think of any.
Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact, as far as I know, the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.
There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?
This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.
Outside Analysis and Blind Spots
(I originally tried to make this a comment, but it kept on growing.)
I was looking through the Google results for "Less Wrong" when I found the blog of a rather intelligent Leon Kass acolyte, who's written a critique of our community. While it's a bit of a caricature, it's not entirely off the mark. For example:
Trying to think more like a mathematician, whose empiricism resides in the realm of pure thought, does not predispose these 'rationalists' to collect evidence from the real world. Neither does the downplaying of personal experiences. Many are computer science majors, used to being in the comfortable position of being capable of testing their hypotheses without needing to leave their office. It is, then, an easy temptation for them to come up with a nice-sounding theory which appears to explain the facts, and then consider the question solved. Reason must reign supreme, must it not?
How seriously do you take this critique? Do you wonder why I'm bothering with this straw-man criticism of Less Wrong?
Creating The Simple Math of Everything
Eliezer once proposed an idea for a book, The Simple Math of Everything. The basic idea is to compile articles on the basic mathematics of a wide variety of fields, but nothing too complicated.
Not Jacobian matrices for frequency-dependent gene selection; just Haldane's calculation of time to fixation. Not quantum physics; just the wave equation for sound in air. Not the maximum entropy solution using Lagrange multipliers; just Bayes's Rule.
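To give a sense of the intended level, Bayes's Rule really does fit in a few lines. The numbers here are made up for illustration: a condition with 1% prevalence, and a test that is 90% sensitive with a 5% false-positive rate:

```python
# Bayes's Rule with invented numbers, as a taste of "simple math".
p_disease = 0.01              # prior: 1% base rate
p_pos_given_disease = 0.90    # sensitivity
p_pos_given_healthy = 0.05    # false-positive rate

# Total probability of a positive test, then the posterior.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
# Roughly 0.15: even a positive result leaves the odds against disease,
# because the base rate is so low.
```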
Now, writing a book is a pretty daunting task. Luckily brian_jaress had the idea of creating an index of links to already available online articles. XFrequentist pointed out that something like this has been done before over at Evolving Thoughts. This initially discourage me, but it eventually helped me refine what I thought the index should be. A key characteristic of Eliezer's idea is that it should be worthwhile for someone who doesn't know the material to read the entire index. Many of the links at evolving thoughts point to rather narrow topics that might not be very interesting to a generalist. Also there is just plain a ton of stuff to read over there - at least 100 articles.
So we should come up with some basic criteria for the articles. Here is what I suggest (let me know what you think):
Are You Anosognosic?
Followup to: The Strangest Thing An AI Could Tell You
Brain-damage patients with anosognosia are incapable of considering, noticing, admitting, or realizing, even after being argued with, that their left arm, left leg, or left side of the body is paralyzed. Again I'll quote Yvain's summary:
After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
A brief search didn't turn up a base-rate frequency in the population for left-arm paralysis with anosognosia, but let's say the base rate is 1 in 10,000,000 individuals (so around 670 individuals worldwide).
Supposing this to be the prior, what is your estimated probability that your left arm is currently paralyzed?
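A hedged sketch of the update the question invites, assuming the stated prior of 1 in 10,000,000 and (hypothetically) that an anosognosic essentially always reports the arm as fine, while a healthy person almost always does too:

```python
# Bayesian update on the observation "my left arm seems fine".
# The two likelihoods below are assumptions chosen for illustration.
prior = 1e-7                      # stated base rate of paralysis + anosognosia
p_fine_given_anosognosia = 1.0    # denial is near-total, by definition
p_fine_given_healthy = 0.999999   # healthy people very rarely misreport

p_report_fine = (p_fine_given_anosognosia * prior
                 + p_fine_given_healthy * (1 - prior))
posterior = p_fine_given_anosognosia * prior / p_report_fine
# The observation barely moves the prior: an anosognosic would report
# exactly the same thing, so "my arm seems fine" is almost no evidence.
```

This is the unsettling part of the thought experiment: introspection cannot push the estimate below the base rate, because introspection is precisely the faculty the condition disables.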