Long-time readers will recall that I've long been uncomfortable with the idea that you can adopt a Cause as a hedonic accessory:
"Unhappy people are told that they need a 'purpose in life', so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes."
But conversely it's also a fact that having a Purpose In Life consistently shows up as something that increases happiness, as measured by reported subjective well-being.
One presumes that this works equally well hedonically no matter how misguided that Purpose In Life may be—no matter if it is actually doing harm—no matter if the means are as cheap as prayer. Presumably, all that matters for your happiness is that you believe in it. So you had better not question overmuch whether you're really being effective; that would disturb the warm glow of satisfaction you paid for.
And here we verge on Zen, because you can't deliberately pursue "a purpose that takes you outside yourself", in order to take yourself outside yourself. That's still all about you.
Which is the whole Western concept of "spirituality" that I despise: You need a higher purpose so that you can be emotionally healthy. The external world is just a stream of victims for you to rescue.
Previously in series: Sympathetic Minds
Today I shall criticize yet another Utopia. This Utopia isn't famous in the literature. But it's considerably superior to many better-known Utopias—more fun than the Christian Heaven, or Greg Egan's upload societies, for example. And so the main flaw is well worth pointing out.
This Utopia consists of a one-line remark on an IRC channel:
<reedspacer> living in your volcano lair with catgirls is probably a vast increase in standard of living for most of humanity
I've come to think of this as Reedspacer's Lower Bound.
Sure, it sounds silly. But if your grand vision of the future isn't at least as much fun as a volcano lair with catpersons of the appropriate gender, you should just go with that instead. This rules out a surprising number of proposals.
But today I am here to criticize Reedspacer's Lower Bound—the problem being the catgirls.
I've joked about the subject, now and then—"Donate now, and get a free catgirl or catboy after the Singularity!"—but I think it would actually be a terrible idea. In fact, today's post could have been entitled "Why Fun Theorists Don't Believe In Catgirls."
Followup to: Humans in Funny Suits
"Mirror neurons" are neurons that are active both when performing an action and observing the same action—for example, a neuron that fires when you hold up a finger or see someone else holding up a finger. Such neurons have been directly recorded in primates, and consistent neuroimaging evidence has been found for humans.
You may recall from my previous writing on "empathic inference" the idea that brains are so complex that the only way to simulate them is by forcing a similar brain to behave similarly. A brain is so complex that if a human tried to understand brains the way that we understand e.g. gravity or a car—observing the whole, observing the parts, building up a theory from scratch—then we would be unable to invent good hypotheses in our mere mortal lifetimes. The only possible way you can hit on an "Aha!" that describes a system as incredibly complex as an Other Mind, is if you happen to run across something amazingly similar to the Other Mind—namely your own brain—which you can actually force to behave similarly and use as a hypothesis, yielding predictions.
So that is what I would call "empathy".
And then "sympathy" is something else on top of this—to smile when you see someone else smile, to hurt when you see someone else hurt. It goes beyond the realm of prediction into the realm of reinforcement.
If I were to make a short list of the most important human qualities—
—and yes, this is a fool's errand, because human nature is immensely complicated, and we don't even notice all the tiny tweaks that fine-tune our moral categories, and who knows how our attractors would change shape if we eliminated a single human emotion—
—but even so, if I had to point to just a few things and say, "If you lose just one of these things, you lose most of the expected value of the Future; but conversely if an alien species independently evolved just these few things, we might even want to be friends"—
—then the top three items on the list would be sympathy, boredom and consciousness.
Boredom is a subtle-splendored thing. You wouldn't want to get bored with breathing, for example—even though it's the same motions over and over and over and over again for minutes and hours and years and decades.
Now I know some of you out there are thinking, "Actually, I'm quite bored with breathing and I wish I didn't have to," but then you wouldn't want to get bored with switching transistors.
According to the human value of boredom, some things are allowed to be highly repetitive without being boring—like obeying the same laws of physics every day.
Conversely, other repetitions are supposed to be boring, like playing the same level of Super Mario Brothers over and over and over again until the end of time. And let us note that if the pixels in the game level have a slightly different color each time, that is not sufficient to prevent it from being "the same damn thing, over and over and over again".
Once you take a closer look, it turns out that boredom is quite interesting.
Previously in series: Justified Expectation of Pleasant Surprises
"Vagueness" usually has a bad name in rationality—connoting skipped steps in reasoning and attempts to avoid falsification. But a rational view of the Future should be vague, because the information we have about the Future is weak. Yesterday I argued that justified vague hopes might also be better hedonically than specific foreknowledge—the power of pleasant surprises.
But there's also a more severe warning that I must deliver: It's not a good idea to dwell much on imagined pleasant futures, since you can't actually dwell in them. It can suck the emotional energy out of your actual, current, ongoing life.
Epistemically, we know the Past much more specifically than the Future. But on emotional grounds as well, it's probably wiser to compare yourself to Earth's past, so you can see how far we've come and how much better we're doing, rather than comparing your life to an imagined future and thinking about how awful you've got it Now.
Having set out to explain George Orwell's observation that no one can seem to write about a Utopia where anyone would want to live—having laid out the various Laws of Fun that I believe are being violated in these dreary Heavens—I am now explaining why you shouldn't apply this knowledge to invent an extremely seductive Utopia and write stories set there. That may suck out your soul like an emotional vacuum cleaner.
I recently tried playing a computer game that made a major fun-theoretic error. (At least I strongly suspect it's an error, though they are game designers and I am not.)
The game showed me—right from the start of play—what abilities I could purchase as I increased in level. Worse, there were many different choices; still worse, you had to pay a cost in fungible points to acquire them, making you feel like you were losing a resource... But today, I'd just like to focus on the problem of telling me, right at the start of the game, about all the nice things that might happen to me later.
I can't think of a good experimental result that backs this up, but I'd expect that a pleasant surprise would have a greater hedonic impact than being told about the same gift in advance. Sure, the moment you were first told about the gift would be good news, a moment of pleasure in the moment of being told. But you wouldn't have the gift in hand at that moment, which limits the pleasure. And then you have to wait. And then when you finally get the gift—it's pleasant to go from not having it to having it, if you didn't wait too long; but a surprise would have a larger momentary impact, I would think.
This particular game had a status screen that showed all my future class abilities at the start of the game—inactive and dark but with full information still displayed. From a hedonic standpoint this seems like miserable fun theory. All the "good news" is lumped into a gigantic package; the items of news would have much greater impact if encountered separately. And then I have to wait a long time to actually acquire the abilities, so I get an extended period of comparing my current weak game-self to all the wonderful abilities I could have but don't.
Imagine living in two possible worlds. Both worlds are otherwise rich in challenge, novelty, and other aspects of Fun. In both worlds, you get smarter with age and acquire more abilities over time, so that your life is always getting better.
But in one world, the abilities that come with seniority are openly discussed, hence widely known; you know what you have to look forward to.
In the other world, anyone older than you will refuse to talk about certain aspects of growing up; you'll just have to wait and find out.
Followup to: Eutopia is Scary
"Two roads diverged in the woods. I took the one less traveled, and had to eat bugs until Park rangers rescued me."
Utopia and Dystopia have something in common: they both confirm the moral sensibilities you started with. Whether the world is a libertarian utopia of the non-initiation of violence and everyone free to start their own business, or a hellish dystopia of government regulation and intrusion—you might like to find yourself in the first, and hate to find yourself in the second; but either way you nod and say, "Guess I was right all along."
So as an exercise in creativity, try writing them down side by side: Utopia, Dystopia, and Weirdtopia. The zig, the zag and the zog.
I'll start off with a worked example for public understanding of science:
- Utopia: Most people have the equivalent of an undergrad degree in something; everyone reads the popular science books (and they're good books); everyone over the age of nine understands evolutionary theory and Newtonian physics; scientists who make major contributions are publicly adulated like rock stars.
- Dystopia: Science is considered boring and possibly treasonous; public discourse elevates religion or crackpot theories; stem cell research is banned.
- Weirdtopia: Science is kept secret to avoid spoiling the surprises; no public discussion but intense private pursuit; cooperative ventures surrounded by fearsome initiation rituals because that's what it takes for people to feel like they've actually learned a Secret of the Universe and be satisfied; someone you meet may only know extremely basic science, but they'll have personally done revolutionary-level work in it, just like you. Too bad you can't compare notes.
Followup to: Why is the Future So Absurd?
"The big thing to remember about far-future cyberpunk is that it will be truly ultra-tech. The mind and body changes available to a 23rd-century Solid Citizen would probably amaze, disgust and frighten that 2050 netrunner!"
Pick up someone from the 18th century—a smart someone. Ben Franklin, say. Drop them into the early 21st century.
We, in our time, think our life has improved in the last two or three hundred years. Ben Franklin is probably smart and forward-looking enough to agree that life has improved. But if you don't think Ben Franklin would be amazed, disgusted, and frightened, then I think you far overestimate the "normality" of your own time. You can think of reasons why Ben should find our world compatible, but Ben himself might not reason the same way.
Movies made in, say, the 40s or 50s seem much more alien—to me—than modern movies allegedly set hundreds of years in the future, or in different universes. Watch a movie from 1950 and you may see a man slapping a woman. Doesn't happen a lot in Lord of the Rings, does it? Drop back to the 16th century and one popular entertainment was setting a cat on fire. Ever see that in any moving picture, no matter how "lowbrow"?
("But," you say, "that's showing how discomforting the Past's culture was, not how scary the Future is." Of which I wrote, "When we look over history, we see changes away from absurd conditions such as everyone being a peasant farmer and women not having the vote, toward normal conditions like a majority middle class and equal rights...")
Something about the Future will shock us 21st-century folk, if we were dropped in without slow adaptation. This is not because the Future is cold and gloomy—I am speaking of a positive, successful Future; the negative outcomes are probably just blank. Nor am I speaking of the idea that every Utopia has some dark hidden flaw. I am saying that the Future would discomfort us because it is better.
When is it adaptive for an organism to be satisfied with what it has? When does an organism have enough children and enough food? The answer to the second question, at least, is obviously "never" from an evolutionary standpoint. The answer to the first question might be "now" if the reproductive risks of every available option exceed its reproductive benefits. In general, though, it is a rare organism in a rare environment whose reproductively optimal strategy is to rest with a smile on its face, feeling happy.
To a first approximation, we might say something like "The evolutionary purpose of emotion is to direct the cognitive processing of the organism toward achievable, reproductively relevant goals". Achievable goals are usually located in the Future, since you can't affect the Past. Memory is a useful trick, but learning the lesson of a success or failure isn't the same as pursuing the original goal—and usually the emotions associated with the memory are less intense than those of the original event.
Given the way organisms and brains are built right now, "true happiness" might be a chimera, a carrot dangled in front of us to make us take the next step, and then yanked out of our reach as soon as we achieve our goals.
This hypothesis is known as the hedonic treadmill.
The famous pilot studies in this domain found, for example, that lottery winners' stated subjective well-being was not significantly greater than that of an average person after a few years, or even months. Conversely, six months after their accidents, victims with severed spinal cords were not as happy as before—around 0.75 standard deviations below control groups—but they had still adjusted much more than they expected to.
This being the transhumanist form of Fun Theory, you might perhaps say: "Let's get rid of this effect. Just delete the treadmill, at least for positive events."
Every Utopia ever constructed—in philosophy, fiction, or religion—has been, to one degree or another, a place where you wouldn't actually want to live. I am not alone in this important observation: George Orwell said much the same thing in "Why Socialists Don't Believe In Fun", and I expect that many others said it earlier.
If you read books on How To Write—and there are a lot of books out there on How To Write, because amazingly a lot of book-writers think they know something about writing—these books will tell you that stories must contain "conflict".
That is, the more lukewarm sort of instructional book will tell you that stories contain "conflict". But some authors speak more plainly.
"Stories are about people's pain." Orson Scott Card.
"Every scene must end in disaster." Jack Bickham.
In the age of my youthful folly, I took for granted that authors were excused from the search for true Eutopia, because if you constructed a Utopia that wasn't flawed... what stories could you write, set there? "Once upon a time they lived happily ever after." What use would it be for a science-fiction author to try to depict a positive Singularity, when a positive Singularity would be...
...the end of all stories?
It seemed like a reasonable framework with which to examine the literary problem of Utopia, but something about that final conclusion produced a quiet, nagging doubt.