Interlude with the Confessor (4/8)
(Part 4 of 8 in "Three Worlds Collide")
The two of them were alone now, in the Conference Chair's Privilege, the huge private room of luxury more suited to a planet than to space. The Privilege was tiled wall-to-wall and floor-to-ceiling with a most excellent holo of the space surrounding them: the distant stars, the system's sun, the fleeing nova ashes, and the glowing ember of the dwarf star that had siphoned off hydrogen from the main sun until its surface had briefly ignited in a nova flash. It was like falling through the void.
Akon sat on the edge of the four-poster bed in the center of the room, resting his head in his hands. Weariness dulled him at the moment when he most needed his wits; it was always like that in crisis, but this was unusually bad. Under the circumstances, he didn't dare snort a hit of caffeine - it might reorder his priorities. Humanity had yet to discover the drug that was pure energy, that would improve your thinking without the slightest touch on your emotions and values.
"I don't know what to think," Akon said.
The Ship's Confessor stood nearby, stately in full robes and hood of silver. From beneath the hood came the formal response: "What seems to be confusing you, my friend?"
"Did we go wrong?" Akon said. No matter how hard he tried, he couldn't keep the despair out of his voice. "Did humanity go down the wrong path?"
31 Laws of Fun
So this is Utopia, is it? Well
I beg your pardon, I thought it was Hell.
-- Sir Max Beerbohm, verse entitled
In a Copy of More's (or Shaw's or Wells's or Plato's or Anybody's) Utopia
This is a shorter summary of the Fun Theory Sequence with all the background theory left out - just the compressed advice to the would-be author or futurist who wishes to imagine a world where people might actually want to live:
- Think of a typical day in the life of someone who's been adapting to Utopia for a while. Don't anchor on the first moment of "hearing the good news". Heaven's "You'll never have to work again, and the streets are paved with gold!" sounds like good news to a tired and poverty-stricken peasant, but two months later it might not be so much fun. (Prolegomena to a Theory of Fun.)
- Beware of packing your Utopia with things you think people should do that aren't actually fun. Again, consider Christian Heaven: singing hymns doesn't sound like loads of endless fun, but you're supposed to enjoy praying, so no one can point this out. (Prolegomena to a Theory of Fun.)
- Making a video game easier doesn't always improve it. The same holds true of a life. Think in terms of clearing out low-quality drudgery to make way for high-quality challenge, rather than eliminating work. (High Challenge.)
- Life should contain novelty - experiences you haven't encountered before, preferably teaching you something you didn't already know. If there isn't a sufficient supply of novelty (relative to the speed at which you generalize), you'll get bored. (Complex Novelty.)
The Fun Theory Sequence
(A shorter gloss of Fun Theory is "31 Laws of Fun", which summarizes the advice of Fun Theory to would-be Eutopian authors and futurists.)
Fun Theory is the field of knowledge that deals in questions such as "How much fun is there in the universe?", "Will we ever run out of fun?", "Are we having fun yet?" and "Could we be having more fun?"
Many critics (including George Orwell) have commented on the inability of authors to imagine Utopias where anyone would actually want to live. If no one can imagine a Future where anyone would want to live, that may drain off motivation to work on the project. The prospect of endless boredom is routinely fielded by conservatives as a knockdown argument against research on lifespan extension, against cryonics, against all transhumanism, and occasionally against the entire Enlightenment ideal of a better future.
Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil). Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance. Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization. Fun Theory also highlights the flaws of any particular religion's perfect afterlife - you wouldn't want to go to their Heaven.
Investing for the Long Slump
I have no crystal ball with which to predict the Future, a confession that comes as a surprise to some journalists who interview me. Still less do I think I have the ability to out-predict markets. On every occasion when I've considered betting against a prediction market - most recently, betting against Barack Obama as President - I've been glad that I didn't. I admit that I was concerned in advance about the recent complexity crash, but then I've been concerned about it since 1994, which isn't very good market timing.
I say all this so that no one panics when I ask:
Suppose that the whole global economy goes the way of Japan (which, by the Nikkei 225, has now lost two decades).
Suppose the global economy is still in the Long Slump in 2039.
Most market participants seem to think this scenario is extremely implausible. Is there a simple way to bet on it at a very low price?
If most traders act as if this scenario has a probability of 1%, is there a simple bet, executable using an ordinary brokerage account, that pays off 100 to 1?
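To make the odds arithmetic concrete, here is a minimal sketch of how such a long-shot bet would price out, assuming an idealized binary contract with no fees, spreads, or counterparty risk. All numbers and names are hypothetical illustrations, not real market prices or instruments:

```python
# Minimal sketch of the odds arithmetic above: an idealized binary
# contract with no fees, spreads, or counterparty risk. All numbers
# are hypothetical illustrations, not real market prices.

def long_shot(market_prob: float, my_prob: float, stake: float = 1.0):
    """Price out a bet that pays off iff the Long Slump scenario occurs.

    If traders price the scenario at probability market_prob, a fair
    binary contract costs market_prob per unit of payout -- so a 1%
    implied probability corresponds to roughly 100-to-1 odds.
    """
    payout = stake / market_prob                 # 1 / 0.01 = 100x the stake
    expected_profit = my_prob * payout - stake   # under *my* probability
    return payout, expected_profit

# Market says 1%, but suppose I think the scenario deserves 10%:
payout, ev = long_shot(market_prob=0.01, my_prob=0.10)
print(f"payout: {payout:.0f}x stake, expected profit: ${ev:.2f} per $1 staked")
# payout: 100x stake, expected profit: $9.00 per $1 staked
```

The practical difficulty, of course, is whether any instrument available through an ordinary brokerage account actually delivers that payoff over a thirty-year horizon.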
Why do I ask? Well... in general, it seems to me that other people are not pessimistic enough; they prefer not to stare overlong or overhard into the dark; and they attach too little probability to things operating in a mode outside their past experience.
But in this particular case, the question is motivated by my thinking, "Conditioning on the proposition that the Earth as we know it is still here in 2040, what might have happened during the preceding thirty years?"
Failed Utopia #4-2
Followup to: Interpersonal Entanglement
Shock after shock after shock—
First, the awakening adrenaline jolt, the thought that he was falling. His body tried to sit up in automatic adjustment, and his hands hit the floor to steady himself. The push launched him into the air, and he fell back to the floor too slowly.
Second shock. His body had changed. Fat had melted away in places, old scars had faded; the tip of his left ring finger, long ago lost to a knife accident, had now suddenly returned.
And the third shock—
"I had nothing to do with it!" she cried desperately, the woman huddled in on herself in one corner of the windowless stone cell. Tears streaked her delicate face, fell like slow raindrops into the décolletage of her dress. "Nothing! Oh, you must believe me!"
With perceptual instantaneity—the speed of surprise—his mind had already labeled her as the most beautiful woman he'd ever met, including his wife.
Imaginary Positions
Every now and then, one reads an article about the Singularity in which some reporter confidently asserts, "The Singularitarians, followers of Ray Kurzweil, believe that they will be uploaded into techno-heaven while the unbelievers languish behind or are extinguished by the machines."
I don't think I've ever met a single Singularity fan, Kurzweilian or otherwise, who thinks that only believers in the Singularity will go to upload heaven and everyone else will be left to rot. Not one. (There are a very few pseudo-Randian types who believe that only the truly selfish who accumulate lots of money will make it, but they expect e.g. me to be damned with the rest.)
But if you start out thinking that the Singularity is a loony religious meme, then it seems like Singularity believers ought to believe that they alone will be saved. It seems like a detail that would fit the story.
This fittingness is so strong as to manufacture the conclusion without any particular observations. And then the conclusion isn't marked as a deduction. The reporter just thinks that they investigated the Singularity, and found some loony cultists who believe they alone will be saved.
Or so I deduce. I haven't actually observed the inside of their minds, after all.
Has any rationalist ever advocated behaving as if all people are reasonable and fair? I've repeatedly heard people say, "Well, it's not always smart to be rational, because other people aren't always reasonable." What rationalist said they were? I would deduce: This is something that non-rationalists believe it would "fit" for us to believe, given our general blind faith in Reason. And so their minds just add it to the knowledge pool, as though it were an observation. (In this case I encountered yet another example recently enough to find the reference; see here.)
Prolegomena to a Theory of Fun
Followup to: Joy in the Merely Good
Raise the topic of cryonics, uploading, or just medically extended lifespan/healthspan, and some bioconservative neo-Luddite is bound to ask, in portentous tones:
"But what will people do all day?"
They don't try to actually answer the question. That is not a bioethicist's role, in the scheme of things. They're just there to collect credit for the Deep Wisdom of asking the question. It's enough to imply that the question is unanswerable, and therefore, we should all drop dead.
That doesn't mean it's a bad question.
It's not an easy question to answer, either. The primary experimental result in hedonic psychology—the study of happiness—is that people don't know what makes them happy.
And there are many exciting results in this new field, which go a long way toward explaining the emptiness of classical Utopias. But it's worth remembering that human hedonic psychology is not enough for us to consider, if we're asking whether a million-year lifespan could be worth living.
Fun Theory, then, is the field of knowledge that would deal in questions like:
- "How much fun is there in the universe?"
- "Will we ever run out of fun?"
- "Are we having fun yet?"
- "Could we be having more fun?"
Visualizing Eutopia
Followup to: Not Taking Over the World
"Heaven is a city 15,000 miles square or 6,000 miles around. One side is 245 miles longer than the length of the Great Wall of China. Walls surrounding Heaven are 396,000 times higher than the Great Wall of China and eight times as thick. Heaven has twelve gates, three on each side, and has room for 100,000,000,000 souls. There are no slums. The entire city is built of diamond material, and the streets are paved with gold. All inhabitants are honest and there are no locks, no courts, and no policemen."
-- Reverend Doctor George Hawes, in a sermon
Yesterday I asked my esteemed co-blogger Robin what he would do with "unlimited power", in order to reveal something of his character. Robin said that he would (a) be very careful and (b) ask for advice. I asked him what advice he would give himself. Robin said it was a difficult question and he wanted to wait on considering it until it actually happened. So overall he ran away from the question like a startled squirrel.
The character thus revealed is a virtuous one: it shows common sense. A lot of people jump after the prospect of absolute power like it was a coin they found in the street.
When you think about it, though, it says a lot about human nature that this is a difficult question. I mean - most agents with utility functions shouldn't have such a hard time describing their perfect universe.
For a long time, I too ran away from the question like a startled squirrel. First I claimed that superintelligences would inevitably do what was right, relinquishing moral responsibility in toto. After that, I propounded various schemes to shape a nice superintelligence, and let it decide what should be done with the world.
Not that there's anything wrong with that. Indeed, this is still the plan. But it still meant that I, personally, was ducking the question.
Why? Because I expected to fail at answering. Because I thought that any attempt for humans to visualize a better future was going to end up recapitulating the Reverend Doctor George Hawes: apes thinking, "Boy, if I had human intelligence I sure could get a lot more bananas."
Not Taking Over the World
Followup to: What I Think, If Not Why
My esteemed co-blogger Robin Hanson accuses me of trying to take over the world.
Why, oh why must I be so misunderstood?
(Well, it's not like I don't enjoy certain misunderstandings. Ah, I remember the first time someone seriously and not in a joking way accused me of trying to take over the world. On that day I felt like a true mad scientist, though I lacked a castle and hunchbacked assistant.)
But if you're working from the premise of a hard takeoff - an Artificial Intelligence that self-improves at an extremely rapid rate - and you suppose such extraordinary depth of insight and precision of craftsmanship that you can actually specify the AI's goal system instead of automatically failing -
- then it takes some work to come up with a way not to take over the world.
Robin talks up the drama inherent in the intelligence explosion, presumably because he feels that this is a primary source of bias. But I've got to say that Robin's dramatic story does not sound like the story I tell of myself. There, the drama comes from tampering with such extreme forces that every single idea you invent is wrong. The standardized Final Apocalyptic Battle of Good Vs. Evil would be trivial by comparison; there, all you have to do is put forth a desperate effort. Facing an adult problem in a neutral universe isn't so straightforward. Your enemy is yourself, who will automatically destroy the world, or just fail to accomplish anything, unless you can defeat you. That is the drama I crafted into the story I tell myself, for I too would disdain anything so clichéd as Armageddon.
So, Robin, I'll ask you something of a probing question. Let's say that someone walks up to you and grants you unlimited power.
What do you do with it, so as to not take over the world?
You Only Live Twice
"It just so happens that your friend here is only mostly dead. There's a big difference between mostly dead and all dead."
-- The Princess Bride
My co-blogger Robin and I may disagree on how fast an AI can improve itself, but we agree on an issue that seems much simpler to us than that: At the point where the current legal and medical system gives up on a patient, they aren't really dead.
Robin has already said much of what needs saying, but a few more points:
• Ben Best's Cryonics FAQ, Alcor's FAQ, Alcor FAQ for scientists, Scientists' Open Letter on Cryonics
• I know more people who are planning to sign up for cryonics Real Soon Now than people who have actually signed up. I expect that more people have died while cryocrastinating than have actually been cryopreserved. If you've already decided this is a good idea, but you "haven't gotten around to it", sign up for cryonics NOW. I mean RIGHT NOW. Go to the website of Alcor or the Cryonics Institute and follow the instructions.