"But is there anyone who actually wants to live in a Wellsian Utopia?  On the contrary, not to live in a world like that, not to wake up in a hygenic garden suburb infested by naked schoolmarms, has actually become a conscious political motive.  A book like Brave New World is an expression of the actual fear that modern man feels of the rationalised hedonistic society which it is within his power to create."
        —George Orwell, Why Socialists Don't Believe in Fun

There are three reasons I'm talking about Fun Theory, some more important than others:

  1. If every picture ever drawn of the Future looks like a terrible place to actually live, it might tend to drain off the motivation to create the future.  It takes hope to sign up for cryonics.
  2. People who leave their religions, but don't familiarize themselves with the deep, foundational, fully general arguments against theism, are at risk of backsliding.  Fun Theory lets you look at our present world, and see that it is not optimized even for considerations like personal responsibility or self-reliance.  It is the fully general reply to theodicy.
  3. Going into the details of Fun Theory helps you see that eudaimonia is actually complicated—that there are a lot of properties necessary for a mind to lead a worthwhile existence.  Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.

To amplify on these points in order:

(1)  You've got folks like Leon Kass and the other members of Bush's "President's Council on Bioethics" running around talking about what a terrible, terrible thing it would be if people lived longer than threescore and ten.  While some philosophers have pointed out the flaws in their arguments, it's one thing to point out a flaw and another to provide a counterexample.  "Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon," said Susan Ertz, and that argument will sound plausible for exactly as long as you can't imagine what to do on a rainy Sunday afternoon, and can't imagine anyone else imagining it either.

It's not exactly the fault of Hans Moravec that his world, in which humans are kept as pets by superintelligences, doesn't sound quite Utopian.  Utopias are just really hard to construct, for reasons I'll talk about in more detail later—but this observation has already been made by many, including George Orwell.

Building the Future is part of the ethos of secular humanism, our common project.  If you have nothing to look forward to—if there's no image of the Future that can inspire real enthusiasm—then you won't be able to scrape up enthusiasm for that common project.  And if the project is, in fact, a worthwhile one, the expected utility of the future will suffer accordingly from that nonparticipation.  So that's one side of the coin, just as the other side is living so exclusively in a fantasy of the Future that you can't bring yourself to go on in the Present.

I recommend thinking vaguely of the Future's hopes, thinking specifically of the Past's horrors, and spending most of your time in the Present.  This strategy has certain epistemic virtues beyond its use in cheering yourself up.

But it helps to have legitimate reason to vaguely hope—to minimize the leaps of abstract optimism involved in thinking that, yes, you can live and obtain happiness in the Future.

(2)  Rationality is our goal, and atheism is just a side effect—the judgment that happens to be produced.  But atheism is an important side effect.  John C. Wright, who wrote the heavily transhumanist The Golden Age, had some kind of temporal lobe epileptic fit and became a Christian.  There's a once-helpful soul, now lost to us.

But it is possible to do better, even if your brain malfunctions on you.  I know a transhumanist who has strong religious visions, which she once attributed to future minds reaching back in time and talking to her... but then she reasoned it out, asking why future superminds would grant only her the solace of conversation, and why they could offer vaguely reassuring arguments but not tell her winning lottery numbers or the 900th digit of pi.  So now she still has strong religious experiences, but she is not religious.  That's the difference between weak rationality and strong rationality, and it has to do with the depth and generality of the epistemic rules that you know and apply.

Fun Theory is part of the fully general reply to religion; in particular, it is the fully general reply to theodicy.  If you can't say how God could have better created the world without sliding into an antiseptic Wellsian Utopia, you can't carry Epicurus's argument.  If, on the other hand, you have some idea of how you could build a world that was not only more pleasant but also a better medium for self-reliance, then you can see that permanently losing both your legs in a car accident when someone else crashes into you, doesn't seem very eudaimonic.

If we can imagine what the world might look like if it had been designed by anything remotely like a benevolently inclined superagent, we can look at the world around us, and see that this isn't it.  This doesn't require that we correctly forecast the full optimization of a superagent—just that we can envision strict improvements on the present world, even if they prove not to be maximal.

(3) There's a severe problem in which people, due to anthropomorphic optimism, the lack of specific reflective knowledge about their invisible background framework, and many other biases I have discussed, think of a "nonhuman future" and just subtract off a few salient aspects of humanity, like enjoying the taste of peanut butter, while still envisioning a future filled with minds that have aesthetic sensibilities, experience happiness on fulfilling a task, get bored with doing the same thing repeatedly, etcetera.  These things seem universal, rather than specifically human—to a human, that is.  They don't involve having ten fingers or two eyes, so they must be universal, right?

And if you're still in this frame of mind—where "real values" are the ones that persuade every possible mind, and the rest is just some extra specifically human stuff—then Friendly AI will seem unnecessary to you, because, in its absence, you expect the universe to be valuable but not human.

It turns out, though, that once you start talking about what specifically is and isn't valuable, even if you try to keep yourself sounding as "non-human" as possible—then you still end up with a big complicated computation that is only instantiated physically in human brains and nowhere else in the universe.  Complex challenges?  Novelty?  Individualism?  Self-awareness?  Experienced happiness?  A paperclip maximizer cares not about these things.

It is a long project to crack people's brains loose of thinking that things will turn out regardless—that they can subtract off a few specifically human-seeming things, and then end up with plenty of other things they care about that are universal and will appeal to arbitrarily constructed AIs.  And of this I have said a very great deal already.  But it does not seem to be enough.  So Fun Theory is one more step—taking the curtains off some of the invisible background of our values, and revealing some of the complex criteria that go into a life worth living.

 

Part of The Fun Theory Sequence

Next post: "Higher Purpose"

Previous post: "Seduced by Imagination"


Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things.
But advanced evolved organisms probably will.

The paper-clipper is a straw man that is only relevant if some well-meaning person tries to replace evolution with their own optimization or control system. (It may also be relevant in the case of a singleton; but it would be non-trivial to demonstrate that.)

All of Tim Tyler's points have been addressed in previous posts. Likewise the idea that evolution would have more shaping influence than a simple binary filter on utility functions. Don't particularly feel like going over these points again; other commenters are welcome to do so.

A random utility function will do fine, iff the agent has perfect knowledge.

Imagine, if you will, a stabber: something that wants to turn the world into things that have been stabbed.  If it knows that stabbing itself will kill itself, it will know to stab itself last.  If it doesn't know that stabbing itself will leave it unable to stab anything further, then it may stab itself too early and fail to achieve its stabbing goal.
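
A minimal sketch of that point, not from the original thread, assuming only a hypothetical toy environment where the agent's score is simply the number of things it has stabbed:

    # Hypothetical toy example: a "stabber" agent whose utility counts stabbed
    # objects.  With a correct world model it defers the self-destructive action
    # to last; with an incomplete model it can end its own run early and score
    # worse by its own utility function.
    def run_stabber(knows_self_stab_is_fatal: bool) -> int:
        targets = ["rock", "tree", "fence", "self"]
        if knows_self_stab_is_fatal:
            # Correct model: save "self" for the very end.
            order = [t for t in targets if t != "self"] + ["self"]
        else:
            # Incomplete model: a naive ordering may reach "self" early.
            order = sorted(targets)
        stabbed = 0
        for target in order:
            stabbed += 1
            if target == "self":
                break  # stabbing itself ends the agent's ability to act
        return stabbed

    print(run_stabber(knows_self_stab_is_fatal=True))   # 4: everything stabbed
    print(run_stabber(knows_self_stab_is_fatal=False))  # 3: agent destroyed itself too early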

Well, that is so vague as to hardly be worth the trouble of responding to - but I will say that I do hope you were not thinking of referring me here.

However, I should perhaps add that I overspoke. I did not literally mean "any sufficiently-powerful optimisation process". Only that such things are natural tendencies - that tend to be produced unless you actively wire things into the utility function to prevent their manifestation.

All of Tim Tyler's points have been addressed in previous posts. Likewise the idea that evolution would have more shaping influence than a simple binary filter on utility functions. Don't particularly feel like going over these points again; other commenters are welcome to do so.
Or perhaps someone else will at least explain what "having more shaping influence than a simple binary filter on utility functions" means. It sounds like it's supposed to mean that all evolution can do is eliminate some utility functions. If that's what it means, I don't see how it's relevant.

My guess is that it's a representation of my position on sexual selection and cultural evolution. I may still be banned from discussing this subject - and anyway, it seems off-topic on this thread, so I won't go into details.

If this hypothesis about the comment is correct, the main link that I can see would be: things that Eliezer and Tim disagree about.

The society of Brave New World actually seemed like quite an improvement to me.

"John C. Wright, who wrote the heavily transhumanist The Golden Age, had some kind of temporal lobe epileptic fit and became a Christian. There's a once-helpful soul, now lost to us."

This seems needlessly harsh.  As you've pointed out in the past, the world's biggest idiot/liar saying the sun is shining does not necessarily mean it's dark out.  The fictional evidence fallacy notwithstanding, if Mr. Wright's novels have useful things to say about transhumanism or the future in general, they should be appreciated for that.  The fact that the author is born-again shouldn't mean we throw his work on the bonfire.

TGGP,

The society of Brave New World was exceedingly stable and not improving.  Our current society has some chance of becoming much better.

My own complaints regarding Brave New World consist mainly of noting that Huxley's dystopia specialized in making people fit the needs of society.  And if that meant whittling down a square peg so it would fit into a round hole, so be it.

Embryos were intentionally damaged (primarily through exposure to alcohol) so that they would be unlikely to have capabilities beyond what society needed them to have.

This is completely incompatible with my beliefs about the necessity of self-regulating feedback loops, and developing order from the bottom upwards.

Mr. Tyler:

I admire your persistence; however, you should be reminded that preaching to the deaf is not a particularly worthwhile activity.

I know a transhumanist who has strong religious visions, which she once attributed to future minds reaching back in time and talking to her... but then she reasoned it out, asking why future superminds would grant only her the solace of conversation, and why they could offer vaguely reassuring arguments but not tell her winning lottery numbers or the 900th digit of pi. So now she still has strong religious experiences, but she is not religious. That's the difference between weak rationality and strong rationality, and it has to do with the depth and generality of the epistemic rules that you know and apply.

Does this person genuinely have schizophrenia?  I've occasionally wondered what would happen if a schizophrenic were taught rationality, or a rationalist developed schizophrenia.  I didn't think such a thing had happened already, though.

I recall a neurologist who suffered a stroke, was able to reason out that she was having a stroke, and managed to use the phone to call for help while severely impaired.  It doubled as a religious experience for her.

I also recall a story about a woman trained in medicine who developed schizophrenia and turned her intellect to coping with her delusions, and rationalizing them, and poking holes in her rationalizations. Unfortunately I can't find the story, but I remember that she was convinced that rats were running around in her brain chewing on her nerves, but that she could electrocute them by thinking really hard. She realized that real rats couldn't possibly be running around in her brain, but had some rationalization for that.


That sounds fascinating; I wish I could read it.

"Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon,"

Of late, during my discussions with others about rational politics and eudaimonia, a strangely large proportion of people (particularly the religious) have been asking me, with no irony, "What would you even DO with immortality?"  My favored response: "Anything.  And everything.  In that order."  LessWrong and HP:MoR have played no small part in that answer, and in much of the further discussion that generally ensues.

So... thanks, everyone!