Previously in series: Justified Expectation of Pleasant Surprises

"Vagueness" usually has a bad name in rationality—connoting skipped steps in reasoning and attempts to avoid falsification.  But a rational view of the Future should be vague, because the information we have about the Future is weak.  Yesterday I argued that justified vague hopes might also be better hedonically than specific foreknowledge—the power of pleasant surprises.

But there's also a more severe warning that I must deliver:  It's not a good idea to dwell much on imagined pleasant futures, since you can't actually dwell in them.  It can suck the emotional energy out of your actual, current, ongoing life.

Epistemically, we know the Past much more specifically than the Future.  But also on emotional grounds, it's probably wiser to compare yourself to Earth's past, so you can see how far we've come, and how much better we're doing.  Rather than comparing your life to an imagined future, and thinking about how awful you've got it Now.

Having set out to explain George Orwell's observation that no one can seem to write about a Utopia where anyone would want to live—having laid out the various Laws of Fun that I believe are being violated in these dreary Heavens—I am now explaining why you shouldn't apply this knowledge to invent an extremely seductive Utopia and write stories set there.  That may suck out your soul like an emotional vacuum cleaner.

I briefly remarked on this phenomenon earlier, and someone said, "Define 'suck out your soul'."  Well, it's mainly a tactile thing: you can practically feel the pulling sensation, if your dreams wander too far into the Future.  It's like something out of H. P. Lovecraft:  The Call of Eutopia.  A professional hazard of having to stare out into vistas that humans were meant to gaze upon, and knowing a little too much about the lighter side of existence.

But for the record, I will now lay out the components of "soul-sucking", that you may recognize the bright abyss and steer your thoughts away:

  • Your emotional energy drains away into your imagination of Paradise:
    • You find yourself thinking of it more and more often.
    • The actual challenges of your current existence start to seem less interesting, less compelling; you think of them less and less.
    • Comparing everything to your imagined perfect world heightens your annoyances and diminishes your pleasures.
  • You go into an affective death spiral around your imagined scenario; you're reluctant to admit anything bad could happen on your assumptions, and you find more and more nice things to say.
  • Your mind begins to forget the difference between fiction and real life:
    • You originally made many arbitrary or iffy choices in constructing your scenario.  You forget that the Future is actually more unpredictable than this, and that you made your choices using limited foresight and merely human optimizing ability.
    • You forget that, in real life, at least some of your amazing good ideas are guaranteed not to work as well as they do in your imagination.
    • You start wanting the exact specific Paradise you imagined, and worrying about the disappointment if you don't get that exact thing.

Hope can be a dangerous thing.  And when you've just been hit hard—at the moment when you most need hope to keep you going—that's also when the real world seems most painful, and the world of imagination becomes most seductive.

It's a balancing act, I think.  One needs enough Fun Theory to truly and legitimately justify hope in the future.  But not a detailed vision so seductive that it steals emotional energy from the real life and real challenge of creating that future.  You need "a light at the end of the secular rationalist tunnel" as Roko put it, but you don't want people to drift away from their bodies into that light.

So how much light is that, exactly?  Ah, now that's the issue.

I'll start with a simple and genuine question:  Is what I've already said, enough?

Is knowing the abstract fun theory and being able to pinpoint the exact flaws in previous flawed Utopias, enough to make you look forward to tomorrow?  Is it enough to inspire a stronger will to live?  To dispel worries about a long dark tea-time of the soul?  Does it now seem—on a gut level—that if we could really build an AI and really shape it, the resulting future would be very much worth staying alive to see?

 

Part of The Fun Theory Sequence

Next post: "The Uses of Fun (Theory)"

Previous post: "Justified Expectation of Pleasant Surprises"


Yes. You and Eric Drexler and a few others have sufficiently convinced me that I absolutely look forward to the future. I'm not sure if I did already (I had vague hopes, though), but now I do. Thanks, I guess. :)

My optimism about the future has always been an induction from historical trends. It doesn't require mentioning AI, or most of the other fun topics discussed here. I would define this precisely as having the justified expectation of pleasant surprise. I don't know the specifics of how the future looks, but I can generalize with some confidence that it is likely to be better than today (for people on average, if not necessarily me in particular). If you think the trend now is positive, but the result of this trend somewhere in the future is quite negative, then you have a story to tell about why. And as with all stories about the future, you are likely wrong.

I find it hard to conceive of falling into misery because I do not live in a future society where an all-powerful FAI seeking the best interests of each individual and of the species governs perfectly. I am glad that I do not have to work as a subsistence peasant, at risk of starvation if the harvest is poor, and I have some envy of celebrities that I see.

I think a lot of misery comes from wanting the World to be other than it is, without the power to change it. Everybody knows it: I need courage to change what I can change, serenity to accept what I can't change, wisdom to know the difference. It is not easy, but it is simple (this last sentence comes from House MD).

I'd add that one of the strongest imagination-seducers possible is wanting the world to be different in a microcosmic, personal way that one is still not able to deal with. (For example, I have learned that while global-scale worldbuilding is fine, I need to stop worldbuilding on a subcultural or regional-cultural level unless I actually am going to publish fiction.)

I feel that a lot of your discussion about Fun Theory is a bit too abstract to have an emotional appeal in terms of looking forward to the future. I think for at least some people (even smart, rational ones), it may be more effective to point out the possibility of more concrete, primitive, "monkey with a million bananas" type scenarios, even if those are not the most likely to actually occur.

Even if you know that the future probably won't be specifically like that, you can imagine how good that would be in a more direct and emotionally compelling way, and then reason that a Fun Theory compatible future would be even better than that, even if you can't visualize what it would be like so clearly.

That is good for those who indirectly work on AI.

Those who do it directly cannot afford the cost of mis-representation.

Still, great idea.

"monkey with a million bananas"

Has anyone done this experiment? Actually put a monkey in an environment with the equivalent of a million bananas (unlimited food, uncontested mates, whatever puzzles we can think of to make life interesting in the absence of pain and conflict, etc.) and watched how it acted over a period of years for signs of boredom and despair?

Might be useful information about the real effects of certain kinds of "Utopias." Also might be horribly unethical, depending on how you feel about primate experimentation.

If giving a monkey some bananas is wrong, I don't want to be right.

I meant that in the context of the Fun Theory sequence, which I'm currently reading through. It seems to me to implicitly predict that a monkey given unlimited bananas, mates, etc., ought to turn out surprisingly unhappy, at least to the extent that its psychology is not too dissimilar from a human's. It would be interesting to see if that prediction is correct.

My soul got sucked out a long time ago.

[whine] I wanna be a wirehead! Forget eudaimonia, I just wanna feel good all the time and not worry about anything! [/whine]

This is an interesting thought. I started out a heroin addict with a passing interest in wireheading, which my atheist/libertarian/programmer/male brain could envision as being clearly possible, and the 'perfect' version of heroin (which has many downsides even if you are able to sustain a 3 year habit without slipping into withdrawal a single time, as I was). I saw pleasure as being the only axiomatic good, and dreamed of co-opting this simple reward mechanism for arbitrarily large amounts of pleasure. This dream led me here (I believe the lesswrong wiki article is at least on the front page of the Google results for 'wireheading'), and when I first read the fun theory sequence, I was skeptical that we would end up actually wanting something other than wireheading. Oh, these foolish AI programmers who have never felt the sheer blaze of pleasure of a fat shot of heroin, erupting like an orgasmic volcano from their head to their toes... No, but I did at least realize that I could bring about wireheading sooner by getting off heroin and starting to study neuroscience at my local (luckily, neuroscience specialized) university.

Once I got clean (which took about two weeks of a massively uncomfortable taper), I realized two things: the main difference between a life of heroin and a life without it is having choices. A heroin addict satisfies his food and shelter needs in the cheapest way possible and then spends the rest of his money on heroin. The opportunity cost of something is readily available to your mind, "I could get this much heroin with the money instead", instead of being a vague notion of all the other things you could have bought instead. There is something to be said for this simplicity. Which leads me to the second realization: pleasure is definitely relative. We experience pleasure when we go from less pleasure to more pleasure, not as an absolute value of pleasure. The benefit of heroin is that it's a very sharp spike in pleasure for a minute or two, which then subsides into a state where you probably are experiencing larger absolute pleasure, but you can't actually tell the difference. Eventually, some 6-8 hours later, you start to feel cold, clammy, feverish; you definitely experience pain. I remember times when I'd be at 12 hours since my last shot, and feeling very bad, but I would hold out a little longer just so that when I finally DID dose, the difference between the past state of pleasure and the current state would be as large as possible.

In fact, being in the absolute hell of day 2 withdrawal, 24-48 hours since last dose, puking everywhere and defecating everywhere and lying in a puddle of sweat, and then injecting a dose which brought me up to baseline over the course of five to ten seconds, without any pleasure in the absolute sense, was just as pleasurable as going from baseline to a near-overdose.

I am glad to be free of that terrible addiction, but it taught me such straightforward lessons about how pleasure actually works that I think studying the behavior of, say, heroin-addicted primates, would be useful.

"So how much light is that, exactly? Ah, now that's the issue.

I'll start with a simple and genuine question: Is what I've already said, enough?"

  • Enough for what purpose? There are two distinct purposes that I can think of. Firstly, there is the task of convincing some "elite" group of potential FAI coders that the task is worth doing. I think that enough has been said for this one. How likely is this strategy to work? Well,

Secondly, there is the task of convincing a nontrivial fraction of "ordinary" people in developed countries that the humanity+ movement is worth getting excited about, worth voting for, worth funding. This might be a worthy goal if you think that the path of technological development will be significantly influenced by public opinion and politics. For this task, abstract descriptions are not enough; people will need specifics. If you tell John and Jane Public that the AI will implement their CEV, they'll look at you like you're nuts. If you tell them that this will, as a special case, solve almost all of the problems that they currently worry about - like their health, their stressed lifestyles, the problems that they have with their marriage, the dementia that grandpa is succumbing to, etc. - then you might be on to something.

I always got emotionally invested in abstract causes, so it was enough for me to perceive the notion of a way to make things better, and not just somewhat better, but as good as it gets. About two years ago, when the exhausting routine of university was at an end, I got generally bored, and started idly exploring various potential hobbies: learning Japanese, piano, and the foundations of mathematics. I was preparing to settle down in the real world. The idea of AGI, and later FAI (understood and embraced only starting this summer, despite the availability of all the material), as a perceived ideal target gave focus to my life and linked the intrinsic worth of the cause to natural enjoyment in the process of research. A new perspective didn't suck out my soul, but nurtured it. I don't spend time contemplating specific stories of the better; I need to understand more of the basic concepts in order to have a chance of seeing any specifics about the structure of goodness. For now, whenever I see a specific story, I prefer an abstract expectation of there being a surprising better way quite unlike the one depicted.

I'm currently reading Global Catastrophic Risks by Nick Bostrom and Cirkovic, and it's pretty scary to think of how arbitrarily everything could go bad and we could all live through very hard times indeed.

That kind of reading usually keeps me from having my soul sucked into this imagined great future...

Firstly, there is the task of convincing some "elite" group of potential FAI coders that the task is worth doing.

Not all the object-level work that needs to be done is (or requires the same skills as) FAI programming – not to mention the importance of donors and advocates.

This might be a worthy goal if you think that the path of technological development will be significantly influenced by public opinion and politics.

...in a desirable way. Effective SL3 "pro-technology" activism seems like it would be very dangerous. I doubt that advocacy (or any activity other than donation) by people who need detailed predictions to sustain their motivation (not just initiate it) has any significant chance of being useful.

@ nick t: I'd be interested to see the justification for the claim that pro-technology activism would be very dangerous. Personally, I'm not convinced either way. If it turns out that you're right, then I'd say that this little series on fun theory has probably gone far enough.

One argument in favor of pro-rationalist/technology activism is that we cannot rely upon technology being conducive to SIAI or some other small group keeping control of things. Robin has argued for a "distributed" singularity based on economic interdependence, probably via a whole host of BCI and/or uploading efforts, with the main players being corporations and governments. In this scenario, a small elite group of singularitarian activists would basically be spectators. A much larger global H+ movement would have influence. A possible counterargument is that such a large organization would make bad decisions and have a negative influence due to the poor average quality of its members.

I really liked this post. Not sure if you meant it this way, but for me it mostly applies to imagining / fantasizing about the future. Some kinds of imagining are motivating, and they tend to be more general. The ones you describe as "soul-sucking" are more like an Experience Machine, or William Shatner's Tek (if you've had the misfortune to read any of his books).

For me this brings up the distinction between happiness (Fun) and pleasure. Soul-sucking is very pleasurable, but it is not very Fun. There is no richness, no striving, no intricacy - just getting what you want is boring.

ShardPhoenix - I agree that concreteness is important, but there is still a key distinction between concrete scenarios that motivate people to work to bring them about, and concrete scenarios that people respond to by drifting off into imagination and thinking "yeah, that would be fun."

I briefly remarked on this phenomenon earlier, and someone said, "Define 'suck out your soul'." Well, it's mainly a tactile thing: you can practically feel the pulling sensation, if your dreams wander too far into the Future. It's like something out of H. P. Lovecraft: The Call of Eutopia. A professional hazard of having to stare out into vistas that humans were meant to gaze upon, and knowing a little too much about the lighter side of existence.

Interestingly enough, Lovecraft wrote a story that I think captures this phenomenon quite well. See also this story, in which Lovecraft briefly revisits the protagonist of the original story, and elaborates on his fate. Of note, both stories deal with seduction by memories of an idealized past, rather than imaginings of an idealized future, but I think that the same general principle applies.

Very interesting article, and a real "ouch" moment for me when I realised that all my escapism growing up had exactly this effect. By becoming engaged with fictional worlds through films, books and games you can start to disengage from the real world, finding nothing so interesting and vibrant in it (this is a particular risk if you are young and haven't found activities and people you value in reality yet). The scary thing was when I realised that the characters in my books felt more real than people in reality. If you have trouble connecting with people, books offer ready-made connections that can distract you from getting the social skills you need to form meaningful relationships in real life.

To an extent I think I am still prey to this, so does anyone have advice on ways to balance your escapist pleasures so you can still enjoy them without losing the vibrancy of real life?

It occurs to me that even more seductive than a future world might be a plausible, more formidable self. (It suddenly occurs to me why many video game player characters are either conspicuously characterless, like Valve protagonists, or rather unlikable people: the "why do I have to play as this jerk?" problem.)