Previously in series: Justified Expectation of Pleasant Surprises
"Vagueness" usually has a bad name in rationality—connoting skipped steps in reasoning and attempts to avoid falsification. But a rational view of the Future should be vague, because the information we have about the Future is weak. Yesterday I argued that justified vague hopes might also be better hedonically than specific foreknowledge—the power of pleasant surprises.
But there's also a more severe warning that I must deliver: It's not a good idea to dwell much on imagined pleasant futures, since you can't actually dwell in them. It can suck the emotional energy out of your actual, current, ongoing life.
Epistemically, we know the Past much more specifically than the Future. But also on emotional grounds, it's probably wiser to compare yourself to Earth's past, so you can see how far we've come and how much better we're doing, rather than comparing your life to an imagined future and thinking about how awful you've got it Now.
Having set out to explain George Orwell's observation that no one can seem to write about a Utopia where anyone would want to live—having laid out the various Laws of Fun that I believe are being violated in these dreary Heavens—I am now explaining why you shouldn't apply this knowledge to invent an extremely seductive Utopia and write stories set there. That may suck out your soul like an emotional vacuum cleaner.
I briefly remarked on this phenomenon earlier, and someone said, "Define 'suck out your soul'." Well, it's mainly a tactile thing: you can practically feel the pulling sensation, if your dreams wander too far into the Future. It's like something out of H. P. Lovecraft: The Call of Eutopia. A professional hazard of having to stare out into vistas that humans were meant to gaze upon, and knowing a little too much about the lighter side of existence.
But for the record, I will now lay out the components of "soul-sucking", that you may recognize the bright abyss and steer your thoughts away:
- Your emotional energy drains away into your imagination of Paradise:
  - You find yourself thinking of it more and more often.
  - The actual challenges of your current existence start to seem less interesting, less compelling; you think of them less and less.
  - Comparing everything to your imagined perfect world heightens your annoyances and diminishes your pleasures.
  - You go into an affective death spiral around your imagined scenario; you're reluctant to admit anything bad could happen on your assumptions, and you find more and more nice things to say.
- Your mind begins to forget the difference between fiction and real life:
  - You originally made many arbitrary or iffy choices in constructing your scenario. You forget that the Future is actually more unpredictable than this, and that you made your choices using limited foresight and merely human optimizing ability.
  - You forget that, in real life, at least some of your amazing good ideas are guaranteed not to work as well as they do in your imagination.
  - You start wanting the exact specific Paradise you imagined, and worrying about the disappointment if you don't get that exact thing.
Hope can be a dangerous thing. And when you've just been hit hard—at the moment when you most need hope to keep you going—that's also when the real world seems most painful, and the world of imagination becomes most seductive.
It's a balancing act, I think. One needs enough Fun Theory to truly and legitimately justify hope in the future. But not a detailed vision so seductive that it steals emotional energy from real life, and from the real challenge of creating that future. You need "a light at the end of the secular rationalist tunnel," as Roko put it, but you don't want people to drift away from their bodies into that light.
So how much light is that, exactly? Ah, now that's the issue.
I'll start with a simple and genuine question: Is what I've already said, enough?
Is knowing the abstract fun theory and being able to pinpoint the exact flaws in previous flawed Utopias, enough to make you look forward to tomorrow? Is it enough to inspire a stronger will to live? To dispel worries about a long dark tea-time of the soul? Does it now seem—on a gut level—that if we could really build an AI and really shape it, the resulting future would be very much worth staying alive to see?
Part of The Fun Theory Sequence
Next post: "The Uses of Fun (Theory)"
Previous post: "Justified Expectation of Pleasant Surprises"
I have always gotten emotionally invested in abstract causes, so it was enough for me to perceive the notion of a way to make things better, and not just somewhat better, but as good as it gets. About two years ago, when the exhausting routine of University came to an end, I grew generally bored and started idly exploring various potential hobbies: learning Japanese, piano, and the foundations of mathematics. I was preparing to settle down in the real world. The idea of AGI, and later FAI (understood and embraced only starting this summer, despite the availability of all the material), perceived as an ideal target, gave focus to my life and linked the intrinsic worth of the cause to the natural enjoyment of the process of research. The new perspective didn't suck out my soul; it nurtured it. I don't spend time contemplating specific stories of the better: I need to understand more of the basic concepts before I have any chance of seeing specifics about the structure of goodness. For now, whenever I see a specific story, I prefer an abstract expectation that there is a surprising better way, quite unlike the one depicted.