If I were to make a short list of the most important human qualities—
—and yes, this is a fool's errand, because human nature is immensely complicated, and we don't even notice all the tiny tweaks that fine-tune our moral categories, and who knows how our attractors would change shape if we eliminated a single human emotion—
—but even so, if I had to point to just a few things and say, "If you lose just one of these things, you lose most of the expected value of the Future; but conversely if an alien species independently evolved just these few things, we might even want to be friends"—
—then the top three items on the list would be sympathy, boredom and consciousness.
Boredom is a subtle-splendored thing. You wouldn't want to get bored with breathing, for example—even though it's the same motions over and over and over and over again for minutes and hours and years and decades.
Now I know some of you out there are thinking, "Actually, I'm quite bored with breathing and I wish I didn't have to," but then you wouldn't want to get bored with switching transistors.
According to the human value of boredom, some things are allowed to be highly repetitive without being boring—like obeying the same laws of physics every day.
Conversely, other repetitions are supposed to be boring, like playing the same level of Super Mario Brothers over and over and over again until the end of time. And let us note that if the pixels in the game level have a slightly different color each time, that is not sufficient to prevent it from being "the same damn thing, over and over and over again".
Once you take a closer look, it turns out that boredom is quite interesting.
One of the key elements of boredom was suggested in "Complex Novelty": If your activity isn't teaching you insights you didn't already know, then it is non-novel, therefore old, therefore boring.
But this doesn't quite cover the distinction. Is breathing teaching you anything? Probably not at this moment, but you wouldn't want to stop breathing. Maybe you'd want to stop noticing your breathing, which you'll do as soon as I stop drawing your attention to it.
I'd suggest that the repetitive activities which are allowed to not be boring fall into two categories:
- Things so extremely low-level, or with such a small volume of possibilities, that you couldn't avoid repeating them even if you tried; but which are required to support other non-boring activities. You know, like breathing, or obeying the laws of physics, or cell division—that sort of thing.
- Things so high-level that their "stability" still implies an immense space of specific possibilities, yet which are tied up with our identity or our values. Like thinking, for example.
Let me talk about that second category:
Suppose you were unraveling the true laws of physics and discovering all sorts of neat stuff you hadn't known before... when suddenly you got bored with "changing your beliefs based on observation". You are sick of anything resembling "Bayesian updating"—it feels like playing the same video game over and over. Instead you decide to believe anything said on 4chan.
Or to put it another way, suppose that you were something like a sentient chessplayer—a sentient version of Deep Blue. Like a modern human, you have no introspective access to your own algorithms. Each chess game appears different—you play new opponents and steer into new positions, composing new strategies, avoiding new enemy gambits. You are content, and not at all bored; you never appear to yourself to be doing the same thing twice—it's a different chess game each time.
But now, suddenly, you gain access to, and understanding of, your own chess-playing program. Not just the raw code; you can monitor its execution. You can see that it's actually the same damn code, doing the same damn thing, over and over and over again. Run the same damn position evaluator. Run the same damn sorting algorithm to order the branches. Pick the top branch, again. Extend it one position forward, again. Call the same damn subroutine and start over.
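The repetition such a player would suddenly see can be sketched as a toy negamax loop (a hypothetical miniature, not Deep Blue's actual code; the tuple-based tree encoding and the mobility heuristic are invented for illustration):

```python
# Toy game-tree search: the same few subroutines, called over and over.
# Positions are invented: an int is a leaf (score for the side to move),
# a tuple holds the positions reachable in one move.

def static_eval(position):
    # The same position evaluator, run at every node.
    if isinstance(position, int):
        return position        # leaf: exact score for the side to move
    return len(position)       # interior node: mobility as a crude proxy

def search(position, depth):
    # The same subroutine, entered again and again.
    if depth == 0 or isinstance(position, int):
        return static_eval(position)
    # The same sorting algorithm to order the branches.
    branches = sorted(position, key=static_eval, reverse=True)
    # Extend each branch one position forward and start over (negamax).
    return max(-search(b, depth - 1) for b in branches)
```

Every "different" game is this one loop with different numbers flowing through it.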
I have a small unreasonable fear, somewhere in the back of my mind, that if I ever do fully understand the algorithms of intelligence, it will destroy all remaining novelty—no matter what new situation I encounter, I'll know I can solve it just by being intelligent, the same damn thing over and over. All novelty will be used up, all existence will become boring, the remaining differences no more important than shades of pixels in a video game. Other beings will go about in blissful unawareness, having been steered away from studying this forbidden cognitive science. But I, having already thrown myself on the grenade of AI, will face a choice between eternal boredom, or excision of my forbidden knowledge and all the memories leading up to it (thereby destroying my existence as Eliezer, more or less).
Now this, mind you, is not my predictive line of maximum probability. Understanding abstractly what rough sort of work the brain is doing doesn't let you monitor its detailed execution as a boring repetition. I already know about Bayesian updating, yet I haven't become bored with the act of learning. And a self-editing mind can quite reasonably exclude certain levels of introspection from boredom, just as breathing can legitimately be excluded from boredom. (Maybe these top-level cognitive algorithms ought also to be excluded from perception—if something is stable, why bother seeing it all the time?)
No, it's just a cute little nightmare, which I thought made a nice illustration of this proposed principle:
That the very top-level things (like Bayesian updating, or attaching value to sentient minds rather than paperclips) and the very low-level things (like breathing, or switching transistors) are the things we shouldn't get bored with. And the mid-level things between, are where we should seek novelty. (To a first approximation, the novel is the inverse of the learned; it's something with a learnable element not yet covered by previous insights.)
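One crude way to operationalize "the novel is the inverse of the learned" is to score each observation by how surprising it is under a predictive model built from everything seen so far (a toy sketch; the frequency model and its smoothing are invented for illustration, not a claim about how boredom is actually computed):

```python
import math
from collections import Counter

def surprisal_stream(symbols, vocab_size):
    """Score each symbol by its surprisal (in bits) under a Laplace-smoothed
    frequency model of all previous symbols. Repetition drives the score
    toward zero; a genuinely new symbol spikes it back up."""
    counts, total, scores = Counter(), 0, []
    for s in symbols:
        p = (counts[s] + 1) / (total + vocab_size)  # probability before seeing s
        scores.append(-math.log2(p))
        counts[s] += 1
        total += 1
    return scores
```

On the stream "aaab" with a two-symbol vocabulary, the first three scores fall steadily while the final "b" jumps well above them: the learnable element not yet covered by previous observations.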
Now this is probably not exactly how our current emotional circuitry of boredom works. That, I expect, would be hardwired relative to various sensory-level definitions of predictability, surprisingness, repetition, attentional salience, and perceived effortfulness.
But this is Fun Theory, so we are mainly concerned with how boredom should work in the long run.
Humanity acquired boredom the same way as we acquired the rest of our emotions: the godshatter idiom whereby evolution's instrumental policies became our own terminal values, pursued for their own sake: sex is fun even if you use birth control. Evolved aliens might, or might not, acquire roughly the same boredom in roughly the same way.
Do not give in to the temptation of universalizing anthropomorphic values, and think: "But any rational agent, regardless of its utility function, will face the exploration/exploitation tradeoff, and will therefore occasionally get bored with exploiting, and go exploring."
Our emotion of boredom is a way of exploring, but not the only way for an ideal optimizing agent.
A steady trickle of mid-level novelty is a human terminal value, not something we pursue for the sake of something else. Evolution may originally have given it to us so that we would explore as well as exploit; but now we explore for its own sake. That trickle of novelty is a terminal value to us, not the most efficient instrumental method for exploring and exploiting.
Suppose you were dealing with something like an expected paperclip maximizer—something that might use quite complicated instrumental policies, but in the service of a utility function that we would regard as simple, with a single term compactly defined.
Then I would expect the exploration/exploitation tradeoff to go something like as follows: The paperclip maximizer would assign some resources to cognition that searched for more efficient ways to make paperclips, or harvest resources from stars. Other resources would be devoted to the actual harvesting and paperclip-making. (The paperclip-making might not start until after a long phase of harvesting.) At every point, the most efficient method yet discovered—for resource-harvesting, or paperclip-making—would be used, over and over and over again. It wouldn't be boring, just maximally instrumentally efficient.
In the beginning, lots of resources would go into preparing for efficient work over the rest of time. But as cognitive resources yielded diminishing returns in the abstract search for efficiency improvements, less and less time would be spent thinking, and more and more time spent creating paperclips. By whatever the most efficient known method, over and over and over again.
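That front-loaded schedule can be caricatured in a few lines (a toy greedy model; the diminishing-returns function and the step structure are invented, and a real optimiser would plan far more cleverly):

```python
def allocate(total_steps, improvement):
    """Each step either researches a better method or exploits the best one.
    `improvement(r)` is the efficiency gained by the r-th unit of research,
    assumed to diminish. Research pays off only through future exploitation."""
    efficiency, research_units, output = 1.0, 0, 0.0
    for step in range(total_steps):
        gain = improvement(research_units)
        remaining = total_steps - step - 1
        if gain * remaining > efficiency:   # thinking still beats making
            research_units += 1
            efficiency += gain
        else:                               # same best method, again
            output += efficiency
    return output, research_units
```

With `improvement(r) = 1 / (r + 1)` and ten steps, the first two steps go to research and the remaining eight each produce paperclips at the improved rate: thinking stops early, and the rest is repetition.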
(Do human beings get less easily bored as we grow older, more tolerant of repetition, because any further discoveries are less valuable, because we have less time left to exploit them?)
If we run into aliens who don't share our version of boredom—a steady trickle of mid-level novelty as a terminal preference—then perhaps every alien throughout their civilization will just be playing the most exciting level of the most exciting video game ever discovered, over and over and over again. Maybe with nonsentient AIs taking on the drudgework of searching for a more exciting video game. After all, without an inherent preference for novelty, exploratory attempts will usually have less expected value than exploiting the best policy previously encountered. And that's if you explore by trial at all, as opposed to using more abstract and efficient thinking.
Or if the aliens are rendered non-bored by seeing pixels of a slightly different shade—if their definition of sameness is more specific than ours, and their boredom less general—then from our perspective, most of their civilization will be doing the human::same thing over and over again, and hence, be very human::boring.
Or maybe if the aliens have no fear of life becoming too simple and repetitive, they'll just collapse themselves into orgasmium.
And if our version of boredom is less strict than that of the aliens, maybe they'd take one look at one day in the life of one member of our civilization, and never bother looking at the rest of us. From our perspective, their civilization would be needlessly chaotic, and so entropic, lower in what we regard as quality; they wouldn't play the same game for long enough to get good at it.
But if our versions of boredom are similar enough—terminal preference for a stream of mid-level novelty defined relative to learning insights not previously possessed—then we might find our civilizations mutually worthy of tourism. Each new piece of alien art would strike us as lawfully creative, high-quality according to a recognizable criterion, yet not like the other art we've already seen.
It is one of the things that would make our two species ramen rather than varelse, to invoke the Hierarchy of Exclusion. And I've never seen anyone define those two terms well, including Orson Scott Card who invented them; but it might be something like "aliens you can get along with, versus aliens for which there is no reason to bother trying".
Part of The Fun Theory Sequence
Next post: "Sympathetic Minds"
Previous post: "Dunbar's Function"
You get boredom when you are attempting to efficiently explore a mostly-unexplored search space - or at least you get a tendency to avoid repeatedly sampling the same region of the search space, which is boredom's primary behavioural manifestation.
In the post's example of an optimiser that doesn't get bored, that happens because the search space it is exploring has been exhausted.
That is simply a property of the particular example selected. It isn't a general property of efficient optimisers, and it doesn't mean that efficient optimisers don't exhibit boredom. They do exhibit boredom - when exploring mostly-unexplored search spaces.
IMO, boredom is best seen as being a universal instrumental value - and not as an unfortunate result of "universalizing anthropomorphic values".
Update 2011-07-27: Yudkowsky responds to roughly this point - 39 minutes in here. He claims:
He doesn't give any references, or much of a supporting argument - it is more of an assertion. Maybe he thinks the material about paperclips in this post is sufficiently convincing. Our civilisation maximises entropy - not paperclips - which hardly seems much more interesting. Also, it isn't clear that Yudkowsky realises the extent to which boredom is implemented as an instrumental value in modern humans. Nature can't easily wire in some kind of universal "boredom rate" - since boredom is task-specific (we mostly don't get bored of sex and food) and context-specific. That's not to say it is entirely instrumental - but it is partly instrumental in humans.
Yudkowsky then goes on immediately to say:
...so it seems to be a significant point.
My position is that we had better wind up approximating the instrumental value of boredom (which we probably do pretty well today anyway - by the wonder of natural selection) - or we are likely to be building a rather screwed-up civilisation. There is no good reason why this would lead to a "worthless, valueless future" - which is why Yudkowsky fails to provide one.
No it doesn't. It tries to maximize fun. It might maximize entropy as a side effect, but saying that we act to maximize entropy is as ludicrous as saying cows act to maximize atmospheric methane content. You're confusing a side-effect with the real goal.
A constant theme I've noticed in your posts is that you take some (usually evolutionary) trend that is occurring in our society or history and then act as if that trend is an actual conscious goal of human ...