Continuous Improvement
When is it adaptive for an organism to be satisfied with what it has? When does an organism have enough children and enough food? The answer to the second question, at least, is obviously "never" from an evolutionary standpoint. The first proposition might be true if the reproductive risks of all available options exceed their reproductive benefits. In general, though, it is a rare organism in a rare environment whose reproductively optimal strategy is to rest with a smile on its face, feeling happy.
To a first approximation, we might say something like "The evolutionary purpose of emotion is to direct the cognitive processing of the organism toward achievable, reproductively relevant goals". Achievable goals are usually located in the Future, since you can't affect the Past. Memory is a useful trick, but learning the lesson of a success or failure isn't the same goal as the original event—and usually the emotions associated with the memory are less intense than those of the original event.
So, the way organisms and brains are built right now, "true happiness" might be a chimera, a carrot dangled in front of us to make us take the next step, and then yanked out of our reach as soon as we achieve our goals.
This hypothesis is known as the hedonic treadmill.
The famous pilot studies in this domain demonstrated, e.g., that past lottery winners' stated subjective well-being was not significantly greater than that of an average person after a few years, or even months. Conversely, six months after the accident, victims with severed spinal cords were not as happy as before—around 0.75 sd below control groups—but they'd still adjusted much more than they had expected to adjust.
This being the transhumanist form of Fun Theory, you might perhaps say: "Let's get rid of this effect. Just delete the treadmill, at least for positive events."
Serious Stories
Every Utopia ever constructed—in philosophy, fiction, or religion—has been, to one degree or another, a place where you wouldn't actually want to live. I am not alone in this important observation: George Orwell said much the same thing in "Why Socialists Don't Believe In Fun", and I expect that many others said it earlier.
If you read books on How To Write—and there are a lot of books out there on How To Write, because amazingly a lot of book-writers think they know something about writing—these books will tell you that stories must contain "conflict".
That is, the more lukewarm sort of instructional book will tell you that stories contain "conflict". But some authors speak more plainly.
"Stories are about people's pain." —Orson Scott Card

"Every scene must end in disaster." —Jack Bickham
In the age of my youthful folly, I took for granted that authors were excused from the search for true Eutopia, because if you constructed a Utopia that wasn't flawed... what stories could you write, set there? "Once upon a time they lived happily ever after." What use would it be for a science-fiction author to try to depict a positive Singularity, when a positive Singularity would be...
...the end of all stories?
It seemed like a reasonable framework with which to examine the literary problem of Utopia, but something about that final conclusion produced a quiet, nagging doubt.
Emotional Involvement
Followup to: Evolutionary Psychology, Thou Art Godshatter, Existential Angst Factory
Can your emotions get involved in a video game? Yes, but not much. Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it's probably not remotely close to the feeling of triumph you'd get from saving the world in real life. I've played video games powerful enough to bring tears to my eyes, but they still aren't as powerful as the feeling of significantly helping just one single real human being.
Because when the video game is finished, and you put it away, the events within the game have no long-term consequences.
Maybe if you had a major epiphany while playing... But even then, only your thoughts would matter; the mere fact that you saved the world, inside the game, wouldn't count toward anything in the continuing story of your life.
Thus fails the Utopia of playing lots of really cool video games forever. Even if the games are difficult, novel, and sensual, this is still the idiom of life chopped up into a series of disconnected episodes with no lasting consequences. A life in which equality of consequences is forcefully ensured, or in which little is at stake because all desires are instantly fulfilled without individual work—these likewise will appear as flawed Utopias of dispassion and angst. "Rich people with nothing to do" syndrome. A life of disconnected episodes and unimportant consequences is a life of weak passions, of emotional uninvolvement.
Our emotions, for all the obvious evolutionary reasons, tend to associate to events that had major reproductive consequences in the ancestral environment, and to invoke the strongest passions for events with the biggest consequences:
Falling in love... birthing a child... finding food when you're starving... getting wounded... being chased by a tiger... your child being chased by a tiger... finally killing a hated enemy...
The Uses of Fun (Theory)
"But is there anyone who actually wants to live in a Wellsian Utopia? On the contrary, not to live in a world like that, not to wake up in a hygienic garden suburb infested by naked schoolmarms, has actually become a conscious political motive. A book like Brave New World is an expression of the actual fear that modern man feels of the rationalised hedonistic society which it is within his power to create."
—George Orwell, Why Socialists Don't Believe in Fun
There are three reasons I'm talking about Fun Theory, some more important than others:
- If every picture ever drawn of the Future looks like a terrible place to actually live, it might tend to drain off the motivation to create the future. It takes hope to sign up for cryonics.
- People who leave their religions, but don't familiarize themselves with the deep, foundational, fully general arguments against theism, are at risk of backsliding. Fun Theory lets you look at our present world, and see that it is not optimized even for considerations like personal responsibility or self-reliance. It is the fully general reply to theodicy.
- Going into the details of Fun Theory helps you see that eudaimonia is actually complicated—that there are a lot of properties necessary for a mind to lead a worthwhile existence. Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.
Free to Optimize
Stare decisis is the legal principle which binds courts to follow precedent, retrace the footsteps of other judges' decisions. As someone previously condemned to an Orthodox Jewish education, where I gritted my teeth at the idea that medieval rabbis would always be wiser than modern rabbis, I completely missed the rationale for stare decisis. I thought it was about respect for the past.
But shouldn't we presume that, in the presence of science, judges closer to the future will know more—have new facts at their fingertips—which enable them to make better decisions? Imagine if engineers respected the decisions of past engineers, not as a source of good suggestions, but as a binding precedent!—That was my original reaction. The standard rationale behind stare decisis came as a shock of revelation to me; it considerably increased my respect for the whole legal system.
This rationale is jurisprudence constante: The legal system must above all be predictable, so that people can execute contracts or choose behaviors knowing the legal implications.
Judges are not necessarily there to optimize, like an engineer. The purpose of law is not to make the world perfect. The law is there to provide a predictable environment in which people can optimize their own futures.
I was amazed at how a principle that at first glance seemed so completely Luddite, could have such an Enlightenment rationale. It was a "shock of creativity"—a solution that ranked high in my preference ordering and low in my search ordering, a solution that violated my previous surface generalizations. "Respect the past just because it's the past" would not have easily occurred to me as a good solution for anything.
There's a peer commentary in Evolutionary Origins of Morality which notes in passing that "other things being equal, organisms will choose to reward themselves over being rewarded by caretaking organisms". It's cited as the Premack principle, but the actual Premack principle looks to be something quite different, so I don't know if this is a bogus result, a misremembered citation, or a nonobvious derivation. If true, it's definitely interesting from a fun-theoretic perspective.
Optimization is the ability to squeeze the future into regions high in your preference ordering. Living by my own strength, means squeezing my own future—not perfectly, but still being able to grasp some of the relation between my actions and their consequences. This is the strength of a human.
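One illustrative way to make "squeezing the future" quantitative—a sketch with assumed numbers and an assumed formula, not anything derived above—is to count how small a preference-ranked slice of outcome space you land in, and take the log:

```python
import math

def optimization_bits(outcomes, achieved, value):
    # Fraction of possible outcomes ranked at least as high as the one
    # actually hit; fewer such outcomes = tighter squeeze = more bits.
    at_least_as_good = sum(1 for o in outcomes if value(o) >= value(achieved))
    return math.log2(len(outcomes) / at_least_as_good)

# 16 equally likely outcomes, preference = numeric value.
outcomes = list(range(16))
top_bits = optimization_bits(outcomes, 15, value=lambda o: o)   # hit the best: 4 bits
half_bits = optimization_bits(outcomes, 8, value=lambda o: o)   # hit the top half: 1 bit
```

By this toy measure, a helper who squeezes your future into the good regions for you leaves correspondingly fewer bits for you to squeeze yourself.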
If I'm being helped, then some other agent is also squeezing my future—optimizing me—in the same rough direction that I try to squeeze myself. This is "help".
A human helper is unlikely to steer every part of my future that I could have steered myself. They're not likely to have already exploited every connection between action and outcome that I can myself understand. They won't be able to squeeze the future that tightly; there will be slack left over, that I can squeeze for myself.
We have little experience with being "caretaken" across any substantial gap in intelligence; the closest thing that human experience provides us with is the idiom of parents and children. Human parents are still human; they may be smarter than their children, but they can't predict the future or manipulate the kids in any fine-grained way.
Even so, it's an empirical observation that some human parents do help their children so much that their children don't become strong. It's not that there's nothing left for their children to do, but with a hundred million dollars in a trust fund, they don't need to do much—their remaining motivations aren't strong enough. Something like that depends on genes, not just environment—not every overhelped child shrivels—but conversely it depends on environment too, not just genes.
So, in considering the kind of "help" that can flow from relatively stronger agents to relatively weaker agents, we have two potential problems to track:
1. Help so strong that it optimizes away the links between the desirable outcome and your own choices.
2. Help that is believed to be so reliable that it takes off the psychological pressure to use your own strength.
Since (2) revolves around belief, could you just lie about how reliable the help was? Pretend that you're not going to help when things get bad—but then if things do get bad, you help anyway? That trick didn't work too well for Alan Greenspan and Ben Bernanke.
A superintelligence might be able to pull off a better deception. But in terms of moral theory and eudaimonia—we are allowed to have preferences over external states of affairs, not just psychological states. This applies to "I want to really steer my own life, not just believe that I do", just as it applies to "I want to have a love affair with a fellow sentient, not just a puppet that I am deceived into thinking sentient". So if we can state firmly from a value standpoint that we don't want to be fooled this way, then building an agent which respects that preference is a mere matter of Friendly AI.
Modify people so that they don't relax when they believe they'll be helped? I usually try to think of how to modify environments before I imagine modifying any people. It's not that I want to stay the same person forever; but the issues are rather more fraught, and one might wish to take it slowly, at some eudaimonic rate of personal improvement.
(1), though, is the most interesting issue from a philosophicalish standpoint. It impinges on the confusion named "free will", which I have already untangled; see the posts referenced at top, if you're recently joining OB.
Let's say that I'm an ultrapowerful AI, and I use my knowledge of your mind and your environment to forecast that, if left to your own devices, you will make $999,750. But this does not satisfice me; it so happens that I want you to make at least $1,000,000. So I hand you $250, and then you go on to make $999,750 as you ordinarily would have.
How much of your own strength have you just lived by?
The first view would say, "I made 99.975% of the money; the AI only helped 0.025% worth."
The second view would say, "Suppose I had entirely slacked off and done nothing. Then the AI would have handed me $1,000,000. So my attempt to steer my own future was an illusion; my future was already determined to contain $1,000,000."
Someone might reply, "Physics is deterministic, so your future is already determined no matter what you or the AI does—"
But the second view interrupts and says, "No, you're not confusing me that easily. I am within physics, so in order for my future to be determined by me, it must be determined by physics. The Past does not reach around the Present and determine the Future before the Present gets a chance—that is mixing up a timeful view with a timeless one. But if there's an AI that really does look over the alternatives before I do, and really does choose the outcome before I get a chance, then I'm really not steering my own future. The future is no longer counterfactually dependent on my decisions."
At which point the first view butts in and says, "But of course the future is counterfactually dependent on your actions. The AI gives you $250 and then leaves. As a physical fact, if you didn't work hard, you would end up with only $250 instead of $1,000,000."
To which the second view replies, "I one-box on Newcomb's Problem, so my counterfactual reads 'if my decision were to not work hard, the AI would have given me $1,000,000 instead of $250'."
"So you're saying," says the first view, heavy with sarcasm, "that if the AI had wanted me to make at least $1,000,000 and it had ensured this through the general policy of handing me $1,000,000 flat on a silver platter, leaving me to earn $999,750 through my own actions, for a total of $1,999,750—that this AI would have interfered less with my life than the one who just gave me $250."
The second view thinks for a second and says "Yeah, actually. Because then there's a stronger counterfactual dependency of the final outcome on your own decisions. Every dollar you earned was a real added dollar. The second AI helped you more, but it constrained your destiny less."
"But if the AI had done exactly the same thing, because it wanted me to make exactly $1,999,750—"
The second view nods.
"That sounds a bit scary," the first view says, "for reasons which have nothing to do with the usual furious debates over Newcomb's Problem. You're making your utility function path-dependent on the detailed cognition of the Friendly AI trying to help you! You'd be okay with it if the AI could only give you $250. You'd be okay if the AI had decided to give you $250 through a decision process that had predicted the final outcome in less detail, even though you acknowledge that in principle your decisions may already be highly deterministic. How is a poor Friendly AI supposed to help you, when your utility function is dependent, not just on the outcome, not just on the Friendly AI's actions, but dependent on differences of the exact algorithm the Friendly AI uses to arrive at the same decision? Isn't your whole rationale of one-boxing on Newcomb's Problem that you only care about what works?"
"Well, that's a good point," says the second view. "But sometimes we only care about what works, and yet sometimes we do care about the journey as well as the destination. If I was trying to cure cancer, I wouldn't care how I cured cancer, or whether I or the AI cured cancer, just so long as it ended up cured. This isn't that kind of problem. This is the problem of the eudaimonic journey—it's the reason I care in the first place whether I get a million dollars through my own efforts or by having an outside AI hand it to me on a silver platter. My utility function is not up for grabs. If I desire not to be optimized too hard by an outside agent, the agent needs to respect that preference even if it depends on the details of how the outside agent arrives at its decisions. Though it's also worth noting that decisions are produced by algorithms—if the AI hadn't been using the algorithm of doing just what it took to bring me up to $1,000,000, it probably wouldn't have handed me exactly $250."
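The two accountings in the dialogue above can be put side by side in a toy calculation. The dollar amounts come from the dialogue; the top-up policy function is just an illustration of the AI's described behavior, not a claim about any real system:

```python
TARGET = 1_000_000        # what the AI wants you to end up with
OWN_EARNINGS = 999_750    # what you'd make left to your own devices
AI_GIFT = 250

# First view: credit by dollars actually contributed.
first_view_share = OWN_EARNINGS / (OWN_EARNINGS + AI_GIFT)  # 99.975%

def outcome(effort):
    # The AI's policy: hand over whatever is needed to reach TARGET.
    return effort if effort >= TARGET else TARGET

# Second view: credit by counterfactual dependence. Working hard and
# slacking off entirely produce the same final outcome under this policy.
second_view_dependence = outcome(OWN_EARNINGS) - outcome(0)  # zero
```

Under the first accounting you did nearly all the work; under the second, nothing about the final dollar figure depended on your choices at all.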
The desire not to be optimized too hard by an outside agent is one of the structurally nontrivial aspects of human morality.
But I can think of a solution which, unless it contains some terrible flaw not obvious to me, sets a lower bound on the goodness of a solution: any alternative solution adopted ought to be at least this good or better.
If there is anything in the world that resembles a god, people will try to pray to it. It's human nature to such an extent that people will pray even if there aren't any gods—so you can imagine what would happen if there were! But people don't pray to gravity to ignore their airplanes, because it is understood how gravity works, and it is understood that gravity doesn't adapt itself to the needs of individuals. Instead they understand gravity and try to turn it to their own purposes.
So one possible way of helping—which may or may not be the best way of helping—would be the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together. A nicer place to live, but free of meddling gods beyond that. I have yet to think of a form of help that is less poisonous to human beings—but I am only human.
Added: Note that modern legal systems score a low Fail on this dimension—no single human mind can even know all the regulations any more, let alone optimize for them. Maybe a professional lawyer who did nothing else could memorize all the regulations applicable to them personally, but I doubt it. As Albert Einstein observed, any fool can make things more complicated; what takes intelligence is moving in the opposite direction.
Dunbar's Function
The study of eudaimonic community sizes began with a seemingly silly method of calculation: Robin Dunbar calculated the correlation between the (logs of the) relative volume of the neocortex and observed group size in primates, then extended the graph outward to get the group size for a primate with a human-sized neocortex. You immediately ask, "How much of the variance in primate group size can you explain like that, anyway?" and the answer is 76% of the variance among 36 primate genera, which is respectable. Dunbar came up with a group size of 148. Rounded to 150, and with the confidence interval of 100 to 230 tossed out the window, this became known as "Dunbar's Number".
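Dunbar's procedure is, mechanically, just a log-log regression extrapolated beyond its data. A minimal sketch, with invented data points standing in for the 36 primate genera (the ratios and group sizes below are illustrative assumptions, not Dunbar's dataset):

```python
import math

# Invented (neocortex ratio, observed group size) pairs standing in for
# primate genera -- NOT Dunbar's actual data, just a log-log trend.
primates = [(1.2, 5), (1.7, 12), (2.1, 20), (2.6, 35), (3.0, 55)]

xs = [math.log(ratio) for ratio, _ in primates]
ys = [math.log(size) for _, size in primates]

# Ordinary least squares on the logs.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Extrapolate to a roughly human-sized neocortex ratio (about 4.1).
human_ratio = 4.1
predicted_group = math.exp(intercept + slope * math.log(human_ratio))
```

The point of the sketch is the shape of the method: fit on primates, then read off a prediction at a human-sized neocortex—well outside the fitted range, which is why the confidence interval around the famous 150 is so wide.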
It's probably fair to say that a literal interpretation of this number is more or less bogus.
There was a bit more to it than that, of course. Dunbar went looking for corroborative evidence from studies of corporations, hunter-gatherer tribes, and utopian communities. Hutterite farming communities, for example, had a rule that they must split at 150—with the rationale explicitly given that it was impossible to control behavior through peer pressure beyond that point.
But 30-50 would be a typical size for a cohesive hunter-gatherer band; 150 is more the size of a cultural lineage of related bands. Life With Alacrity has an excellent series on Dunbar's Number which exhibits e.g. a histogram of Ultima Online guild sizes—with the peak at 60, not 150. LWA also cites further research by PARC's Yee and Ducheneaut showing that maximum internal cohesiveness, measured in the interconnectedness of group members, occurs at a World of Warcraft guild size of 50. (Stop laughing; you can get much more detailed data on organizational dynamics if it all happens inside a computer server.)
Amputation of Destiny
Followup to: Nonsentient Optimizers, Can't Unbirth a Child
From Consider Phlebas by Iain M. Banks:
In practice as well as theory the Culture was beyond considerations of wealth or empire. The very concept of money—regarded by the Culture as a crude, over-complicated and inefficient form of rationing—was irrelevant within the society itself, where the capacity of its means of production ubiquitously and comprehensively exceeded every reasonable (and in some cases, perhaps, unreasonable) demand its not unimaginative citizens could make. These demands were satisfied, with one exception, from within the Culture itself. Living space was provided in abundance, chiefly on matter-cheap Orbitals; raw material existed in virtually inexhaustible quantities both between the stars and within stellar systems; and energy was, if anything, even more generally available, through fusion, annihilation, the Grid itself, or from stars (taken either indirectly, as radiation absorbed in space, or directly, tapped at the stellar core). Thus the Culture had no need to colonise, exploit, or enslave.
The only desire the Culture could not satisfy from within itself was one common to both the descendants of its original human stock and the machines they had (at however great a remove) brought into being: the urge not to feel useless. The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works; the secular evangelism of the Contact Section, not simply finding, cataloguing, investigating and analysing other, less advanced civilizations but—where the circumstances appeared to Contact to justify so doing—actually interfering (overtly or covertly) in the historical processes of those other cultures.
Raise the subject of science-fictional utopias in front of any halfway sophisticated audience, and someone will mention the Culture. Which is to say: Iain Banks is the one to beat.
Devil's Offers
Previously in series: Harmful Options
An iota of fictional evidence from The Golden Age by John C. Wright:
Helion had leaned and said, "Son, once you go in there, the full powers and total command structures of the Rhadamanth Sophotech will be at your command. You will be invested with godlike powers; but you will still have the passions and distempers of a merely human spirit. There are two temptations which will threaten you. First, you will be tempted to remove your human weaknesses by abrupt mental surgery. The Invariants do this, and to a lesser degree, so do the White Manorials, abandoning humanity to escape from pain. Second, you will be tempted to indulge your human weakness. The Cacophiles do this, and to a lesser degree, so do the Black Manorials. Our society will gladly feed every sin and vice and impulse you might have; and then stand by helplessly and watch as you destroy yourself; because the first law of the Golden Oecumene is that no peaceful activity is forbidden. Free men may freely harm themselves, provided only that it is only themselves that they harm."
Phaethon knew what his sire was intimating, but he did not let himself feel irritated. Not today. Today was the day of his majority, his emancipation; today, he could forgive even Helion's incessant, nagging fears.
Phaethon also knew that most Rhadamanthines were not permitted to face the Noetic tests until they were octogenarians; most did not pass on their first attempt, or even their second. Many folk were not trusted with the full powers of an adult until they reached their Centennial. Helion, despite criticism from the other Silver-Gray branches, was permitting Phaethon to face the tests five years early...
Harmful Options
Previously in series: Living By Your Own Strength
Barry Schwartz's The Paradox of Choice—which I haven't read, though I've read some of the research behind it—talks about how offering people more choices can make them less happy.
A simple intuition says this shouldn't ought to happen to rational agents: If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn't do worse by having more options. The more available actions you have, the more powerful you become—that's how it should ought to work.
For example, if an ideal rational agent is initially forced to take only box B in Newcomb's Problem, and is then offered the additional choice of taking both boxes A and B, the rational agent shouldn't regret having more options. Such regret indicates that you're "fighting your own ritual of cognition" which helplessly selects the worse choice once it's offered you.
But this intuition only governs extremely idealized rationalists, or rationalists in extremely idealized situations. Bounded rationalists can easily do worse with strictly more options, because they burn computing operations to evaluate them. You could write an invincible chess program in one line of Python if its only legal move were the winning one.
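To see the bounded-rationalist case concretely: suppose each option on the menu costs a fixed amount of compute to evaluate before choosing. (The evaluation cost and menu values below are made-up numbers for illustration.)

```python
EVAL_COST = 0.5  # assumed compute cost per option considered

def net_payoff(menu):
    # The bounded agent evaluates every option, picks the best one,
    # and pays EVAL_COST for each option it had to consider.
    return max(menu) - EVAL_COST * len(menu)

small_menu = [10.0, 8.0]
large_menu = small_menu + [7.0, 6.0, 5.0, 4.0]  # strictly more (and worse) options

# The best option is unchanged, but the bigger menu yields less net value.
worse_off = net_payoff(large_menu) < net_payoff(small_menu)
```

An ideal rationalist with free evaluation would be indifferent to the extra options; the bounded agent pays for each one it considers and ends up strictly worse off.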
Of course Schwartz and co. are not talking about anything so pure and innocent as the computing cost of having more choices.
If you're dealing, not with an ideal rationalist, not with a bounded rationalist, but with a human being—
Say, would you like to finish reading this post, or watch this surprising video instead?
Living By Your Own Strength
Followup to: Truly Part of You
"Myself, and Morisato-san... we want to live together by our own strength."
Jared Diamond once called agriculture "the worst mistake in the history of the human race". Farmers could grow more wheat than hunter-gatherers could collect nuts, but the evidence seems pretty conclusive that agriculture traded quality of life for quantity of life. One study showed that the farmers in an area were six inches shorter and seven years shorter-lived than their hunter-gatherer predecessors—even though the farmers were more numerous.
I don't know if I'd call agriculture a mistake. But one should at least be aware of the downsides. Policy debates should not appear one-sided.
In the same spirit—
Once upon a time, our hunter-gatherer ancestors strung their own bows, wove their own baskets, whittled their own flutes.
And part of our alienation from that environment of evolutionary adaptedness, is the number of tools we use that we don't understand and couldn't make for ourselves.