An iota of fictional evidence from The Golden Age by John C. Wright:
Helion had leaned and said, "Son, once you go in there, the full powers and total command structures of the Rhadamanth Sophotech will be at your command. You will be invested with godlike powers; but you will still have the passions and distempers of a merely human spirit. There are two temptations which will threaten you. First, you will be tempted to remove your human weaknesses by abrupt mental surgery. The Invariants do this, and to a lesser degree, so do the White Manorials, abandoning humanity to escape from pain. Second, you will be tempted to indulge your human weakness. The Cacophiles do this, and to a lesser degree, so do the Black Manorials. Our society will gladly feed every sin and vice and impulse you might have; and then stand by helplessly and watch as you destroy yourself; because the first law of the Golden Oecumene is that no peaceful activity is forbidden. Free men may freely harm themselves, provided only that it is only themselves that they harm."
Phaethon knew what his sire was intimating, but he did not let himself feel irritated. Not today. Today was the day of his majority, his emancipation; today, he could forgive even Helion's incessant, nagging fears.
Phaethon also knew that most Rhadamanthines were not permitted to face the Noetic tests until they were octogenarians; most did not pass on their first attempt, or even their second. Many folk were not trusted with the full powers of an adult until they reached their Centennial. Helion, despite criticism from the other Silver-Gray branches, was permitting Phaethon to face the tests five years early...
Then Phaethon said, "It's a paradox, Father. I cannot be, at the same time and in the same sense, a child and an adult. And, if I am an adult, I cannot be, at the same time, free to make my own successes, but not free to make my own mistakes."
Helion looked sardonic. "'Mistake' is such a simple word. An adult who suffers a moment of foolishness or anger, one rash moment, has time enough to delete or destroy his own free will, memory, or judgment. No one is allowed to force a cure on him. No one can restore his sanity against his will. And so we all stand quietly by, with folded hands and cold eyes, and meekly watch good men annihilate themselves. It is somewhat... quaint... to call such a horrifying disaster a 'mistake.'"
Is this the best Future we could possibly get to—the Future where you must be absolutely stern and resistant throughout your entire life, because one moment of weakness is enough to betray you to overwhelming temptation?
Such flawless perfection would be easy enough for a superintelligence, perhaps—for a true adult—but for a human, even a hundred-year-old human, it seems like a dangerous and inhospitable place to live. Even if you are strong enough to always choose correctly—maybe you don't want to have to be so strong, always at every moment.
This is the great flaw in Wright's otherwise shining Utopia—that the Sophotechs are helpfully offering up overwhelming temptations to people who would not be at quite so much risk from only themselves. (Though if not for this flaw in Wright's Utopia, he would have had no story...)
If I recall correctly, it was while reading The Golden Age that I generalized the principle "Offering people powers beyond their own is not always helping them."
If you couldn't just ask a Sophotech to edit your neural networks—and you couldn't buy a standard package at the supermarket—but, rather, had to study neuroscience yourself until you could do it with your own hands—then that would act as something of a natural limiter. Sure, there are pleasure centers that would be relatively easy to stimulate; but we don't tell you where they are, so you have to do your own neuroscience. Or we don't sell you your own neurosurgery kit, so you have to build it yourself—metaphorically speaking, anyway—
But you see the idea: it is not so terrible a disrespect for free will, to live in a world in which people are free to shoot their feet off through their own strength—in the hope that by the time they're smart enough to do it under their own power, they're smart enough not to.
The more dangerous and destructive the act, the more you require people to do it without external help. If it's really dangerous, you don't just require them to do their own engineering, but to do their own science. A singleton might be justified in prohibiting standardized textbooks in certain fields, so that people have to do their own science—make their own discoveries, learn to rule out their own stupid hypotheses, and fight their own overconfidence. Besides, everyone should experience the joy of major discovery at least once in their lifetime, and to do this properly, you may have to prevent spoilers from entering the public discourse. So you're getting three social benefits at once, here.
But now I'm trailing off into plots for SF novels, instead of Fun Theory per se. (It can be fun to muse how I would create the world if I had to order it according to my own childish wisdom, but in real life one rather prefers to avoid that scenario.)
As a matter of Fun Theory, though, you can imagine a better world than the Golden Oecumene depicted above—it is not the best world imaginable, fun-theoretically speaking. We would prefer (if attainable) a world in which people own their own mistakes and their own successes, and yet they are not given loaded handguns on a silver platter, nor do they perish through suicide by genie bottle.
Once you imagine a world in which people can shoot off their own feet through their own strength, are you making that world incrementally better by offering incremental help along the way?
It's one matter to prohibit people from using dangerous powers that they have grown enough to acquire naturally—to literally protect them from themselves. One expects that if a mind kept getting smarter, at some eudaimonic rate of intelligence increase, then—if you took the most obvious course—the mind would eventually become able to edit its own source code, and bliss itself out if it chose to do so. Unless the mind's growth were steered onto a non-obvious course, or monitors were mandated to prohibit that event... To protect people from their own powers might take some twisting.
To descend from above and offer dangerous powers as an untimely gift, is another matter entirely. That's why the title of this post is "Devil's Offers", not "Dangerous Choices".
And to allow dangerous powers to be sold in a marketplace—or alternatively to prohibit them from being transferred from one mind to another—that is somewhere in between.
John C. Wright's writing has a particular poignancy for me, for in my foolish youth I thought that something very much like this scenario was a good idea—that a benevolent superintelligence ought to go around offering people lots of options, and doing as it was asked.
In retrospect, this was a case of a pernicious distortion where you end up believing things that are easy to market to other people.
I know someone who drives across the country on long trips, rather than flying. Air travel scares him. Statistics, naturally, show that flying a given distance is much safer than driving it. But some people fear too much the loss of control that comes from not having their own hands on the steering wheel. It's a common complaint.
The future sounds less scary if you imagine yourself having lots of control over it. For every awful thing that you imagine happening to you, you can imagine, "But I won't choose that, so it will be all right."
And if it's not your own hands on the steering wheel, you think of scary things, and imagine, "What if this is chosen for me, and I can't say no?"
But in real life rather than imagination, human choice is a fragile thing. If the whole field of heuristics and biases teaches us anything, it surely teaches us that. Nor has it been the verdict of experiment, that humans correctly estimate the flaws of their own decision mechanisms.
I flinched away from that thought's implications, not so much because I feared superintelligent paternalism myself, but because I feared what other people would say of that position. If I believed it, I would have to defend it, so I managed not to believe it. Instead I told people not to worry, a superintelligence would surely respect their decisions (and even believed it myself). A very pernicious sort of self-deception.
Human governments are made up of humans who are foolish like ourselves, plus they have poor incentives. Less skin in the game, and specific human brainware to be corrupted by wielding power. So we've learned the historical lesson to be wary of ceding control to human bureaucrats and politicians. We may even be emotionally hardwired to resent the loss of anything we perceive as power.
Which is just to say that people are biased, by instinct, by anthropomorphism, and by narrow experience, to underestimate how much they could potentially trust a superintelligence which lacks a human's corruption circuits, doesn't easily make certain kinds of mistakes, and has strong overlap between its motives and your own interests.
Do you trust yourself? Do you trust yourself to know when to trust yourself? If you're dealing with a superintelligence kindly enough to care about you at all, rather than disassembling you for raw materials, are you wise to second-guess its choice of who it thinks should decide? Do you think you have a superior epistemic vantage point here, or what?
Obviously we should not trust all agents who claim to be trustworthy—especially if they are weak enough, relative to us, to need our goodwill. But I am quite ready to accept that a benevolent superintelligence may not offer certain choices.
If you feel safer driving than flying, because that way it's your own hands on the steering wheel, statistics be damned—
—then maybe it isn't helping you, for a superintelligence to offer you the option of driving.
Gravity doesn't ask you if you would like to float up out of the atmosphere into space and die. But you don't go around complaining that gravity is a tyrant, right? You can build a spaceship if you work hard and study hard. It would be a more dangerous world if your six-year-old son could do it in an hour using string and cardboard.
"I flinched away from that thought's implications, not so much because I feared superintelligent paternalism myself, but because I feared what other people would say of that position."
This is basically THE reason I always advocate increased comfort with lying. It seems to me that this fear, the fear of coming to believe things they would not want to say aloud if they believed only the truth, is the single largest seemingly removable barrier to people becoming rationalists at all, or, once past that barrier, to becoming the best rationalists they can be.
Can you expound on this just a bit? The second sentence is slightly difficult to parse, but it sounds like an interesting notion, so I'd like to be sure I understand what you said.