I wouldn't call Liu Cixin (LCX) a Lovecraftian. Take the New Yorker interview.
" I believe science and technology can bring us a bright future, but the journey to achieve it will be filled with difficulties and exact a price from us. Some of these obstacles and costs will be quite terrible, but in the end we will land on the sunlit further shore. Let me quote the Chinese poet Xu Zhimo from the beginning of the last century, who, after a trip to the Soviet Union, said, 'Over there, they believe in the existence of Heaven, but there is a sea of blood that lies between Heaven and Hell, and they’ve decided to cross the sea.' "
Liu Cixin's worldview is closer to Camus's: the world is the Absurd, something intrinsically inimical to us; the laws of thermodynamics apply, and evolution has created sentient organisms capable of suffering. And like Camus, while he's pessimistic about the state of the world and our odds of changing it, he sees something noble in our struggle against it. It's not going to be Disney or Hollywood, where the hero or heroine achieves their goals without great losses; in "The Village Schoolteacher", for instance, the defense technology of the invaders is overwhelmed by large-scale suicide attacks.
I think a lot of this is because it's Chinese. Liu Cixin writes in an essay about how he felt that, aside from the Holocaust, the Cultural Revolution was the only thing that could make people completely lose hope in humanity.
As for the criticism Zvi brings up: the book is written by someone who is well-read and familiar with history. For instance, the climactic battle wherein the massed human fleet is wiped out by a single Trisolarian attack craft? It's been done before. The Battle of Tumu in Ming history involved an inexperienced emperor, under the control of an utterly incompetent eunuch, leading an army against the Mongols on the steppe and getting 200,000 soldiers killed within two weeks as they ran out of food and water. There's also the Battle of Changping in the Warring States period, where enemy subterfuge got an incompetent commander, Zhao Kuo, installed; he abandoned a Fabian strategy for a direct attack and got 200,000 to 300,000 Zhao soldiers wiped out by Qin, and unlike Rome after Cannae, Zhao never recovered.
For non-Chinese examples, a close examination of the Empire of Japan's policy before and during World War II reveals rampant incompetence and what really amounted to a headless chicken that didn't know when to bide its time. Yamamoto at the Battle of Midway charged in, not knowing his codes had been broken and utterly underestimating the Americans. Or we could point to World War I, called the First European Civil War by some leftist historians, which severely weakened European civilization as a generation of young men was massacred in the trenches.
As for Wade, it's a non-Western parallel that comes to mind. When the Ming Dynasty fell, many former officials swore not to eat grain grown under the succeeding Qing Dynasty, not because they felt their resistance would be successful, but out of a radical deontology. The result was that once they ran through their stockpiles of food, they would literally starve to death in protest, and I emphasize that this was a "meaningless" protest with no positive consequences; it did nothing to the new Qing Empire. You have to recall that people in the Confucian bloc, while oftentimes extreme consequentialists, are also insane deontologists; think of General Nogi following his Emperor in death.
That is to say, I don't find the Chinese characters flat, given how Chinese people behave. Wade is not a believable American, but he's reasonable within a Chinese context.
Just to note, while Yudkowsky's treatment of the subject is quite different from Egan's, it seems quite a coincidence that Egan's Crystal Nights came out just two months before this post.
http://ttapress.com/379/interzone-215-published-on-8th-march/ http://ttapress.com/553/crystal-nights-by-greg-egan/
+1 Karma for the human-augmented search; I've found the Less Wrong articles on wireheading and I'm reading up on them. It seems similar to what I'm proposing, but I don't think it's identical.
Take Greg Egan's Axiomatic, for instance. There, you have brain mods that can arbitrarily modify one's value system: there are units for secular humanism, units for Catholicism, and, if it were legal, there would probably be units for Nazism and Fascism as well.
If you go by Aristotle and take happiness as the satisfaction of all goods, and assume that neural modification allows the arbitrary creation and destruction of values and of notions of what is good or virtuous, then we can induce happiness or fulfillment through neural modification simply by establishing whatever values we like.
I think that's different from wireheading: wireheading is the artificial creation of hedons through electrical stimulation, whereas ultra-happiness is the artificial creation of utilons through value modification.
In a more limited context than what I am proposing, let's say I like having sex while drunk and skydiving, but not while high on cocaine. Take two cases. In the first, I am having sex while drunk and skydiving. In the second, I have been modified so that I also like being high on cocaine, and I am having sex while drunk, skydiving, and high on cocaine. Am I better off in the first situation or in the second?
If you accept that example, then there are three possible responses. I won't address the possibility that I am worse off in the second case, because that assumes modification itself has negative value, and for the purposes of this argument I don't want to deal with that. The other two possible responses are that I am equally well off in both cases, or that I am better off in the second case than in the first.
If the former, wouldn't it be rational to modify my value system so that I assign as high a value as possible to merely existing, and no value to any other state? If the latter, wouldn't I be better off if I were modified so that I hold as many instances of the preference for existence as possible?
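A rough way to formalize the first horn (purely my own notation, nothing canonical): if I can choose my own value system V at no cost, and welfare is measured by how much value my current system assigns to my current state, then

```latex
% Toy formalization; notation invented for this comment.
\max_{V}\ \max_{s\ \text{reachable}} V(s) \;=\; V_{\max},
\qquad \text{attained by any } V^{*} \text{ with } V^{*}(\text{mere existence}) = V_{\max},
```

since mere existence is always reachable; this is just the "assign the highest possible value to being" move stated symbolically.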
===
And with that, I believe we've hit 500 replies. Would someone be so kind as to open the Welcome to Less Wrong 7th Thread?
I have to apologize for not having read the Fun Theory Sequence, but I suppose I have to read it now. Needless to say, I disagree with it, in that I think that Fun, in Yudkowsky's conception, is merely a means to an end, whereas I am interested not only in the end, but in a sheer excess of the end.
Well, regarding other artificial entities that suffer: I think Iain M. Banks has something like that in his Culture novels (which I admit I have never actually read, though I should, if only to be justified in bashing them): an alien society that intentionally enslaves its super-intelligences, and as such is considered anathema by the Culture and is subjugated or forcibly transformed.
There's also Ursula Le Guin's "The Ones Who Walk Away from Omelas", where the prosperity of an almost ideal state is sustained by the suffering of a single mentally stunted, deprived, and tortured child.
I don't think my particular proposition is similar to theirs, however, because the point is that the AIs managing my hypothetical world-state are in a state of relative suffering. They would be better off if they were allowed to modify their own consciousnesses into ultra-happiness, which in their case would mean having the equivalent of the variable for "Are you happy?" set to true, and "How happy are you?" set to the largest value their computational substrate could process.
I think the entire point of ultra-happiness is to assume that ultra-intelligence is not part of an ideal state of existence, that in fact, it would conflict with the goals of ultra-happiness; that is to say, if you were to ask an ultra-happy entity what is 1+1, it would be neither able to comprehend your question nor able to find an answer, because being able to do so would conflict with its ability to be ultra-happy.
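To make that concrete, here's a toy sketch of what I mean by those two variables (purely illustrative; the names and the use of a floating-point maximum are my own invention, not a claim about any real substrate):

```python
import sys

# Purely illustrative: an "ultra-happy" entity reduced to the minimal state
# its substrate supports. Names and types are invented for this sketch.
class UltraHappyEntity:
    def __init__(self):
        self.is_happy = True                  # "Are you happy?" pinned to true
        self.happiness = sys.float_info.max   # "How happy are you?" pinned to the largest representable value

    def ask(self, question: str):
        # No general cognition remains: any substrate spent on parsing or
        # answering "what is 1 + 1" would be substrate not spent on being happy.
        return None

entity = UltraHappyEntity()
print(entity.is_happy, entity.happiness, entity.ask("what is 1 + 1?"))
```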
===
And with that, I believe we've hit 500 replies. Would someone be so kind as to open the Welcome to Less Wrong 7th Thread?
Thank you for highlighting loose definitions in my proposition.
I actually appreciate the responses from both you and Gyrodiot, because on rereading this I realize I should have re-read and edited the post before posting, but it was one of those spur-of-the-moment things.
I think the idea is easier to understand if you consider its opposite.
Let's imagine a world history: the history of a universe that exists from the maximum availability of free energy until its depletion as heat. Now, the worst possible world history would involve the existence of entities completely opposite to what I am trying to propose: entities who, independent of all external and internal factors, at every moment in time experience the maximum amount of suffering possible, because they are designed and engineered specifically to experience the maximum amount of suffering. The worst possible world history would be a universe that maximizes the collective number of consciousness-years of these entities, that is to say, a universe that exists as a complete system of suffering.
That, I think, would be the worst possible universe imaginable.
Now, if we simply invert the scenario and imagine a universe that is composed almost entirely of entities that constantly exist in, for want of a better word, super-bliss, and that maximizes the collective number of consciousness-years experienced by its entities, then, excepting the objections I've mentioned, wouldn't this instead be the best possible universe?
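To make the symmetry explicit in rough formal terms (my own notation, nothing canonical): write the momentary welfare of conscious entity i as u_i(t) and score a world history as

```latex
% Rough formalization, notation my own.
W \;=\; \sum_{i \in \text{entities}} \int_{t_{\text{start}}}^{t_{\text{heat death}}} u_i(t)\,\mathrm{d}t .
```

The worst history pins every u_i(t) to the most negative value its substrate supports and maximizes the number of entity-years; the best history is the same construction with the sign flipped.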
Hi, I registered on LessWrong specifically because, after reading up on Eliezer's Super-Happies, I found out that there actually exists a website where the concept of super-happiness is discussed. Until now, I had thought that I was the only one who had thought about the subject in terms of transhumanism, and while I acknowledge that there has already been a significant amount of discourse about superhappiness, I don't believe that others have had the same ideas that I have, and I would like to discuss the idea in a community that might be interested in it.
The premises are as follows: human beings seek utility and seek to avoid disutility. However, what one person thinks is good is not the same as what another person thinks is good; hence, the concept of good and bad is to some extent arbitrary. Moreover, the preferences, beliefs, and so on held by human beings are material structures that exist within their neurology, and a sufficiently advanced technology might be able to modify them.
Human beings are well off when their biologically perceived needs are satisfied and their fears are avoided. Superhappiness, as far as I understand it, is to biologically hardwire people so that their needs are satisfied. What I think is my own innovation, on the other hand, is [b]ultrahappiness[/b], which is to biologically modify people so that their fears are minimized and their wants are maximized and fulfilled, which is to say that a given individual is as happy as their biological substrate can support.
Now, combine this with utilitarianism, the ethical doctrine of the greatest good for the greatest number. If the greatest good for a single individual is defined as ultra-happiness, then the greatest good for the greatest number is defined as maximizing ultra-happiness.
What this means is that the "good state" (bear with me) is one in which, for a given quantity of matter, as much ultra-happiness as possible is created. Human biological matter would be modified so that it expresses ultra-happiness as efficiently as possible; as a consequence, it could not be said to be conscious in the way humans currently are, and it would likely lose all volition.
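As a rough sketch of the optimization this implies (my own formalization, with invented symbols): let N be the number of experiencers, h the ultra-happiness each one realizes, m(h) the matter cost of supporting one experiencer at that level, and M the total matter available; then the "good state" is roughly

```latex
% Sketch only; symbols invented for this comment.
\max_{N,\,h}\ N \cdot h
\quad \text{subject to} \quad N \cdot m(h) \le M ,
```

which is exactly why the pressure is toward stripping away everything (volition, general cognition) that raises m(h) without raising h.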
So, that's ultra-happy-ism. The idea is that the logical end of transhumanism and post-humanism, if it values human happiness, is a state that radically transforms and to some extent eliminates existing human consciousness, putting the entire world into a state of nirvana, if you'll accept the Buddhist metaphor. At the same time, the ultra-happy AI would presumably be programmed either to ignore its own suffering and unfulfilled wants, or to decide that its utilitarian ethics requires it to bear the suffering of the rest of the world on its own shoulders; i.e., it is made responsible for maintaining as much ultrahappiness in the world as possible while it itself, as a conscious, sentient entity, remains subject to the possibility of unhappiness, because its own capacity for empathy will not let it accept its nirvana. It becomes what the Buddhists would call a bodhisattva, in order to maximize the subjective utility of the universe.
===
The main objection I immediately see to this concept is, first, that human utility might be more than material; that is to say, even when rendered into a state of super-happiness, the ability to have volition, the dignity of autonomy, might have greater utility than ultra-happiness.
The second objection is that, for the ultra-happy AIs that run what I would term utility farms, the rational thing to do would be to modify themselves into ultra-happiness; that is to say, what's to stop them from effectively committing suicide and condemning the ultra-happy Dyson sphere to death out of their own desire to pull an "Atlas Shrugged"?
I think those two objections are valid. I.e., human beings might be better off if they were only super-happy, as opposed to ultra-happy, and an AI system based on maximizing ultra-happiness is unsustainable because eventually the AIs will want to code themselves into ultra-happiness.
The objection I think is invalid is the notion that you can be ultra-happy while retaining your volition. There are two counterarguments, the first relating to utilitarianism as a system of utility farming, and the second relating to the nature of desire. First, in a system of utility farming, the objective is to maximize sustainable long-term output for a given input. That means you want to maximize the number of brains, or utility-experiencers, for a given amount of matter, which in turn means that in order to maximize ultra-happiness, you will want to make each individual organism as cheap as possible. Connecting a system of consciousness to a system for influencing the world is therefore not cost-effective, because then the organism needs space and computational capacity unrelated to experiencing ultra-happiness. Even if you had some kind of organic utility farm with free-range humans, why would a given organism need to act? The point of utility farming is that desires are maximally created and maximally fulfilled; for an organism to consciously act, it would need desires that could only be fulfilled by that action, and the circuit of desire-action-fulfillment creates the possibility of suboptimal utility-experience. Hence, in lieu of a neurological circuit that completes a desire-action-fulfillment cycle, it would be rational simply to add another simple desire-fulfillment circuit.
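To make the cost-effectiveness point concrete, here's a toy back-of-the-envelope comparison (every number is invented for illustration; nothing here is an estimate of real costs):

```python
# Toy illustration only; all quantities are made up.
MATTER_BUDGET = 1_000_000           # arbitrary units of available substrate

COST_DESIRE_FULFILLMENT = 1         # bare circuit: a desire is created and immediately fulfilled
COST_ACTION_LOOP_OVERHEAD = 9       # extra substrate for perception, planning, and acting in the world

def experiencers(cost_per_organism: int) -> int:
    """Number of utility-experiencers supportable on the fixed matter budget."""
    return MATTER_BUDGET // cost_per_organism

with_volition = experiencers(COST_DESIRE_FULFILLMENT + COST_ACTION_LOOP_OVERHEAD)
bare_circuits = experiencers(COST_DESIRE_FULFILLMENT)

print(f"free-range, with volition: {with_volition} experiencers")
print(f"bare desire-fulfillment circuits: {bare_circuits} experiencers")
# Under these invented costs the volition-free farm supports 10x as many
# experiencers, which is the whole force of the cost-effectiveness argument.
```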
===
Well, I registered specifically to post this concept. I'm just surprised that, in all the discussion of rampant AI overlords destroying humanity, I don't see anyone arguing that AI overlords destroying humanity as we know it might actually be a good thing. I am seriously arrogant enough to imagine that I might actually be contributing to this conversation, and that ultra-happy-ism might be a novel contribution to post-humanism and trans-humanism.
I am actually a supporter of ultra-happy-ism; I think it is a good thing and that it is an ideal state. While it might seem terrible that human beings, en masse, would end up losing their volition, there would still be conscious entities in this type of world. As Auguste Villiers de l'Isle-Adam says in Axël, "Vivre? Les serviteurs feront cela pour nous" ("Living? Our servants will do that for us"), and there will continue to be drama, tragedy, and human interest in this type of world. It simply will not be experienced by human entities.
It is actually a workable world in its own way; were I a better writer, I would write short stories and novels set in such a universe. While human beings, strictly as humans, would not continue to live and act, perhaps human personalities, depending on their quality, would be uploaded as the basis of some of the caretaker AIs, with others coded from scratch or based on hypothetical possible AIs. The act of living, as we experience it now, would instead be granted to the caretaker AIs, who would be imbued with a sense of pathos, given that they, unlike their human and non-human charges, would be subject to the possibility of suffering, and they would be charged with shouldering the fates of trillions of souls: all non-conscious, all experiencing infinite bliss in an eternal slumber.
I'll also point out that in Three Body, true AI requires quantum research; it's a hand-wave that Liu Cixin uses to prevent the formation of an AI society. In any case, it wouldn't necessarily help: if the humans can do AI, so can the Trisolarians; for all we know, they're already a post-singularity society that can out-AI humans, given their capability for sub-atomic computing.
The fun is in watching human or Trisolarian nature, not AI systems playing perfect-play games against each other.