All of Inst's Comments + Replies

I'll also point out that in Three Body, true AI requires quantum research; it's a hand-waving move Liu Cixin makes to prevent the formation of an AI society. In any case, it wouldn't necessarily help: if humans can build AI, so can the Trisolarians; for all we know, they're already a post-singularity society that can out-AI humans, given their capability for sub-atomic computing.

The fun is in watching human or Trisolarian nature, not in AI systems playing perfect-play games against each other.

I wouldn't call Liu Cixin (LCX) a Lovecraftian. Take the New Yorker interview.

" I believe science and technology can bring us a bright future, but the journey to achieve it will be filled with difficulties and exact a price from us. Some of these obstacles and costs will be quite terrible, but in the end we will land on the sunlit further shore. Let me quote the Chinese poet Xu Zhimo from the beginning of the last century, who, after a trip to the Soviet Union, said, 'Over there, they believe in the existence of Heaven, but there is a sea of... (read more)

I think a lot of it is because it's Chinese. Liu Cixin (LCX) writes in an essay about how he felt that, aside from the Holocaust, the Cultural Revolution was the only thing that could make people completely lose hope in humanity.

As for the criticism Zvi brings up, the book is written by someone who is well-read and familiar with history. For instance, the climactic battle in which the massed human fleet is wiped out by a single Trisolarian attack craft? It's been done before; the Battle of Tumu in Ming history involved an inexperienced Emperor under t... (read more)

Just to note, while Yudkowsky's treatment of the subject is quite different from Egan's, it seems quite a coincidence that Egan's Crystal Nights came out just two months before this post.

http://ttapress.com/379/interzone-215-published-on-8th-march/ http://ttapress.com/553/crystal-nights-by-greg-egan/

+1 Karma for the human-augmented search; I've found the Less Wrong articles on wireheading and I'm reading up on it. It seems similar to what I'm proposing, but I don't think it's identical.

Take Greg Egan's Axiomatic, for instance. There, you have brain mods that can arbitrarily modify one's value system; there are units for secular humanism, units for Catholicism, and perhaps, if it were legal, there would be units for Nazism and Fascism as well.

If you go by Aristotle and assume that happiness is the satisfaction of all goods, and assu... (read more)

6Richard_Kennaway
Those are some large assumptions. One might instead assume (what Aristotle argues for — Nicomachean Ethics chs. 8–9) that happiness is to be found in an objectively desirable state of eudaemonia, achieved by using reason to live a virtuous life. (Add utilitarianism to that and you get the EA movement.) One might also assume (what Plato argues for — Republic, book 8) that neural modification cannot result in the arbitrary creation and destruction of values, only the creation and destruction of notions of values, but the values that those notions are about remain unchanged. Those are also large assumptions, of course. How would you decide between them, or between them and other possible assumptions?
-2ChristianKl
That's a mistake. In a discussion about physics, you wouldn't ask to go back to the mistaken notions of Aristotle; there's no reason to do it here. Electrical stimulation changes values.

I have to apologize for not having read the Fun Theory Sequence, but I suppose I'll have to read it now. Needless to say, you can guess that I disagree with it, in that I think Fun, in Yudkowsky's conception, is merely a means to an end, whereas I am interested in not only the end but a sheer excess of the end.

Well, regarding other artificial entities that suffer, for instance: I think Iain M. Banks has that in his Culture novels, though I admit I have never actually read them. I should, if only to be justified in bashing his works, an alie... (read more)

Thank you for highlighting loose definitions in my proposition.

I actually appreciate the response from both you and Gyrodiot, because on rereading this I realize I should have re-read and edited the post before posting, but this was one of those spur-of-the-moment things.

I think the idea is easier to understand if you consider its opposite.

Let's imagine a world history, a history of a universe that runs from the maximum availability of free energy to its depletion as heat. Now, the worst possible world history would involve the existence of entities comple... (read more)

2ChristianKl
That's basically wireheading. Apart from that, your basic frame of mind is that there is a one-dimensional variable running from maximum suffering at one end to maximum bliss at the other. I doubt that's true. You treat fear as synonymous with suffering, and that clouds the issue. People who go parachuting do experience fear; it creates a rush of emotions. It doesn't make them suffer but makes them feel alive. I have multiple times witnessed people in NLP with happiness made strong enough that it was too much for the person. It takes good hypnotic suggestibility to get a person to that point by simply strengthening an emotion, but it does happen from time to time. When wishing in front of an almighty AGI, it's very important to be clear about what one is asking for.

Hi, I registered on LessWrong specifically because, after reading up on Eliezer's Super-happies, I found out that there actually exists a website devoted to the concept of super-happiness. Up to now, I had thought that I was the only one who had thought about the subject in terms of transhumanism, and while I acknowledge that there has already been a significant amount of discourse about superhappiness, I don't believe that others have had the same ideas that I have, and I would like to discuss the idea in a community that might be interested in it.

The premises... (read more)

0ChristianKl
Without good definitions of "fear" and "want", that's not a very useful definition. Both words are quite complex when you get down to actual cognition.
0Gyrodiot
Hi, and welcome to Less Wrong! There are indeed few works about truly superintelligent entities that include happy humans. I don't recall any story where human beings are happy... while there are other artificial entities that suffer. This is definitely a worthy thought experiment, and it raises some morality issues: should we apply human morality to non-human conscious entities? Are you familiar with the Fun Theory Sequence?