
KaiTeorn comments on Three Worlds Collide (0/8) - Less Wrong

Post author: Eliezer_Yudkowsky 30 January 2009 12:07PM


Comment author: KaiTeorn 27 February 2016 10:06:46PM 0 points

An interesting ethical exercise. It seems to me that it would benefit from some trimming, such as cutting the entire pseudoreligious Confessor's line (I understand he's one of the more vividly drawn protagonists, but hey, it's largely drama out of nothing) and the superfluous "markets" (I understand the author is fond of prediction markets, but here they add nothing to the story's core and only distract). The core, on the other hand - the two alien races and their demands - is drama out of something and would do well with some elaboration.

For one thing, while the Babyeaters are pretty well established (and have a historical analog in the Holocaust, as mentioned in the story itself), the Superhappies look much more muddled to me. Why exactly should I be outraged by them? An overabundance of sex? Sorry, doesn't work. Lord Akon is somehow disgusted to look at their true bodies, all slimy and tangled? Pardon me, have you looked at your own guts or brain? They're pretty slimy too. Lack of humour and inability to lie? Well, that may be something to marvel at, but hardly something to find morally unacceptable. I think the author missed a good opportunity here - he could have called the second aliens Babyfuckers (which they most likely are; it's just not highlighted enough in the story) instead of the bland "Superhappies," so that the humans' moral outrage would look more justified - and the story's premise would become more nicely symmetric.

The only real reason to abhor the Superhappies does not appear until much later in the story, when they reveal their plan to rework the human race. That, at least, is a genuine conundrum. Is pain always bad? If not, when and why can it be good? If it's only good because it helps us understand the suffering of others and therefore be altruistic, will pain become useless in a world where (sentient) others don't suffer anymore? Where's the line between improving someone and killing-and-recreating-from-scratch? Is this line drawn differently for the body and for the brain? There's a lot to ponder.

The author's non-solution of "run, you fools" (which is the same in both endings; only one ending's escape is more successful than the other's) is sad and silly, but at least it's believable. We people are just like that, alas. We so want to improve the imperfect Others, yet we're so horrified at the thought that some still more perfect Others may want to improve us. Today's world is brimming with examples. Apparently centuries of mandated rationalism didn't do much to change that in the crew of the Impossible Possible World.

But the biggest problem I have with this story is not with the specific solution the author offers; rather, it is with the conception of "solution" itself. I am not an ethical realist (or at least not an ethical naturalist), so I don't believe ethical dilemmas work like mathematical puzzles where one answer is correct and all others are wrong. Ethics only exists within and between ethics-capable beings, and it only works via constant deliberation, negotiation, experimentation, and the building up of trust. It's slow, it's painful, it's highly uncertain, but that's how living ethics works in real life. Going to space or conversing with aliens will hardly change that; if history shows us anything, it's that the more advanced a culture becomes, the less likely it is to speak in ultimatums. So I think I reject the very premise of this story; something vaguely like that may happen, but in real life it would be much less drastic and much more boring (in general, real life is more boring than fiction), with lots and lots of openings for compromise that all three parties would try to exploit. That doesn't mean the end result would be rosy and mutually satisfying; it may well happen that some civilizations will gobble up others, or transform them unrecognizably. But that's not going to happen overnight, and it's not something you can ensure or avoid too far in advance. Rationalism helps you think, but it can't make the world completely predictable.