
In response to Formative Youth
Comment author: Tyrrell_McAllister2 25 February 2009 04:43:26PM 2 points

michael vassar:

[Nerds] perceive abstractions handed to them explicitly by other people more easily than patterns that show up around them. Oddly, this seems to me to be a respect in which nerds are more feminine rather than being more masculine as they are in many other ways.

Would you elaborate on this? What is the generally-feminine behavior of which the first sentence describes an instance?

My first inclination would be to think that your first sentence describes something stereotypically masculine. It's an example of wanting things to come in pre-structured formats, which is part of wanting to operate in a domain that is governed by explicit pre-established rules. That is often seen as a stereotypically-masculine desire that manifests in such non-nerdy pursuits as professional sports and military hierarchies.

In response to An African Folktale
Comment author: Tyrrell_McAllister2 17 February 2009 07:41:38PM 1 point

George Weinberg:

Does it occur to anyone else that the fable is not a warning against doing favors in general but of siding with "outsiders" against "insiders"?

Wow; now that you mention it, that is a blatant recurring theme in the story. I now can't help but think that that is a major part, if not the whole, of the message. Each victim betrays an in-group to perform a kindness for a stranger. It's pretty easy to see why storytellers would want to remind listeners that their first duty is to the tribe. Whatever pity they might feel for a stranger, they must never let that pity lead them to betray the interests of their tribe.

Can't believe I missed that :).

In response to An African Folktale
Comment author: Tyrrell_McAllister2 16 February 2009 04:51:15PM 2 points

Some here seem to think it significant that the good-doers in the story are not naive fools over whom the audience can feel superior. It is argued that that sense of superiority explains stories like the Frog and the Scorpion in the West. The inference seems to be that since this sense of superiority is lacking in this African tale, the intent could only have been to inform the audience that this is how the world works.

However, I don't think that the "superiority" explanation can be so quickly dismissed. To me, this story works because the audience keeps having their expectations of gratitude violated. Hence, the storyteller gets to feel superior to the audience by proving him or herself to be wiser in the ways of the world. The closing lines ---

That's all. For so it always has been - if you see the dust of a fight rising, you will know that a kindness is being repaid! That's all. The story's finished.

--- read to me like a self-satisfied expression of condescension towards an audience so naive as to expect some justice in this world.

Comment author: Tyrrell_McAllister2 11 February 2009 07:00:48PM 0 points

Paul Crowley:

One trivial example of signalling here is the way everyone still uses the Computer Modern font. This is a terrible font, and it's trivial to improve the readability of your paper by using, say, Times New Roman instead, but Computer Modern says that you're a serious academic in a formal field.

I don't think that these people are signaling. Computer Modern is the default font for LaTeX. Learning how to change a default setting in LaTeX is always non-trivial.

You might argue that people are signaling by using LaTeX instead of Word or whatever, but switching from LaTeX to some other writing system is also not a trivial matter.
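For concreteness, the mechanics of the switch, once you know where to look, come down to a single preamble line; the non-trivial part is learning that such a line exists at all. A sketch (the package choice here is my suggestion, not anything from Paul Crowley's comment):

```latex
% Minimal example of moving a LaTeX document off Computer Modern.
% mathptmx, from the standard PSNFSS bundle, sets both text and math
% in a Times-like font; everything else in the document is unchanged.
\documentclass{article}
\usepackage{mathptmx}  % Times-like text and math instead of Computer Modern
\begin{document}
Body text and math like $e^{i\pi} + 1 = 0$ now render in a Times-like face.
\end{document}
```

Which still leaves the original point standing: you have to know that PSNFSS exists before the change becomes trivial.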

Comment author: Tyrrell_McAllister2 11 February 2009 05:26:30PM 0 points

Eliezer, the link in your reply to nazgulnarsil links to this very post. I'm assuming that you intended to link to that recent post of yours on SJG, but I'll leave it to you to find it :).

Comment author: Tyrrell_McAllister2 09 February 2009 07:12:31PM 3 points

I think that you make good points about how fiction can be part of a valid moral argument, perhaps even an indispensable part for those who haven't had some morally-relevant experience first-hand.

But I'm having a hard time seeing how your last story helped you in this way. Although I enjoyed the story very much, I don't think that your didactic purposes are well-served by it.

My first concern is that your story will actually serve many readers as a counter-argument against rationality. Since I'm one of those who disagreed with the characters' choice to destroy Huygens, I'm predisposed to worry that your methods could be discredited by that conclusion. A reader who has not already been convinced that your methods are valid could take this as a reductio ad absurdum proof that they are invalid. I don't think that your methods inexorably imply your conclusion, but another reader might take your word for it, and one person's modus ponens is another's modus tollens. Of course, all methods of persuasion carry this risk. But it's especially risky when you are actively trying to make the "right answer" as difficult as possible to ascertain for dramatic purposes.

Another danger of fictional evidence is that it can obscure exactly what the structure and conclusion of the argument are. For example, why were we supposed to conclude that evading the Super-Happies was worth killing 15 billion at Huygens but was not worth destroying Earth and fragmenting the colonies? Or were we necessarily supposed to conclude that? Were you trying to persuade the reader that the Super-Happies' modifications fell between those two choices? As far as I could tell, there was no argument in the story to support this. Nor did I see anything in your preceding "rigorous" posts to establish that being modified fell in this range. It appeared to be a moral assertion for which no argument was given. Or perhaps it was just supposed to be a thought-provoking possibility, to which you didn't mean to commit yourself. Your subsequent comments don't lead me to think that, though. This uncertainty about your intended conclusion would be less likely if you were relying on precise arguments.

Comment author: Tyrrell_McAllister2 05 February 2009 07:58:12PM 0 points

Psy-Kosh: Yeah, I meant to have a "as Psy-Kosh has pointed out" line in there somewhere, but it got deleted accidentally while editing.


How many humans are there not on Huygens?

I'm pretty sure that it wouldn't matter to me. I generally find on reflection that, with respect to my values, doing bad act A to two people is less than twice as bad as doing A to one person. Moreover, I suspect that, in many cases, the badness of doing A to n people converges to a finite value as n goes to infinity. Thus, it is possible that doing some other act B is worse than doing A to arbitrarily many people. At this time, I believe that this is the case when A = "allow the Super-Happies to re-shape a human" and B = "kill fifteen billion people".

Comment author: Tyrrell_McAllister2 05 February 2009 06:38:44PM 2 points

If the Super-Happies were going to turn us into orgasmium, I could see blowing up Huygens. Nor would it necessarily take such an extreme case to convince me to take that extreme measure. But this . . . ?

"Our own two species," the Lady 3rd said, "which desire this change of the Babyeaters, will compensate them by adopting Babyeater values, making our own civilization of greater utility in their sight: we will both change to spawn additional infants, and eat most of them at almost the last stage before they become sentient."


"It is nonetheless probable," continued the Lady 3rd, "that the Babyeaters will not accept this change as it stands; it will be necessary to impose these changes by force. As for you, humankind, we hope you will be more reasonable. But both your species, and the Babyeaters, must relinquish bodily pain, embarrassment, and romantic troubles. In exchange, we will change our own values in the direction of yours. We are willing to change to desire pleasure obtained in more complex ways, so long as the total amount of our pleasure does not significantly decrease. We will learn to create art you find pleasing. We will acquire a sense of humor, though we will not lie. From the perspective of humankind and the Babyeaters, our civilization will obtain much utility in your sight, which it did not previously possess. This is the compensation we offer you. We furthermore request that you accept from us the gift of untranslatable 2, which we believe will enhance, on its own terms, the value that you name 'love'. This will also enable our kinds to have sex using mechanical aids, which we greatly desire. At the end of this procedure, all three species will satisfice each other's values and possess great common ground, upon which we may create a civilization together."

Sure, I would turn this down if it were simply offered as a gift. But I really, really, cannot see preferring the death of fifteen billion people over it. Although I value the things that the Super-Happies would take away, and I even value valuing them, I don't value valuing them all that much. Or, if I do, it is very far from intuitively obvious to me. And the more I think about it, the less likely it seems.

I hope that Part 8 somehow makes this ending seem more like the "right" one. Maybe it will be made clear that the Super-Happies couldn't deliver on their offer without imposing significant hidden downsides. It wouldn't stretch plausibility too much if such downsides were hidden even from them. They are portrayed as not really getting how we work. As I said in this comment to Part 3, we might expect that they would screw us up in ways that they don't anticipate.

But unless some argument is made that their offer was much worse than it seemed at first, I can't help but conclude that the crew made a colossal mistake by destroying Huygens, to understate the matter.

In response to Value is Fragile
Comment author: Tyrrell_McAllister2 31 January 2009 02:05:31AM 0 points

Wei Dai: Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

I don't know the proper rational-choice-theory terminology, but wouldn't modeling this program just be a matter of describing the "space" of choices correctly? That is, rather than making the space of choices {A, B, C}, make it the set containing

(1) = taking A when offered A and B, (2) = taking B when offered A and B,

(3) = taking B when offered B and C, (4) = taking C when offered B and C,

(5) = taking C when offered C and A, (6) = taking A when offered C and A.

Then the revealed preferences (if that's the way to put it) from your experiment would be (1) > (2), (3) > (4), and (5) > (6). Viewed this way, there is no violation of transitivity by the relation >, or at least none revealed so far. I would expect that you could always "smooth over" any transitivity-violation by making an appropriate description of the space of options. In fact, I would guess that there's a standard theory about how to do this while still keeping the description-method as useful as possible for purposes such as prediction.
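A toy version of this relabeling (my own illustration; the names and the cycle check are not from Wei Dai's comment) makes the point concrete: read naively over {A, B, C}, the program's choices form a preference cycle, but once each option is indexed by the pair in which it was offered, every revealed preference stays within one context and no cycle can form.

```python
# Wei Dai's program: its choice depends on which pair is offered,
# and the naive pairwise reading is cyclic (A > B, B > C, C > A).
def choose(pair):
    cycle = {frozenset("AB"): "A", frozenset("BC"): "B", frozenset("CA"): "C"}
    return cycle[frozenset(pair)]

def other(pair):
    return (set(pair) - {choose(pair)}).pop()

# Naive option space {A, B, C}: revealed preferences form a cycle.
naive = [(choose(p), other(p)) for p in ("AB", "BC", "CA")]

# Relabeled space: an option is (item, context-of-offer), so "A taken
# from {A, B}" and "A declined from {C, A}" are different options.
relabeled = [((choose(p), p), (other(p), p)) for p in ("AB", "BC", "CA")]

def has_cycle(prefs):
    # Follow "better than" edges; report whether any path revisits a node.
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, []).append(worse)
    def dfs(node, seen):
        return any(nxt in seen or dfs(nxt, seen | {nxt})
                   for nxt in graph.get(node, []))
    return any(dfs(n, {n}) for n in graph)

print(has_cycle(naive))      # True: naive reading violates transitivity
print(has_cycle(relabeled))  # False: relabeled preferences are cycle-free
```

Whether this relabeling trick keeps a preference model useful for prediction is exactly the question raised above; the sketch only shows that the transitivity violation is an artifact of how the option space is described.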

Comment author: Tyrrell_McAllister2 30 January 2009 04:27:51PM 2 points

It's good. Not baby-eatin' good, but good enough ;).
