Will_Sawin comments on Rationality Quotes: June 2011 - Less Wrong

4 Post author: Oscar_Cunningham 01 June 2011 08:17AM




Comment author: Will_Sawin 14 June 2011 02:34:24PM * 2 points

Nietzsche seems to always see the project of self-improvement in opposition to the project of building a functional society out of multiple people who don't kill each other, and the second one always seemed more important to me.

It's hard for me to understand what he's saying because he doesn't engage (much? at all?) with Actually True Morality, that is, the utilitarian/"group is just a sum of individuals" paradigm. The question of whether it's OK for the strong to bully the weak almost doesn't seem to interest him.

One man is not a whole lot better than one ape, but a group of men is infinitely superior to a group of apes.

ETA: I often like to think of FAI as not the ultimate transhuman, but the ultimate institution/legal system/moral code.

Comment author: orthonormal 14 June 2011 03:14:06PM 6 points

You might say that Nietzsche takes opposition to the Repugnant Conclusion to an extreme: his philosophy values humanity by the $L^\infty$ norm rather than the $L^1$ norm.
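For readers unfamiliar with the notation, a sketch of the standard definitions over a finite population (writing $u_i$ for the value of individual $i$; this notation is an editorial assumption, not from the thread):

```latex
\|u\|_{L^1} = \sum_i |u_i|
\qquad\text{versus}\qquad
\|u\|_{L^\infty} = \max_i |u_i|
```

So valuing humanity by the $L^\infty$ norm means only the single greatest individual matters, while the $L^1$ norm adds everyone up.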

Comment author: Will_Sawin 14 June 2011 04:53:38PM 0 points

(Assuming that individual value is nonnegative.)

Comment author: orthonormal 14 June 2011 06:45:57PM -1 points

That's an emendation, not the original; in most of his mid-to-late works, he really does mean that the absolute magnitude of a character, without reference to its direction, is of value.

Comment author: Will_Sawin 14 June 2011 06:49:34PM 0 points

But certainly the people who believe in the $L^1$ norm don't take the absolute value...

Comment author: [deleted] 14 June 2011 07:33:52PM * 2 points

What? The $L^1$ norm is the integral of the absolute value of the function.

In this thread: people using mathematics where it doesn't belong.

Comment author: Will_Sawin 14 June 2011 08:07:05PM 1 point

I should say:

No one believes in the $L^1$ norm. There is only Nietzsche, who believes in $L^\infty$, and utilitarians, who believe in the integral.
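In the same assumed notation ($u_i$ for the value of individual $i$), the distinction being drawn is that the utilitarian aggregate is a signed sum, not a norm; the two agree only when every $u_i$ is nonnegative:

```latex
U = \sum_i u_i
\qquad\text{whereas}\qquad
\|u\|_{L^1} = \sum_i |u_i|
```

Hence the earlier caveat that the analogy assumes individual value is nonnegative.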

In this thread: people using mathematics where it doesn't belong.

I suppose. It's a more efficient and fun form of communication than writing it out in English, but it loses big on the number of people who can understand it.

Comment author: orthonormal 14 June 2011 11:37:55PM * 1 point

No one believes in the $L^1$ norm. There is only Nietzsche, who believes in $L^\infty$, and utilitarians, who believe in the integral.

Yes, that's what I should have written.

Comment author: orthonormal 14 June 2011 11:39:45PM 0 points

I know how it looked when you jumped in (presumably from the Recent Comments page), but both of us did know the proper math; it's the analogy that we were ironing out.

Comment author: [deleted] 14 June 2011 11:48:23PM 0 points

I read from the start of the $L^p$ talk to now, and I can't think why both of you bothered to speak in that language. The major point of contention occurs in a lacuna in the $L^p$ semantic space, so continuing in that vein is... hmmm.

It's like arguing whether the moon is pale-green or pale-blue, and deciding that since plain English just doesn't cut it, why not discuss the issue in Japanese?

Comment author: Vladimir_Nesov 15 June 2011 02:29:39AM * 0 points

deciding that since plain English just doesn't cut it, why not discuss the issue in Japanese?

Why not, if you know Japanese, and it has more suitable means of expressing the topic? (I see your point, but don't think the analogy stands as stated.)

Comment author: [deleted] 15 June 2011 02:59:25AM * 0 points

If we extend the analogy to the above conversation, it's an argument between non-Japanese otaku.

Comment author: MixedNuts 14 June 2011 03:28:26PM 1 point

No offense to Fred, but he's a bitter loner. Idealistic nerd wants to make the world awesome, runs out and tells everyone, everyone laughs at him, idealistic nerd gives up in disgust and walks away muttering "I'll show them! I'll show them all!".

Also, he thinks this project is really really important, worth declaring war against the rest of the world and killing whoever stands in the way of becoming cooler. (As you say, whether he thinks we can also kill people who don't actively oppose it is unclear.) This is a dangerous idea (see the zillion glorious revolutions that executed critics and plunged happily into dictatorship) - though it is less dangerous when your movement is made of complete individualists. As it happens, becoming superhumans will not require offing any Luddites (though it does require offending them and coercing them by legal means), but I can't confidently say it wouldn't be worth it if it were the only way - even after correcting for historical failures.

By the same token, group rationality is in fact the way to go, but individual rationality does require telling society to take a hike every now and then.

FAI as not the ultimate transhuman, but the ultimate institution/legal system/moral code

It certainly shouldn't be a transhuman. Eliezer's preferred metaphor is more like "the ultimate laws of physics", which says quite a bit about how individualistic you and he are.