chaosmosis

I like the vibes.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking. If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end. That is the path of the crank.

I don't like this part. First, thinking that you're closER to the truth is not really a problem; thinking you've arrived at the truth arguably is. Second, I think sometimes human beings can indeed find the truth. Underconfidence is just as much a sin as overconfidence, and invoking hubris the way you did seems likely to encourage false humility. I think you should say something more like "for every hundred ideas professed to be indisputable truths, ninety-nine are false", and maybe add something about how there's almost never good justification for refusing to even listen to other people's points of view.

The path of rationality is a path without destination.

I don't agree with this either, or most of the paragraph before it: there are strong trends.

I think that even very small amounts of x-risk increases are significant. I also think that lone LWers have the most impact when they're dealing with things like community attitudes.

I think neither of those things. This isn't about stupidity or intelligence; it's about how people behave within a conversation. Granting more intelligence to a debater set on winning an argument and securing status does not make them better at accepting and learning from information in context. It makes them better at defending themselves from needing to. It makes them better at creating straw men and clever but irrelevant counterarguments.

I agree that tone can provide useful information. The difference between our positions is perhaps more one of emphasis than anything else, despite the stupid and superficial squabbling above. I'm focused on the dangers of relying on tone, whereas you're focused on the benefits.

I'm focused on the dangers of relying on tone because I think our intuitions about such an inherently slippery concept are untrustworthy, and because it's human nature to perceive neutral differences in things like tone as hostile ones. As previously mentioned, I also think that LessWrongers allow tonal differences to cloud their judgement, and feel justified in doing so because other tones offend them. Tone should be secondary to substance by a very long margin.

I am unsure to what extent you really disagree with any of this. You don't seem to attempt to refute my arguments about how a reliance on tone can be dangerous. Instead, you take pot shots at my credibility, and you say that tone also has legitimate uses. I don't want to deny or preclude legitimate uses of tone, so your position here doesn't clash much with mine.

We also both seem to perceive LessWrong's norms surrounding tone differently. I see a lot of the dangerous type of attitude toward tone on this site; the above exchange, in which someone apparently strawmanned my comment three times, is a good example. Judging from your overall position, you seem to perceive this as less common. I don't know what could be done to resolve this aspect of our disagreement.

I'm not comfortable identifying with any group 'us' unless I know how that group is identified. I'd be surprised if I even willingly put myself in the same group as you (making a quoted-from-you 'us' unlikely). For better or worse I do not believe I relate to words, argument or communication in general the same way that you do. (And yes, I do believe that my 'us' would refer to the 'smart ones'---or at least ones that are laudable in some way that I consider significant.)

I was using that language tongue-in-cheek, to display the sort of perspective that I perceive as dangerous and that I think you might be trying to justify, not as something I actually believe. I also thought it was ironic and amusing to place myself in the same category as you; I did so believing you would reject that association, which was exactly what made it funny to me.

Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.

So, to give a concrete example: you have $10. You can choose between spending half the money today and half tomorrow, gaining 5 utilons each day, or spending all of it today, gaining 10 utilons today and 0 tomorrow. Both outcomes give you the same number of utilons, so they're equal.
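Treating each day's utilons as plain numbers that add up (the figures here are just the ones from the example, not derived from anything), the equivalence can be checked directly:

```python
# Hypothetical utilon payoffs from the example: [today, tomorrow].
split_plan = [5, 5]    # spend half the $10 today, half tomorrow
binge_plan = [10, 0]   # spend all of it today

# If all that matters is the total across your future selves,
# the two plans are interchangeable.
assert sum(split_plan) == sum(binge_plan) == 10
print("both plans yield", sum(split_plan), "utilons")  # both plans yield 10 utilons
```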

Phil says that the moral reason they're both equal is that both have the same average utility distributed across instances of you. He then uses that as an argument that average utilitarianism is correct across different people, since there's nothing special about you.

However, an equally plausible interpretation is that they are morally equal in the first instance because the aggregate utilities are the same. Although average utilitarianism and aggregate utilitarianism overlap when N = 1, in many other cases they disagree: average utilitarianism would rather have one extremely happy person than twenty moderately happy people, for example. This disagreement (along with the fact that the two views rest on different metaethical justifications) means that average and aggregate utilitarianism are not the same, which means he's not justified either in his initial privileging of average utilitarianism or in his extrapolation of it to large groups of people.
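A minimal numerical sketch of the divergence, with made-up utility values: the two rules agree whenever population sizes match (including the N = 1 case above), but give opposite verdicts once they differ.

```python
def average_utility(utils):
    """Average utilitarianism: mean welfare per person."""
    return sum(utils) / len(utils)

def aggregate_utility(utils):
    """Aggregate (total) utilitarianism: summed welfare."""
    return sum(utils)

# When each option contains one person, the two rules always agree,
# since average equals total when N = 1.
assert average_utility([10]) > average_utility([5])
assert aggregate_utility([10]) > aggregate_utility([5])

# With different population sizes they come apart.
one_ecstatic = [100]        # one extremely happy person
twenty_content = [20] * 20  # twenty moderately happy people

assert average_utility(one_ecstatic) > average_utility(twenty_content)      # 100 > 20
assert aggregate_utility(one_ecstatic) < aggregate_utility(twenty_content)  # 100 < 400
```

The utility numbers are arbitrary; the point is only that the two functions rank the same pair of populations in opposite orders.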

I said that length was useful insofar as it added to communication. Was I particularly inefficient? I don't think so. As is, it's somewhat ironic, but I think only superficially so, because there isn't any real clash between what I claim as ideal and what I engage in (again, I think I was efficient). And there's no stupidity there at all, or at least none that I see. You'll need to go into more detail here.

I understand what you're getting at, but what specifically is important about this change? I see the added resource intensity as one thing, but that's all I can think of, whereas I'm reading your comment as hinting at some more fundamental change taking place.

(A few seconds later, my thoughts.)

One change might be that the goals have shifted: it becomes about status rather than about solving problems. Maybe that is what you had in mind? Or something else?
