Lumifer comments on Harper's Magazine article on LW/MIRI/CFAR and Ethereum - Less Wrong Discussion
An interesting problem. There are a few things that can be said about this.
1) Neoreaction is not the only tendency that responds to the tension between universalism and elitism -- in fact, it consistently rejects universalism, so it's one way of resolving the tension you're perceiving. Another way is to embrace both: this could be done by belief in a heritable factor of general intelligence (which strikes me as the rationalist thing to do, and which necessarily entails some degree of elitism), but that's merely the most visible option. An alternative is to say that some cultures are superior to others (the North to the South for a common political example, aspiring-rationalist culture to the culture at large for a local one), which also necessarily entails elitism: at the very least, the inferiors must be uplifted.
2) The coexistence of universalism and elitism (and technocratic progressivism) is reminiscent of the later days of the British Empire. They believed that they could figure out a universal morality -- and beyond that, a universally proper culture -- but, of course, only the more developed and rational among even their own people could play a part in that. I suspect that LW draws disproportionately from communities that contain ideological descent from the British Empire, and that its surrounding baggage uncritically reflects that descent -- in fact, this is provably true for one aspect of LW-rationality unless utilitarianism was independently developed somewhere else. (The last line from the last point sounds familiar.)
3) Neoreaction is probably partially an exercise in constructing new narratives and value-systems that are at least as plausible as the ones that are currently dominant. This isn't incompatible with the generation of true insights -- in fact, the point can't be made with false ones. (Obviously false ones, at least; but if the epistemic sanity waterline around here isn't high enough to make "not obviously false" nearly as demanding a constraint as "true", rationalism has probably failed.) There's also some shock-jock countersignaling stuff, especially with Moldbug.
4) The comparative study of civilizations leads (at least when taken in conjunction with the point that technological progress and political progress cannot be assumed to be the same, or even driven by the same factors, except insofar as technology can make possible beneficial things that would not have been possible otherwise -- though it can do the same for harmful things, like clickbait or the nuclear bomb) to two insights: first, that civilizations keep collapsing, and second, that they tend to think they're right. No two fundamentally disagreeing civilizations can be right at the same time -- so either value-systems cannot be compared (which is both easily dismissed and likely to contain a grain of truth, for the simple reason that, if any of our basic moral drives come neither from culture nor from facts about the outside world, what else could they be but innate? Even the higher animals show signs of a sense of morality in lab tests, I've heard.) or one of them is wrong. It's the same argument as the atheist one against religion, just fully generalized. (I don't think the argument works for atheism, since, if you grant that the God or gods of divine-containing religions want humans to follow them, Christianity and the various paganisms can't be seriously compared -- but I digress.) Hence the utility of generating alternative narratives for the cause of seeking truth.
5) People concerned about civilizational risk would do well to take the possibility of collapse seriously, as per the fourth point. People who want to hurry up and solve morality and build it into a friendly AI, even more so. Those who believe that every civilization would come to the same moral solution should want there to be as many people likely to support this goal and do good and useful work toward it as possible, before a government or a business beats them to it. This seems to imply either that they should want there to be as many not-unfriendly and likely-to-be-useful civilizations as possible, or that they should at least want Western civilization (i.e. the USA, Canada, and some amount of Europe depending on who you talk to) not to collapse, since it's generated by far the highest proportion of people who take that task seriously. (IIRC the last part is close to the reasoning Anissimov went through, but I could be misremembering.)
(There's likely to be at least one part of this that's completely wrong, especially since it's two in the morning and I'm rushing through this so I can sleep. A slice of virtual Stollen to anyone who finds one -- it's that time of the year.)
I think it's a bit more complicated. The issue is that while value systems can be compared, there are many different criteria by which they can be measured against each other. In different comparison frameworks the answer as to which is superior is likely to be different, too.
Consider e.g. a tapir and a sloth. Both are animals which live in the same habitat. Can they be compared? They "fundamentally disagree" about whether it's better to live on the ground or up in the trees -- is one of them "right" and the other "wrong"?
This, by the way, probably argues for your point that generating alternative narratives is useful.
Good point -- you have to take into account technological, genetic, geographic, economic, geopolitical, etc. conditions as well.
(Which poses an interesting question: what sort of thing is America or any one of its component parts to be compared to? Or is there a more general rule -- something with a similar structure to "if the vast majority of other civilizations would disagree up to their declining period, you're probably wrong"?)
Steppe hordes, sea empires, and hill tribes may be alike enough that similar preconditions for civilization would be necessary. (cf. hbdchick's inbreeding/outbreeding thing, esp. the part about the Semai: same effect, totally different place)