Cornelius Dybdahl

A messy onset featuring transient beating, caused by a piano key being out of tune with itself, is usually insignificant, but not necessarily so if it occurs during a mellow, legato passage in which that particular note plays an especially central role. It can ruin the phrase completely. Granted, this is still only in the ears of skilled musicians, but if you say it is unimportant because skilled musicians are vastly outnumbered by the general population, then you wind up creating a strong disincentive against advancing in skill beyond a certain point, and you wind up giving the least consideration to the very people who have the most to do with music.

That exquisite piano solo on that close-to-perfectly tuned piano (wow, "god's joke on musicians" must drive folks like that nuts:) is high art, but equally so is the juxtaposition of multiple notes and lyrics to produce an emotional effect.

Not equally so, but more so. Singing, dancing, figure skating, etc. are the highest performance arts because they are the least mediated. They place greater psychological demands on the performers and strain their spirits to the utmost. There is something divine in it, to a degree beyond the divinity in instrumentalism. The emotional depth is greater because the performer must by necessity embody the emotions, and faces the audience without the protection of an instrument in the way. Psychologically, it is a different caliber of performance. Even the greatest concert pianists (Horowitz, for example) can never quite match the olympian quality of the greatest singers.

I personally find the art of rock and roll more impressive

And for that reason, you would be among those harmed if quality distinctions were eroded in rock and roll. Popular audiences who have only a transient interest and might switch to Billie Eilish the next day will not love rock and roll the way you do, and so they will not care if good rock and roll is replaced with total garbage that sounds superficially similar. They will not know the difference. You would, and you would mourn the loss, yet when it comes to classical, you side with the unknowing masses, for all that they could just as well be kept occupied by any other entertainment. Netflix, for example.

along with multiple interacting musical themes.

This is a strange statement. Rock is much more monodic than common practice period music. Even music from the classical period, which basically invented monody, was more polyphonic than most rock.

High art is gravy

High art (theatre in particular) is the centrepiece of just about every great civilisation in known history. The works of Aristotle, as they were preserved and studied by the Catholic Church, were not what sparked the Renaissance. The humanistic works were.

and there are so many ways to make high art that losing one particular type shouldn't concern us much.

The arts are connected, and many things you take for granted (novels and rock music) could not have arisen except out of a canon with high art at its centre. Novels came out of chronicles and epics, and rock music features chords, which are not such an obvious idea as they might seem: chordal music emerged very gradually out of a very long tradition of polyphonic choral music. The rediscovery of the antique classics was what sparked the Renaissance, so it should be obvious at a glance (or at the very least from Chesterton's-fence-style reasoning) that losing connection with that canon would be a very serious loss.

Edited to add:

Incidentally, I think it's only intellectuals who would question the value of exquisite quality and the fine discernment of a skilled craftsman. To regular people, the value of these would be obvious. It is precisely to intellectuals that it is not obvious.

It is part Ayn Rand, part Curtis Yarvin. Ultimately it all comes from Thomas Carlyle anyway.

And there is no need to limit yourself to potential obligations. Unless you have an exceedingly blessed life, there should be no shortage of friends and loved ones in need of help.

That does not even come close to cancelling out the reduced ability to get a detailed view of the impact, let alone the much less honest motivations behind such giving. 

And lives are not of equal value. Even if you think they have equal innate value, surely you can recognise that a comparatively shorter third-world life with worse prospects for intellectual and artistic development and greater likelihood of abject poverty is much less valuable (even if only due to circumstances) than the lives of people you are surrounded with, and surely you will also recognise that it is the latter that form the basis for your intuitions about the value of life.

By giving your "charity" (actually, the word "charity" stems from the Latin caritas, meaning care, as in giving to people you care about, whereas "altruism" is cognate with alter, meaning basically otherism, and in practice meaning giving to people you don't care about) to less worthwhile recipients, you are behaving in an anti-meritocratic way and cheapening your act of giving.

Moreover, people obviously don't have equal innate value, and there is a distinct correlation between earning potential and being a utility monster, which at least partially cancels out the effect of diminishing marginal utility.

And the whole reason people care so much about morality is that the moral virtues and shortcomings of your friends and associates are going to have a huge impact on your life. If you're redirecting the virtue by giving money to random foreigners, you are basically defaulting on the debt to your friends. One of your closest friends could wind up in deep trouble and need as much help as he can possibly get. He will need virtuous friends he can rely on to help him, and any money you have given to some third worlders you will never meet is money you cannot give to a friend in need. Therefore, any giving to Effective Altruism is inherently unjust and disloyal. By all means, be charitable and give what you can. But not to strangers.

Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora.

That's a lot closer to the truth than you might think. There are plenty of lines going from the Fabian Society (and from Trotsky, for that matter) into the rationalist diaspora. On the other hand, there is very little influence from eg. Henry Regnery or Oswald Spengler.

“A real charter city hasn’t been tried!” I reply.

Lee Kuan Yew's Singapore is close enough, surely.

“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia which now has the world’s highest standards of living, even if not entirely socialist it’s gotta count for something!”

This argument sounds a lot more Trotskyist than Fabian to me, but it is worth noting that said ruling elites have both been nominally socialist and been widely supported by socialists throughout the world. The same cannot be said of charter cities and their socialist opposition.

For every logical inference I make, they make the opposite. Every thoughtful prior of mine, they consider to be baseless prejudice. My modus ponens, their modus tollens.

Because your priors are baseless prejudices. The Whig infighting between liberals and socialists is one of many cases where both sides are awful and each side is almost exactly right about the other side. Your example about StarCraft shows that you are prone to using baseless prejudices as your priors, and other parts of your post show that you are indeed doing the very same thing when it comes to politics.

Of all the possible intellectuals I was exposed to, surely it is suspicious that the ones whose conclusions matched my already held beliefs were the ones who stuck.

Your evaluation of both, as well as your selection of opposition (Whig opposition in the form of socialism, rather than Tory opposition in the form of eg. paleoconservatism), shows that your priors on this point are basically theological, or more precisely, eschatological. You implicitly see history as progressing along a course of growing wisdom, increasing emancipation, and widening empathy (Peter Singer's Ever-Expanding Circle). It is simply a residue from your Christian culture. The socialist is also a Christian at heart, but being of a somewhat more dramatic disposition, he doesn't think of history as a steady upwards march to greater insight, but as a series of dramatic conflicts that resolve with the good guys winning.

(unless of course he is a Trotskyist, in which case we are perpetually at a turning point where history could go either way; towards communism or towards fascism)

Yet, the combined efforts of our charity has added up to exactly nothing! I want to yell at the Samaritan whose efforts have invalidated all of mine. Why are they so hellbent on tearing down all the beauty I want to create? Surely we can do better than this.

Sure, I can tell you how to do better: focus your efforts on improving institutions and societies that you are close to and very knowledgeable about. You can do a much better job there, and the resultant proliferation of healthy institutions will, as a pleasant side effect, spread far more prosperity to the third world than effective altruism ever will.

This is the position taken by sensible people (eg. paleocons), and notably not by revolutionaries and utopian technocrats. This is fortunate because it gives the latter a local handicap and enables good, judicious people to achieve at least some success in creating sound institutions and propagating genuine wisdom. This fundamental asymmetry is the reason why there is any functional infrastructure left anywhere, despite the utopian factions far outnumbering the realists.

We both believe in doing the most good, whatever that means, and we both believe in using evidence to inform our decision making.

No, you actually don't. If your intentions really were that good, they would lead you naturally to the right conclusions, but as Robin Hanson has pointed out, even Effective Altruism is still ultimately about virtue signalling, though perhaps directed at yourself. It is sorta like HJPEV's desperate effort to be a good person after the Sorting Hat's warning to him. This is a case of Effective Altruists being mistaken about what their own driving motives actually are.

For us to collaborate we need to agree on some basic principles which, when followed, produces knowledge that can fit into both our existing worldviews.

The correct principle is this: fix things locally (where it is easier and where you can better track the actual results) before you decide to take over the world. There are a lot of local things that need fixing. This way, if your philosophy works, your own community, nation, etc. will flourish, and if it doesn't work, it will fall apart. Interestingly, most EAs are a lot more risk averse when it comes to their own backyard than when it comes to some random country in Africa.

To minimize the chance of statistical noise or incorrect inference polluting our conclusions, we create experiments with randomly chosen intervention and control groups, so we are sure the intervention is causally connected to the outcome.

This precludes a priori any plans that involve looking far ahead, reacting judiciously to circumstances as they arise, or creating institutions that people self-select into. In the latter case, using comparable geographical areas would introduce a whole host of confounders, but having both the intervention and control groups be in an overlapping area would change the nature of the experiment, because the structure of the social networks that result would be quite different. Basically, the statistical method you propose has technocratic policymaking built into its assumptions, and so it is not surprising that it will wind up favouring liberal technocracy. You have simply found another way of using a baseless prejudice as your prior.

But this is the most telling paragraph:

Like my beliefs about Starcraft, it seems so arbitrary. Had my initial instinct been the opposite, maybe I would have breezed past Hanson’s contrarian nonsense to one day discover truth and beauty reading Piketty.

Read both. The marginal clarity you will get from immersing yourself still deeper in your native canon is vastly outweighed by the clarity you can get from familiarising yourself with more canons. Of course, Piketty is really just another branch of the same canon, with Piketty and Hanson being practically cousins, intellectually. Compare Friedrich List to see the point.

My initial instinct was social democracy. Later I became a communist, then, after exposure to LessWrong, I became a libertarian. Now I'm a monarchist, and it occurs to me in hindsight that social democracy, communism, and libertarianism are all profoundly Protestant ideologies, and that what I thought was me being widely read was actually still me being narrow-minded and parochial.

The issue at hand is not whether the "logic" was valid. (Incidentally, you are disputing the logical validity of an informal insinuation whose implication appears to be factually true, even though the hinted connection, namely that Scott's views on HBD were influenced by Murray's works, is merely probable.)

The issues at hand are:

1. whether it is a justified "weapon" to use in a conflict of this sort

2. whether the deed is itself immoral beyond what is implied by "minor sin"

That is an unrealistic and thoroughly unworkable expectation.

World models are pre-conscious. We may be conscious of verbalised predictions that follow from our world models, and of various cognitive processes that involve visualisation (in the form of imagery, inner monologue, etc.), since these give rise to qualia. We do not, however, possess direct awareness of the actual gears-level structures of our world models, but must get at these through (often difficult) inference.

When learning about any sufficiently complex phenomenon, such as pretty much any aspect of psychology or sociology, there are simply too many gears for it to be possible to identify all of them; a lot of them are bound to remain implicit and only be noticed when specifically brought into dispute. This is not to say that there can be no standard by which to expect "theory gurus" to prove themselves not to be frauds. For example, if they have unusual worldviews, they should be able to pinpoint examples (real or invented) that illustrate some causal mechanism that other worldviews give insufficient attention to. They should be able to broadly outline how this mechanism relates to their worldview, and how it cannot be adequately accounted for by competing worldviews. This is already quite sufficient, as it opens up the possibility for interlocutors to propose alternate views of the mechanism being discussed and show how they are, after all, able to be reconciled with other worldviews than the one proposed by the theorist.

Alternatively, they should be able to prove their merit in some other way: showing their insight into political theory by successfully enacting political change, into crowd psychology by being successful propagandists, into psychology and/or anthropology by writing great novels with a wide variety of realistic characters from various walks of life, etc.

But expecting them to be able to explicate to you the gears of their models is somewhat akin to expecting a generative image AI to explain its inner workings to you. It's a fundamentally unreasonable request, all the more so because you have a tendency to dismiss people as bluffing whenever they can't follow you into statistical territory so esoteric that there are probably fewer than a thousand people in the world who could.

Trouble is that even checking the steelman with the other person does not avoid the failure modes I am talking about. In fact, some moments ago, I made slight changes to the post to include a bit where the interlocutor presents a proposed steelman and you reject it. I included this because many redditors objected that this is by definition part of steelmanning (though none of the cited definitions actually included this criterion), and so I wanted to show that it makes no difference at all to my argument whether the interlocutor asks for confirmation of the steelman versus you becoming aware of it by some other mechanism. What's relevant is only that you somehow learn of the steelman attempt, reject it as inadequate, and try to redirect your interlocutor back to the actual argument you made. The precise social forms by which this happens (the ideal being something like "would the following be an acceptable steelman [...]") are only dressing, not substance.

I have in fact had a very long email conversation spanning several months with another LessWronger who kept constructing would-be steelmen of my argument that I kept having to correct.

As it was a private conversation, I cannot give too many details, but I can try to summarize the general gist.

This user and I are part of a shared IRL social network, from which I have been feeling increasingly alienated, but which I cannot simply leave without severe consequences. Trouble is that this social network generally treats me with extreme condescension, disdain, patronisation, etc., and that I am constrained in my ability to fight back in my usual manner. I am not so concerned about the underlying contempt, except for its part in creating the objectionable behaviour. It seems to me that they must subconsciously have extreme contempt for me, but since I do not respect their judgement of me, my self-esteem is not harmed by this knowledge. The real problem is that situations where I am treated with contempt and cannot defend myself from it, but must remain polite and simply take it, provide a kind of evidence to my autonomous, unconscious status-tracking processes (what JBP claims to be the function of the serotonergic system, though I don't know if this is true at all). That evidence is not so easily overridden by my own contempt for their poor judgement as my conscious reasoning about their disdain for me is.

I repeatedly explained to this LessWrong user that the issue is that these situations provide evidence for contempt for me, and that since I am constrained in my ability to talk back, they also provide systematically false evidence about my level of self respect and about how I deserve to be treated. Speaking somewhat metaphorically, you could say that this social network is inadvertently using black magic against me and that I want them to stop. It might seem that this position could be easily explained, and indeed that was how it seemed to me too at the outset of the conversation, but it was complicated by the need to demonstrate that I was in fact being treated contemptuously, and that I was in fact being constrained in my ability to defend myself against it. It was not enough to give specific examples of the treatment, because that led my interlocutor to overly narrow abstractions, so I had to point out that the specific instances of contemptuous treatment demonstrated the existence of underlying contempt, and that this underlying contempt should a priori be expected to generate a large variety of contemptuous behaviour. This in turn led to a very tedious argument over whether that underlying contempt exists at all, where it would've come from, etc.

Anyway, I eventually approached another member of this social network and tried to explain my predicament. It was tricky, because I had to accuse him of an underlying contempt giving rise to a pattern of disrespectful behaviour, but also explain that it was the behaviour itself I was objecting to and not the underlying contempt, all without telling him explicitly that I do not respect his judgement. Astonishingly, I actually made a lot of progress anyway.

Well, that didn't last long, because the LW user in question took it upon himself to fix the schism, and told this man that if I am objecting to a pattern of disrespectful behaviour, then it is unreasonable to assume that I am objecting to the evidence of disrespect rather than the underlying disrespect itself. You will notice that this is exactly the 180-degree opposite of my actual position. It also had the effect of cutting off my chance at making any further progress with the man in question, since it is now, to my eyes, impossible to explain what I actually object to without telling him outright that I have no respect for his judgement.

I am sure he thought he was being reasonable. After all, absent the context, it would seem like a perfectly reasonable observation. But as there were other problems with his behaviour that made it seem smug and self-righteous to me, and as the whole conversation up to that point had already been so maddening and led to so much disaster (it seems in fact to have played a major part in causing extreme mental harm to someone who was quite close to me), I decided to cut my losses and not pursue it any further, except for scolding him for what seemed to me like the breach of an oath he had given earlier.

Anyway, the point is not to generalise too much from this example. What I described in the post was actually inspired by other scenarios. The point of telling you this story is simply that even if you are presented with the interlocutor's proposed steelman and given a chance to reject it, this does not save you, and the conversation can still go on for literally months without getting out of the trap I described. I have had other examples of this trap being highly persistent, even with people who were more consistent in explicitly asking for confirmation of each proposed steelman, but what was special about this case was that it was the only one that lasted for literally months with hundreds of emails, that my interlocutor started out with a stated intent to see the conversation through to the end, and that my interlocutor was a fairly prolific LessWrong commenter and poster, whom I would rate as being at least in the top 5% and probably the top 1% of smartest LessWrongers.

I should mention for transparency that the LessWrong user in question did not state outright that he was steelmanning me, but having been around in this community for a long time, I think I am able to tell which behaviours are borne out of an attempt to steelman, or more broadly, which behaviours spring from the general culture of steelmanning and of being habituated to a steelman-esque mode of discourse. As my post indicated, I think steelmanning is a reasonable way to get to a more expedient resolution between people who broadly speaking "share base realities", but as someone with views that are highly heterodox relative to the dominant worldviews on LessWrong, I can say that my own experience with steelmanning has been that it is one of the nastiest forms of argumentation I know of.

I focused on the practice of steelmanning as emblematic of a whole approach to thinking about good faith that I believe is wrongheaded more generally, not only as it pertains to steelmanning. In hindsight, I should have stated this. I considered doing so, but decided to make it the subject of a subsequent post, and I failed to notice that making a more in-depth post about the abstract pattern would not have precluded a brief mention in this post that steelmanning is only one instance of a more general pattern I am trying to critique.

The pattern is simply that of focusing excessively on behaviours and specific arguments as being in bad faith, while paying insufficient attention to the emotional drivers of bad faith, which also tend to make people go into denial about their bad faith.

Indeed, that was the purpose of steelmanning in its original form, as it was pioneered on Slate Star Codex.

Interestingly, when I posted it on r/slatestarcodex, a lot of people basically started screaming at me that I am strawmanning the concept of steelmanning, because a steelman by definition requires that the person you're steelmanning accepts the proposed steelman as accurate. Hence, your comment provides me with some fresh relief and assures me that there is still a vestige left of the rationalist community I used to know.

I wrote my article mostly concerning how I see the word colloquially used today. I intended it as one of several posts demonstrating a general pattern of bad faith argumentation that disguises itself as exceptional good faith.

But setting all that aside, I think my critique still substantially applies to the concept in its original form. It is still the case, for example, that superficial mistakes will tend to be corrected automatically just from the general circulation of ideas within a community, and that the really persistent errors have to do with deeper distortions in the underlying worldview. 

Worldviews, however, are basically analogous to scientific paradigms as described by Thomas Kuhn. People do not adopt a complicated worldview unless it seems vividly correct from at least some angle, however parochial that angle might be. Hence, the only correct way to resolve a deep conflict between worldviews is through the acquisition of a broader perspective that subsumes both. Of course, either worldview, or both, may be a mixture of real patterns coupled with a bunch of propaganda, but in such a case, the worldview that subsumes both should ideally be able to explain why that propaganda was created and why it seems vividly believable to its adherents.

At first glance, this might not seem to pose much of a problem for the practice of steelmanning in its original form, because in many cases it will seem like you can completely subsume the "grain of truth" from the other perspective into your own without any substantial conflict. But that would basically classify it as a "superficial improvement", the kind that is bound to happen automatically just from the general circulation of ideas, and therefore less important than the less inevitable improvements. And if an improvement of this sort is not inevitable, that indicates your current social network cannot generate the improvement on its own, but can only generate it through confrontations with conflicting worldviews from outside your main social network. That in turn means your existing worldview cannot properly explain the grain of truth in the opposing view, since it could not predict it in advance, and so there is more to learn from this outside perspective than can be learned by straightforwardly integrating its apparent grain of truth.

This is basically the same pattern I am describing in the post, but just removed from the context of conversations between individuals, and instead applied to confrontations between different social networks with low-ish overlap. The argument is substantially the same, only less concrete.

No, the reasoning generalises to those fields too. The problem driving those fields' need for measurements of cognitive ability is excessive bureaucratisation and the lack of a sensible top-down structure with responsibilities and duties in both directions. A wise and mature person can get a solid impression of an interviewee's mental capacities from a short interview, and can even find out a lot of useful details that are not going to be covered by an IQ test: for example, mental health, maturity, and capacity to handle responsibility.

Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test, they only score 130 or so. What am I supposed to do with this information? Granted, such a mismatch is pretty rare; normally the score would simply reflect my estimation of their brilliance, in which case it adds no new information. But if the score does not match the person's actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but experienced human judgement is considerably more accurate still.

Of course, individualised judgements of this sort are vulnerable to various failure modes, which is why large corporations and organisations like the military are interested in giving IQ tests instead. But this is often a result of regulatory barriers or other hindrances to simply requiring your job interviewers to avoid those failure modes and holding them accountable for doing so, with the risk of demotion or termination if their department becomes corrupt and/or grossly incompetent.

This issue is not particular to race politics. It is a much more general matter of fractal monarchy vs procedural bureaucracy.

Edit: or, if you want a more libertarian-friendly version, it is a general matter of subsidiarity vs totalitarianism.

The measuring project is symptomatic of scientism and is part of what needs to be corrected.

That is what I meant when I said that the HBD crowd is reminiscent of utilitarian technocracy and progressive-era eugenics. The correct way of handling race politics is to take an inventory of the current situation by doing case studies and field research, and to develop a no-bullshit, commonsense, executive-minded attitude toward improving the conditions of racial minorities from where they currently are.

Obviously, more policing is needed, so as to finally give black business-owners in black areas a break and let them develop without being pestered by shoplifters, riots, etc. Affirmative action is not working, nor is the whole paradigm of equity politics. Antidiscrimination legislation was what crushed black business districts that had been flourishing prior to the sixties.

Whether the races are theoretically equal in their genetic potential or not is utterly irrelevant. The plain fact is that they are not equal at present, and that is not something you need statistics in order to notice. If you are a utopian, then your project is to make them achieve their full potential as constrained by genetics in some distant future, and if they are genetically equal, then that means you want equal outcomes at some point. But this is a ridiculous way of thinking, because it extrapolates your policy goals unreasonably far into the future, never mind that genetic inequalities do not constrain long-term outcomes in a world that is rapidly advancing in genetic engineering tech.

The scientistic, statistics-driven approach is clearly the wrong tool for the job, as we can see just by looking at the outcomes it has achieved. Instead, it is necessary to have human minds thinking reasonably about the issue, rather than trying to replace human reason with statistics "carried on by steam", as Carlyle put it. These human minds should not be evaluating policies by whether they can theoretically be extrapolated to some utopian outcome in the distant future, but simply by whether they actually improve things for racial minorities or not. This is one case where we could all learn something from Keynes' famous remark that "in the long run, we are all dead".

In short: scientism is the issue, and statistics by steam are part of it. Your insistence on the measurement project over discussing the real issues is why you do not have much success with these people. You are inadvertently perpetuating the very same stigma on informal reasoning about weighty matters that is the cause of the issue.
