The concept of "deserve" can be harmful. We like to think about whether we "deserve" what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism. I prefer to invert "deserve" into the future: deserve your luck by exploiting it.
Of course, "deserve" can be a useful social mechanism to increase desired actions. But only within that context.
Also "need". There's always another option, and pretending sufficiently bad options don't exist can interfere with expected value estimations.
And "should" in the moralizing sense. Don't let yourself say "I should do X". Either do it or don't. Yeah, you're conflicted. If you don't know how to resolve it on the spot, at least be honest and say "I don't know whether I want X or not X". As applied to others, don't say "he should do X!". Apparently he's not doing X, and if you're specific about why, the situation is less frustrating and effective solutions are more visible. "He does X because it's clearly in his best interests, even despite my shaming. Oh..." - or again, if you can't figure it out, be honest about it: "I have no idea why he does X".
This is a fact about you, not about "should". If "should" is part of the world, you shouldn't remove it from your map just because you find other people frustrating.
It's not about having conveniently blank maps. It's about having more precise maps.
I realize that you won't be able to see this as obviously true, but I want you to at least understand what my claim is: after fleshing out the map with specific details, your emotional approach to the problem changes and you become aware of new possible actions without removing any old actions from your list of options - and without changing your preferences. Additionally, the majority of the time this happens, "shoulding" is no longer the best choice available.
One common, often effective strategy is to tell people they should do the thing.
Sometimes, sure. I still use the word like that sometimes, but I try to stay aware that it's shorthand for "you'd get more of what you want if you do"/"I and others will shame you if you don't". It's just that so often that's not enough.
...The correct response to meeting a child murderer is "No, Stop! You should not do that!", not "
(Thinking about this for a bit, I noticed that it was more fruitful for me to think of "concepts that are often used unskillfully" rather than "bad concepts" as such. Then you don't have to get bogged down thinking about scenarios where the concept actually is pretty useful as a stopgap or whatever.)
That's well known as the mindslaver problem in MTG.
Can you explain more how that problem relates to the mindslaver card in the MTG community? (Or provide a link? The top results on google were interesting but I think not the meme you were referring to.)
I think this is a slightly different issue. In Magic there's a concept of "strictly better" where one card is deemed to be always better than another (eg Lightning Bolt over Shock), as opposed to statistically better (eg Silver Knight is generally considered better than White Knight but the latter is clearly preferable if you're playing against black and not red). However, some people take "strictly better" too, um, strictly, and try to point out weird cases where you would prefer to have the seemingly worse card. Often these scenarios involve Mindslaver (eg if you're on 3 life and your opponent has Mindslaver you'd rather have Shock in hand than Lightning Bolt).
The lesson is to not let rare pathological cases ruin useful generalizations (at least not outside of formal mathematics).
The word "is" in all its forms. It encourages category thinking in lieu of focusing on the actual behavior or properties that make it meaningful to apply. Example: "is a clone really you?" Trying to even say that without using "is" poses a challenge. I believe it should be treated the same as goto: occasionally useful but usually a warning sign.
So some, like Lycophron, were led to omit 'is', others to change the mode of expression and say 'the man has been whitened' instead of 'is white', and 'walks' instead of 'is walking', for fear that if they added the word 'is' they should be making the one to be many. -Aristotle, Physics 1.2
ETA: I don't mean this as either criticism or support, I just thought it might be interesting to point out that the frustration with 'is' has a long history.
Implicitly assuming that you mapped out/classified all possible realities. One of the symptoms is when someone writes "there are only two (or three or four...) possibilities/alternatives..." instead of "The most likely/only options I could think of are..." This does not always work even in math (e.g. the statement "a theorem can be either true or false" used to be thought of as self-evidently true), and it is even less reliable in a less rigorous setting.
In other words, there is always at least one more option than you have listed! (This statement itself is, of course, also subject to the same law of flawed classification.)
There's a Discordian catma to the effect that if you think there are only two possibilities — X, and Y — then there are actually Five possibilities: X, Y, both X and Y, neither X nor Y, and something you haven't thought of.
There is a cultural heuristic (especially in Eastern cultures) that we should respect older people by default. Now, this is not a useless heuristic, as the fact that older people have had more life experiences is definitely worth taking into account. But at least in my case (and I suspect in many other cases), the respect accorded was disproportionate to their actual expertise in many domains.
The heuristic can be very useful when respecting the older person is not really a matter of whether he/she is right or wrong, but more about appeasing power. It can be very useful to distinguish between the two situations.
How old is the "older" person? 30? 60? 90? In the last case, respecting a 90-year-old person is usually not about appeasing power.
It seems more like retirement insurance. A social contract that while you are young, you have to respect old people, so that while you are old, you will get respect from young people. It depends on what specifically "respecting old people" means in a given culture. If you have to obey them in their irrational decisions, that's harmful. But if it just means speaking politely to them and providing them a hundred trivial advantages, I would say it is good in most situations.
Specifically, I am from Eastern Europe, where there is a cultural norm of letting old people sit in mass transit. As in: you see an old person near you, there are no free places to sit, so you automatically stand up and offer your seat to the old person. The same for pregnant women. (There are some seats with a sign that requires you to do this, but the cultural norm is that you do it everywhere.) -- I consider this norm good, because for some people the difference in utility between standing and sitting is greater than for average people. (And of course, if you have a broken leg or something, that's an obvious exception.) So it was rather shocking for me to hear about cultures where this norm does not exist. Unfortunately, even in my country in recent decades this norm (and politeness in general) has been decreasing.
Bad Concept: Obviousness
Consider this - what distinguishes obviousness from a first impression? Like some kind of meta semantic stop sign, "it's obvious!" can be used as an excuse to stop thinking about a question. It can be shouted out as an argument with an implication to the effect of "If you don't agree with me instantly, you're an idiot." which can sometimes convince people that an idea is correct without the person actually supporting their points. I sometimes wonder if obviousness is just an insidious rationalization that we cling to when what we really want is to avoid thinking or gain instant agreement.
I wonder how much damage obviousness has done?
I've found the statement "that does not seem obvious to me" to be quite useful in getting people to explain themselves without making them feel challenged. It's among my list of "magic phrases" which I'm considering compiling and posting at some point.
"Your true self", or "your true motivations". There's a tendency sometimes to call people's subconscious beliefs and goals their "true" beliefs and goals, e.g. "He works every day in order to be rich and famous, but deep down inside, he's actually afraid of success." Sometimes this works the other way and people's conscious beliefs and goals are called their "true" beliefs and goals in contrast to their unconscious ones. I think this is never really a useful idea, and the conscious self should just be called the conscious self, the subconscious self should just be called the subconscious self, and neither one of them needs to be privileged over the other as the "real" self. Both work together to dictate behavior.
"Rights". This is probably obvious to most consequentialists, but framing political discussions in terms of rights, as in "do we have the right to have an ugly house, or do our neighbors not have the right not to look at an ugly house if they don't want to?" is usually pretty useless. Similarly, "freedom" is not really a good terminal value, because pretty much anything can be defined as freedom, e.g. "by making smoking in restaurants illegal, the American people have the freedom not to smell smoke in a restaurant if they don't want to."
Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems.
For example, many Christians hold this belief under the names the Kingdom, the Rapture, and/or the second coming (details depend on sect). It leads to excessive discounting of the future, and consequent poor choices. In Collapse Jared Diamond writes about how apocalyptic Christians who control a mining company cause environmental problems in the United States.
Belief in a magic problem solving genie also causes people to fail to take effective action to improve their lives and help others, because they can just wait for the genie to do it for them.
I am not sure I am comfortable with the idea of an entirely context-less "bad concept". I have the annoying habit of answering questions of the type "Is it good/bad, useful/useless, etc." with a counter-question "For which purpose?"
Yes, I understand that rare pathological cases should not crowd out useful generalizations. However given the very strong implicit context (along with the whole framework of preconceived ideas, biases, values, etc.) that people carry around in their heads, I find it useful and sometimes necessary to help/force people break out of their default worldview and consider other ways of looking at things. In particular, ways where good/bad evaluation changes the sign.
To get back to the original point, a concept is a mental model of reality, a piece of a map. A bad concept would be wrong and misleading in the sense that it would lead you to incorrect conclusions about the territory. So a "bad concept" is just another expression for a "bad map". And, um, there are a LOT of bad maps floating around in the meme aether...
"Harmony" -- specifically the idea of "root progressions" -- in music theory. (EDIT: That's "music theory", not "music". The target of my criticism is a particular tradition of theorizing about music, not any body of actual music.)
This is perhaps the worst theory I know of to be currently accepted by a mainstream academic discipline. (Imagine if biologists were Lamarckians, despite Darwin.)
Could you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims.
You're in good company, because it's never been clear to music theorists either, even after a couple millennia of thinking about the problem.
However, I do have my own view on the matter. I consider the music-theoretical analogue of "matching the territory" to be something like data compression. That is, the goodness of a musical theory is measured by how easily it allows one to store (and thus potentially manipulate) musical data in one's mind.
Ideally, what you want is some set of concepts such that, when you have them in your mind, you can hear a piece of music and, instead of thinking "Wow! I have no idea how to do that -- it must be magic!", you think "Oh, how nice -- a zingoban together with a flurve and two Type-3 splidgets" , and -- most importantly -- are then able to reproduce something comparable yourself.
Entitlement and anti-entitlement, especially in the context of: 1. the whole Nice Guy thing and 2. the discourse on the millennial generation. It becomes a red herring, and in the former case leads to ambiguity between 'a specific person must do something' and 'this should be easier than it is'. Plus it seems to turn semi-utilitarians into deontologists. In the case of millennials, it tends to involve big inferential-distance problems.
This one is well known, but having an identity that is too large can make you more susceptible to being mind-killed.
Within my lifetime, the world will end.
This too is a common belief of fundamentalist Christians (though by no means limited to them), and has many of the same effects as the belief that "Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems." For instance, no one will save for retirement if they think the world will end before they retire. And it's not important to worry about the state of the environment in 50 years, if the world ends in 25.
However this belief has an important distinction from the ...
There's one hallmark of truly bad concepts: they actively work against correct induction.
Sir Karl Popper (among others) made some strong arguments that induction is a bad concept.
"It isn't fair."
Ask someone to what "it" refers, and they'll generally be shocked by the notion that their words should have referents. When the shock wears off, the answer will be that "the situation" is unfair, which is a category error. The state of the universe is unfair? Is gravity unfair too? How about the fact that it rained yesterday?
Fairness is a quality of a moral being or rules enforced by moral beings. But there is rarely any particular unfair being or rule enforced by beings behind "it isn't fair".
"It isn't fair" empirically means "I don't like it and I approve of and support taking something out of someone's hide to quell my discomfort."
The concept that forgiveness is a good thing. This is a bad concept because the word "forgive" suggests holding a grudge and then forgiving someone. It's simpler and better to just never hold grudges in the first place.
Retracted my previous comment, because it was agreeing with your claim that it's better to never hold grudges in the first place, which I quickly realized I also disagreed with.
A grudge is an act of retaliation against someone who has harmed you. They hurt you, so you now retract your cooperation - or even engage in active harm against them - until they have made sufficient amends. If they hurt you by accident or it was something minor, then yes, probably better not to hold a grudge. But if they did something sufficiently bad, then it is better to hold a grudge to show them that you will not accept such behavior, and that you will only engage in further cooperation once they have made some sign of being trustworthy. Otherwise you are encouraging them to do it again, since you've shown that they can do it with impunity - and by this you are also harming others, by not punishing untrustworthy people and making it more profitable to be untrustworthy. You do not forgive DefectBot, nor do you avoid developing a grudge in the first place, you hold a grudge against it and will no longer cooperate.
In this context, "forgiveness is a good thing" can be seen as a heuristic that encourages us to err on the side of punishing leniently, because too eager punishment will end up alienating people who would've otherwise been allies, because we tend to overestimate the chance of somebody having done a bad thing on purpose, because holding grudges is psychologically costly, or for some other reason.
Even worse: "Forgive and forget" as advice. It combines the problem with forgiveness with explicitly advising people not to update on the bad behavior of others.
In the lies-to-children/simplification-for-the-purpose-of-heuristics department, there is a largish reference class of these concepts that are basically built into the mind, whose proper replacements cannot be even remotely explained in words, due to large amounts of math and technicality, and so are unknown to almost everyone, but that can nonetheless be very dangerous to take at face value. Some examples (with an approximate name for the replacement concept in parentheses): "real" (your utility function), "truth" (provability), "free will" (optimizing agent), "is-a" (configuration spaces).
"Come to terms with." Just update already. See also "seeking closure", "working through", "processing", all of which pieces of psychobabble are ways of clinging to not updating already.
I do have one question: is the presence of the borrowing operation the only significant difference between Westergaardian and Schenkerian theory?
The short answer is: definitely not. The long answer (a discussion of the relationship between Schenkerian and Westergaardian theory) is too long for this comment, but is something I plan to write about in the future. For now, be it noted simply that the two theories are quite distinct (for all that Westergaardian theory owes to Schenker as a predecessor) -- and, in particular, a criticism of Schenker can by no means necessarily be taken as a criticism of Westergaard, and vice-versa (see below).
For me the most distinctive advantage of Westergaardian analysis is that it respects the fact that notes do not have to "line up" according to a certain chord structure. Notes that are sounding at the same time may be performing different functions, whereas harmonic theory dictates that notes sounding at the same time are usually "part of a chord" which is performing some harmonic function.
The way I like to put it is that in Westergaardian theory, the function of a note is defined by its relationship to other notes in its line (and to the local tonic, of course), and not by its relationship to the "root" of the "chord" to which it belongs (as in harmonic theory).
A corollary of this seems to be that harmonic analyses work fine when the notes do consistently line up according to their function.
If by "work fine" you mean that it is in fact possible to identify the "appropriate" Roman numerals to assign in such cases, sure, I'll give you that. But what is such an "analysis" telling you? Taken literally, it means that you should understand the notes in the passage in terms of the indicated progression of "roots". Which, in turn, implies that in order to hear the passage in your head, you should first, according to the analyst, imagine the succession of roots (which often, indeed typically, move by skip), and only then imagine the other notes by relating them to the roots -- with the connection of notes in such a way as to form lines being a further, third step. To me, this is self-evidently a preposterously circuitous procedure when compared with the alternative of imagining lines as the fundamental construct, within which notes move by step -- without any notion of "roots" entering at all.
Having said that, my biggest worry with Westergaardian theory is that it is almost too powerful, whereas harmonic theory constrains you to producing notes that do sound in some sense tonal (for a very powerful example of this, see here).
I am as profoundly unimpressed with that "demonstration" as I am with that whole book and its author -- of which, I must say, this example is entirely characteristic, in its exclusive obsession with the most superficial aspects of musical hearing and near-total amputation of the (much deeper) musical phenomena that I care most about and find most interesting. As far as I am concerned, there is no aesthetic difference between any of the passages (a) through (d) for the simple reason that all four of them are too short to possess much of any aesthetic characteristics in the first place: they all consist of three bars of four chords each. They are stylistically distinct, I suppose (though not actually very much, in the scheme of things), but any of them could be continued into something interesting or something less than interesting. One thing, however, is certain: if any of them were to be continued in the way they were generated (i.e. at random), the result would be nothing short of awful -- and equally so in all four cases.
The essence of musical composition -- at least its most fundamental and "elusive" aspect -- has to do with projecting coherent (i.e. recognizably human-designed) gestures over long time spans. (How long "long" is depends on context: even if you're writing a ten-second piece, you will want to carefully design its global structure.) The point being that multileveled thinking -- control of all the various degrees of locality and globality and their interrelationships -- is at the core of this art form. For that, you need a hierarchical or "reductive" theory (the very thing that our author explicitly says he doesn't want, even claiming that to hear this way is beyond human cognitive capacities -- I'm not making this up, see the last part of Chapter 7), which harmonic theory isn't. To be impressed by the difference between (a) and (d) -- as readers are apparently expected to be -- is to miss most of the point of what music is about.
Westergaardian theory seems to allow you to do almost anything whether it sounds musical or not.
Not as Westergaard sees it (see e.g. the last paragraph of p. 294 of ITT). I actually think he's wrong about this, and that the theory should allow any note to happen at any time; the theory after all is supposed to constrain analytical choices, not compositional ones. A composer can write anything, and the question for the theorist or analyst is how a given listener understands what the composer writes.
While it is very easy to come up with a Westergaardian analysis, it is very difficult for me to understand why someone who had a certain framework in mind would have performed the operations that would have led them to the music in its actual form. The main culprits of this seem to be anticipatory notes and borrowing.
It's hard to address this without a specific example to discuss.
One more thing: have you read "Why I am not a Schenkerian" by Lodewijk Muns? Here is the link: http://lmuns.home.xs4all.nl/WhyIamNotaSchenkerian.pdf
That's not an interesting critique of Schenker, let alone Westergaard (who is not mentioned or cited even once). It basically goes like this:
(1) Schenker did not adhere to rigorous philosophical standards in his rhetoric.
(2) I disagree with (or don't understand) some of Schenker's analyses and those of his disciples.
(3) Therefore, harmonic theory is correct.
I'll also note that while some of the criticisms of Schenker are legitimate (if boring), others are completely wrong (e.g. the idea that the highest structural dominant is necessarily the final one).
Here is my attempt at fleshing out a more specific example: http://i.imgur.com/ruEYlhD.png I can't figure out how to generate those using Westergaardian theory.
Use octave transfer (ITT sec. 7.7).
Use octave transfer (ITT sec. 7.7).
Thanks; this operation is notably absent from Schenkerian theory (I think).
The short answer is: definitely not
I suppose I will have to live with that for now.
If by "work fine" you mean that it is in fact possible to identify the "appropriate" Roman numerals to assign in such cases, sure, I'll give you that
By "work fine", I mean that the theory is falsifiable and has predictive power. If you are given half of the bars in a Mozart piece, using harmonic theory can give a reasonable guess as to t...
We recently established a successful Useful Concepts Repository. It got me thinking about all the useless or actively harmful concepts I had carried around for in some cases most of my life before seeing them for what they were. Then it occurred to me that I probably still have some poisonous concepts lurking in my mind, and I thought creating this thread might be one way to discover what they are.
I'll start us off with one simple example: The Bohr model of the atom as it is taught in school is a dangerous thing to keep in your head for too long. I graduated from high school believing that it was basically a correct physical representation of atoms. (And I went to a *good* high school.) Some may say that the Bohr model serves a useful role as a lie-to-children to bridge understanding to the true physics, but if so, why do so many adults still think atoms look like concentric circular orbits of electrons around a nucleus?
There's one hallmark of truly bad concepts: they actively work against correct induction. Thinking in terms of the Bohr model actively prevents you from understanding molecular bonding and, really, everything about how an atom can serve as a functional piece of a real thing like a protein or a diamond.
Bad concepts don't have to be scientific. Religion is held to be a pretty harmful concept around here. There are certain political theories which might qualify, except I expect that one man's harmful political concept is another man's core value system, so as usual we should probably stay away from politics. But I welcome input as fuzzy as common folk advice you receive that turned out to be really costly.