I think the best name for the term is "paranoid theory". That is, a theory that only a paranoid person could seriously believe. Perhaps "paranoid conspiracy theory" if the particular theory involves a conspiracy and you want to emphasize that fact.
This naming doesn't give any specific criteria for determining whether a theory is a paranoid theory or not, other than having a trusted non-paranoid person (perhaps yourself) investigate it seriously and decide whether they believe it. I think that's as it should be. There can be lots of heuristics, such as the one outlined in this post, which are often useful and worth knowing, but I doubt there's any heuristic that is all three of: 100% accurate, usually applicable, and cheap to perform.
Incidentally, Penn & Teller have said something related about overkill in magic. Googling:
"Sometimes, magic is just someone spending more time on something than anyone else might reasonably expect." — Teller
"The only secret of magic is that I’m willing to work harder on it than you think it’s worth." — Penn Jillette
A good example:
THE EFFECT: I cut a deck of cards a couple of times, and you glimpse flashes of several different cards. I turn the cards facedown and invite you to choose one, memorize it, and return it. Now I ask you to name your card. You say (for example), “The queen of hearts.” I take the deck in my mouth, bite down, and groan and wiggle to suggest that your card is going down my throat, through my intestines, into my bloodstream, and finally into my right foot. I lift that foot and invite you to pull off my shoe and look inside. You find the queen of hearts. You’re amazed. If you happen to pick up the deck later, you’ll find it’s missing the queen of hearts.
THE SECRET(S): First, the preparation: I slip a queen of hearts into my right shoe, an ace of spades into my left, and a three of clubs into my wallet. Then I manufacture an entire deck out of duplicates of those three cards. That takes 18 decks, which is costly and tedious (No. 2: More trouble than it’s worth).
When I cut the cards, I let you glimpse a few different faces. You conclude the deck contains 52 different cards (No. 1: Pattern recognition). You draw and think you’ve made a choice (No. 7: Choice is not freedom).
Now I wiggle the card to my shoe (No. 3: If you’re laughing …). When I lift whichever foot has your card, or invite you to take my wallet from my back pocket, I turn away (No. 4: Outside the frame) and swap the deck for a normal one from which I’ve removed all three possible selections (No. 5: Combine two tricks). Then I set the deck down to tempt you to examine it later and notice your card is missing (No. 6: The lie you tell yourself).
It's possible that "teaching to the test" tends to refer to something a bit more specific. Here is John Holt in "How Children Fail", which some upstanding citizen has put onto the internet in easily googleable form:
This past year I had some terrible students. I failed more kids, mostly in French and Algebra, than did all the rest of the teachers in the school together. I did my best to get them through, goodness knows. Before every test we had a big cram session of practice work, politely known as "review." When they failed the exam, we had post mortems, then more review, then a makeup test (always easier than the first), which they almost always failed again.
Much later:
We teachers, from primary school through graduate school, all seem to be hard at work at the business of making it look as if our students know more than they really do. Our standing among other teachers, or of our school among other schools, depends on how much our students seem to know; not on how much they really know, or how effectively they can use what they know, or even whether they can use it at all. The more material we can appear to "cover" in our course, or syllabus, or curriculum, the better we look; and the more easily we can show that when they left our class our students knew what they were "supposed" to know, the more easily can we escape blame if and when it later appears (and it usually does) that much of that material they do not know at all.
When I was in my last year at school, we seniors stayed around an extra week to cram for college boards. Our ancient-history teacher told us, on the basis of long experience, that we would do well to prepare ourselves to write for twenty minutes on each of a list of fifteen topics that he gave us. We studied his list. We knew the wisdom of taking that kind of advice; if we had not, we would not have been at that school. When the boards came, we found that his list comfortably covered every one of the eight questions we were asked. So we got credit for knowing a great deal about ancient history, which we did not, he got credit for being a good teacher, which he was not, and the school got credit for being, as it was, a good place to go if you wanted to be sure of getting into a prestige college. The fact was that I knew very little about ancient history; that much of what I thought I knew was misleading or false; that then, and for many years afterwards, I disliked history and thought it pointless and a waste of time; and that two months later I could not have come close to passing the history college boards, or even a much easier test, but who cared?
I have played the game myself. When I began teaching I thought, naively, that the purpose of a test was to test, to find out what the students knew about the course. It didn't take me long to find out that if I gave my students surprise tests, covering the whole material of the course to date, almost everyone flunked. This made me look bad, and posed problems for the school. I learned that the only way to get a respectable percentage of decent or even passing grades was to announce tests well in advance, tell in some detail what material they would cover, and hold plenty of advance practice in the kind of questions that would be asked, which is called review. I later learned that teachers do this everywhere. We know that what we are doing is not really honest, but we dare not be the first to stop, and we try to justify or excuse ourselves by saying that, after all, it does no particular harm. But we are wrong; it does great harm.
It does harm, first of all, because it is dishonest and the students know it. My friends and I, breezing through the ancient-history boards, knew very well that a trick was being played on someone, we were not quite sure on whom. Our success on the boards was due, not to our knowledge of ancient history, which was scanty, but to our teacher's skill as a predictor, which was great. Even children much younger than we were learn that what most teachers want and reward are not knowledge and understanding but the appearance of them. The smart and able ones, at least, come to look on school as something of a racket, which it is their job to learn how to beat. And learn they do; they become experts at smelling out the unspoken and often unconscious preferences and prejudices of their teachers, and at taking full advantage of them. My first English teacher at prep school gave us Macaulay's essay on Lord Clive to read, and from his pleasure in reading it aloud I saw that he was a sucker for the periodic sentence, a long complex sentence with the main verb at the end. Thereafter I took care to construct at least one such sentence in every paper I wrote for him, and thus assured myself a good mark in the course.
Not only does the examination racket do harm by making students feel that a search for honest understanding is beside the point; it does further harm by discouraging those few students who go on making that search in spite of everything. The student who will not be satisfied merely to know "right answers" or recipes for getting them will not have an easy time in school, particularly since facts and recipes may be all that his teachers know. They tend to be impatient or even angry with the student who wants to know, not just what happened, but why it happened as it did and not some other way. They rarely have the knowledge to answer such questions, and even more rarely have the time; there is all that material to cover.
In short, our "Tell-'em-and-test-'em" way of teaching leaves most students increasingly confused, aware that their academic success rests on shaky foundations, and convinced that school is mainly a place where you follow meaningless procedures to get meaningless answers to meaningless questions.
And also:
It begins to look as if the test-examination-marks business is a gigantic racket, the purpose of which is to enable students, teachers, and schools to take part in a joint pretense that the students know everything they are supposed to know, when in fact they know only a small part of it--if any at all. Why do we always announce exams in advance, if not to give students a chance to cram for them? Why do teachers, even in graduate schools, always say quite specifically what the exam will be about, even telling the type of questions that will be given? Because otherwise too many students would flunk. What would happen at Harvard or Yale if a prof gave a surprise test in March on work covered in October? Everyone knows what would happen; that's why they don't do it.
Regarding egalitarian-like arguments, I suspect many express opposition to embryo selection not because it’s a consequence of a positive philosophy that they state and believe and defend, but because they have a negative philosophy that tells them what positions are to be attacked.
I suspect that if you put together the whole list of what they attack, there would be no coherent philosophy that justifies it (or perhaps there would be one, but they would not endorse it).
There is more than zero logic to what is to be attacked and what isn’t, but it has more to do with “Can you successfully smear your opponent as an oppressor, or as one who supports doctrines that enable oppression; and therefore evil or, at best, ignorant if they immediately admit fault and repent; in other words, can you win this rhetorical fight?” than with “Does this argument, or its opposite, follow from common moral premises, data, and logical steps?”.
In this case, it’s like, if you state that humans with blindness or whatever have less moral worth than fully healthy humans, then you are to be attacked; and at least in the minds of these people, selecting embryos of the one kind over the other is close enough that you are also to be attacked.
(Confidence: 75%)
On the Ubuntu-based machines I use at work, SIZE defaults to the ridiculously small 1MB. It's somewhat hidden in the manpage: the documentation for the -S option doesn't mention it, but a later section says (bold added):
SIZE may be followed by the following multiplicative suffixes: % 1% of memory, b 1, K 1024 (default), and so on for M, G, T, P, E, Z, Y.
You can verify this by running "seq 1 1000000000000 | sort" and, while that's happening, "ls -lh /tmp/sort*" to watch the temporary files it spills to disk.
I actually generally don't want it to write its data to disk, because I'm usually using machines where the data fits very comfortably in RAM (data size is maybe up to 1GB) and writing to disk (even SSD) just adds slowness. Though splitting it unnecessarily also adds slowness. For my use case, specifying a much bigger buffer is appropriate.
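A minimal sketch of what "specifying a much bigger buffer" looks like in practice (the 1G figure and the file names are illustrative assumptions, not recommendations; -S accepts K/M/G suffixes or a percentage of physical memory):

```shell
# Sort numerically with a 1 GiB in-memory buffer. If the input fits
# in the buffer, GNU sort does not need to write temporary chunks
# to $TMPDIR, avoiding the disk-spill slowness described above.
seq 1 10000000 | sort -n -S 1G > /dev/null

# Equivalently, let sort use up to half of physical RAM:
# sort -n -S 50% bigfile.txt
```

A quick sanity check is to rerun the ls -lh /tmp/sort* probe with the larger buffer: no temporary files should appear until the input exceeds the buffer size.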
Relevant meanings of "litany":
1 : a prayer consisting of a series of invocations and supplications by the leader with alternate responses by the congregation
2a: a resonant or repetitive chant
The point of reciting this kind of litany is to calm yourself and remind yourself of the right way to approach things. This particular litany seems like it would be used in situations where you've learned something horrible and are thinking, "Oh god, why did I look there/why did he tell me? I regret learning this." It would be unfortunate if emotional reactions like that led people to punish themselves or other people for passing on true knowledge.
I would say that, in the worst case, if you think for a bit and can't figure out anything useful to do with the new knowledge, then you can just continue acting the same way you were before, and thus be no worse off.[1] If you can gain control of yourself, that is. If you can control yourself, then you won't be worse off, and then the litany is true. In that sense, I think it's meant to be a self-fulfilling prophecy.
Compare:
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past, I will turn the inner eye to see its path.
Where the fear has gone there will be nothing. Only I will remain.
This portion of the "Litany Against Fear" is very obviously meant to be a self-fulfilling prophecy, where the process of reciting it helps make it true.
Possible exception: if you have an honesty obligation to pass the information on to someone else, who you expect to then do something like break up a relationship. There are a few ways to approach this. One is to say, well, if you didn't know, then you'd be living a lie, and is that what you want? You think you wouldn't find out eventually? Another is to say, if the parties involved want to have honesty obligations, then might it also be reasonable to have "Try to reward me and not over-punish me when I tell you a difficult truth" as a principle? (It might not be enough in all cases to successfully incentivize truth-telling, but I suspect a lot of people could do with some steps in that direction.)
Easy: we already do this. Definitionally, 2 percent of people are <70 IQ. I don't think we would commonly identify this as one of the biggest problems with democracy.
But those people are distributed fairly evenly throughout society. Each one is surrounded by lots of people of >100 IQ, and probably knows at least a few of >115 IQ, etc. Whereas if it's an entire indigenous population, and integration is far from complete, then there are likely whole villages that are almost entirely aboriginal. That's an important difference.
One consequence: I expect that, in order to do a good job at various important management roles (managing a power plant, a sewer system, etc.), you basically need a high enough IQ. A hard cutoff is an oversimplification, but, to illustrate, Google results suggest that doctors' average IQ is between 120 and 130, and there might be villages of 1000 people with no one fitting that description. (And even if you think the IQ test results are, say, more reflective of a "Western Quotient"—the ability+willingness to work well with Western ideas and practices—it seems that lots of these jobs require precisely that. Using and maintaining Western machines; negotiating on behalf of the village with mostly-Western cities and higher levels of government; evaluating land development proposals; and so on.)
Then, running with the above scenario, either the village doesn't have modern infrastructure, or it has modern infrastructure managed badly, or it has modern infrastructure managed by Westerners. The first two are bad, and the third might be a constant source of ethnic grievances if anyone is unhappy with the arrangement. (Exercise: ask an AI for historical examples of each of the above, and see if they're genuine.) Thus: a problem with democracy. And voting, in particular, might turn the third case into the second case.
I think this demonstrates a failure mode of the 'is it true?' heuristic as a comprehensive methodology for evaluating statements.
I didn't call it comprehensive. It's a useful tool, and often the first one I reach for, but not always the only tool.
I can string together true premises (and omit others) to support a much broader range of conclusions than are supported by the actual preponderance of the evidence.
Then your opponent can counter-argue that your statements are true but cherry-picked, or that your argument skips logical steps xyz and those steps are in fact incorrect. If your opponent instead chooses to say that for you to make those statements is unacceptable behavior, then it's unfortunate that your opposition is failing to represent its side well. As an observer, depending on my purposes and what I think I already know, I have many options, ranging from "evaluating the arguments presented" to "researching the issue myself".
the suggestion that letting members of a certain racial group vote is a threat to democracy completely dissolves with the introduction of one additional observation
OP didn't use the word "threat". He said he was "very curious about aboriginals" and asked how do you live with them. You can interpret it as a rhetorical question, meaning he's saying it's impossible to live with them, and his "very curious" was disingenuous; or you can interpret it as a genuine question. I think I've countered your argument about "completely dissolves"; for illustration, you can even forget IQ and substitute "familiarity with Western technology", and imagine a village consisting of 10% Westerners and 90% indigenous people who have never owned a car or a computer. Surely that has the potential to cause problems; and it could indeed be interesting to know more specifics about what has gone wrong in practice, how people have addressed it, and how well it's working.
When someone criticizes a statement as offensive, bad, or other negative terms besides "false", I ask myself, "Is the statement true or false?" (I tend to ask that about any statement, really, but I think I make a point of doing so in emotionally-charged circumstances.)
He does make word choices like "dullards" and say some things that one could call unnecessarily insulting. But most of it sounds like factual data that he got from reading scientific literature (clicking through to the comment—yup). Is it true or false that there was a set of IQ tests given to aboriginals and the average score was <70? Is it true or false that the (Australian, I assume) government put out a PSA for the purpose of getting aboriginals to not sleep in the road—caused, presumably, by cases of them doing it? (Make a prediction, then google it.)
And if all the above is true, then that seems like a potentially important problem, at least for anyone who cares about the people involved. Are the low IQ test results caused by difficulties in testing people from a very different culture and language, or do they mostly reflect reality? If the latter, what causes it, and can anything be done about it? (Have the aboriginals grown up in a very nutrient-poor or idea-poor environment? If so, then it should be reasonably straightforward to fix that in future generations. If, on the other hand, it's mostly genetic, then we can add that to the list of reasons it's important to develop genetic technologies like embryo selection.)
If it's both true and important, then, taking "important" as roughly implying "necessary", that means it passes the rule of "At least 2 of 3: necessary, kind, true".
The question "How do you have a peaceable democracy (or society in general) with a population...?"—if you take it as a rhetorical question, then that sounds pretty bad. But if you assume the premise is correct (that there's a subpopulation whose "full-grown adults had the cognitive ability of young children"), then it does seem like a genuine question. Are there basic assumptions about democracy, or in our implementation of it, that break down in the presence of such a population? (If not at that level, then is there some level where it does?) What accommodations can be made?
(Whether the question is rhetorical or not—I wonder if this is a case where, if you have a negative prior about someone, you'll take an ambiguous signal and decide it's bad, and use that to justify further lowering your opinion of them, whereas someone with a positive prior will do the opposite.)
The upthread statement I disagreed with is "his posts did not seem to me to embody the virtues of rationality". Looking at the full comment, he brings in data, mentions caveats, makes some calculations and cross-checks them against other sources.
There's more than zero inflammatory rhetoric. But the ratio of facts to inflammatory rhetoric seems ok to me, and I don't see strong evidence that he's operating in bad faith (although the plagiarism thing seems somewhat bad) or that he's in favor of forcibly sterilizing the aboriginals. I note that the comment was posted on a subreddit for people who enjoy arguing.
I thought you were going to conclude by saying that, since it’s unviable to assume you’ll never get exposed to anything new that’s farther to the right of this spectrum, it’s important to develop skills of bouncing off such things, unaddicting yourself, or otherwise dealing with it.
To that end: I think it helps to perceive the creators of a thing as being malicious manipulators trying to exploit you, and to think of certain things as being Skinner boxes or other known exploits. Why does this game or app do this thing this way? If they wanted me to get maximum value out of it and waste minimal time, they would do it another way. Therefore they’re trying to screw with me. I’m not gonna put up with that.
By the way, I do in fact avoid trying out things like skiing, “just to see what it’s like”, partly because I do not want to discover that I really like it, and then spend all kinds of money and inconvenience and risk on it. (A friend of mine has gotten like three concussions skiing, the cumulative effects of which have serious neurological consequences that are disrupting his daily life, and my impression is that he still wants to ski more. (It’s not his profession—he’s a programmer.)) Likewise I’m not interested in “trying out” foods like ice cream that I’m confident I don’t want to incorporate into my regular diet; if it’s a social event then I’ll relax this attitude, but if such events start happening too frequently in a short period then I resume frowning at foods I think are too, erm, high in the calories:nutrition and especially sugar:nutrition ratios.
For example, in ancient Greece it would have been taboo to say that women should have the same political rights as men.
Would it have been taboo? Or would people have just laughed at you? (Paul Graham said, e.g.: "[O]bviously false statements might be treated as jokes, or at worst as evidence of insanity, but they are not likely to make anyone mad. The statements that make people mad are the ones they worry might be believed." Also relevant: "I suspect the biggest source of moral taboos will turn out to be power struggles in which one side only barely has the upper hand. That's where you'll find a group powerful enough to enforce taboos, but weak enough to need them.")
Investigating taboos is the harder problem, so if you solve that, then that's probably sufficient.
Maybe. I think it's common for someone smarter than you to come up with ideas or solutions that you don't think you could or would have come up with yourself (in any reasonable time, at least), but whose merits, once you've seen the answer, are clear to you. Metaphorically: NP problems, where finding a solution is hard but verifying one is easy.
If the merits are not clear to you, then the first explanation that comes to mind is "That person has knowledge you don't"; chemical recipes are an obvious category where I wouldn't understand why a recipe worked even after being told the recipe.
But I feel like the phrase "wisdom generating process operating on a level above yours" has to refer to a deeper difference than just "they have some knowledge you don't". But I suppose, in this context, with parents vs children, it really just is "they have a lot of knowledge you don't, and much more practice at reasoning, and results therefrom".