Apparent poorly grounded belief in SI's superior general rationality
I found this complaint insufficiently detailed and not well worded.
Average people think their rationality is moderately good. Average people are not very rational. SI affiliated people think they are adept or at least adequate at rationality. SI affiliated people are not complete disasters at rationality.
SI affiliated people are vastly superior to others in general rationality. So the original complaint, literally interpreted, is false.
An interesting question might be on the level of: "Do SI affiliates have rationality superior to what the average person falsely believes his or her rationality is?"
Holden's complaints each have their apparent legitimacy change differently under his and my beliefs. Some have to do with overconfidence or incorrect self-assessment, others with other-assessment, others with comparing SI people to others. Some of them:
Insufficient self-skepticism given how strong its claims are
Largely agree, as this relates to overconfidence.
...and how little support its claims have won.
Moderately disagree, as this relies on the rationality of others.
Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.
Largely disagree, as this relies significantly on the competence of others.
Paying insufficient attention to the limitations of the confidence one can have in one's untested theories, in line with my Objection 1.
Largely agree, as this depends more on accurate assessment of one's own rationality.
Rather than endorsing "Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments," SI seems often to endorse something more like "Others have not accepted their arguments because they have inferior general rationality," a stance less likely to lead to improvement on SI's part.
There is instrumental value in falsely believing others to have a good basis for disagreement so one's search for reasons one might be wrong is enhanced. This is aside from the actual reasons of others.
It is easy to imagine an expert in a relevant field objecting to SI based on something SI does or says seeming wrong, only to have the expert couch the objection in literally false terms, perhaps ones that flow from motivated cognition and bear no trace of the real, relevant reason for the objection. This could be followed by SI's evaluation and dismissal of it and failure of a type not actually predicted by the expert...all such nuances are lost in the literally false "Apparent poorly grounded belief in SI's superior general rationality."
Such a failure comes to mind and is easy for me to imagine as I think this is a major reason why "Lack of impressive endorsements" is a problem. The reasons provided by experts for disagreeing with SI on particular issues are often terrible, but such expressions are merely what they believe their objections to be, and their expertise is in math or some such, not in knowing why they think what they think.
However the reaction of some lesswrongers to the title I initially chose for the post was distinctly negative. The title was "Most rational programming language?"
Many people have chosen similar titles for their posts. Many. It is very unusual to respond to criticism by writing a good post like "Avoid Inflationary use of Terms."
How did you do it?
Perhaps you initially had a defensive reaction to criticism just as others have had, and in addition have a way of responding to criticism well. Alternatively, perhaps your only advantage over the others was not having as much of a defensive impulse, and those others aren't necessarily missing any positive feature that turns criticism into useful thought. The phrase "channeling criticism" seems to assume the latter is the case.
Was there a feature of the criticism that made its indirect result your post? Perhaps it was convincing from its unanimity, or non-antagonism, or humor, or seeming objectivity, or other?
Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.
I still believe in Global Warming. Do you?
-Ted Kaczynski, The Unabomber
-Heartland Institute billboard
From the press release:
1. Who appears on the billboards?
The billboard series features Ted Kaczynski, the infamous Unabomber; Charles Manson, a mass murderer; and Fidel Castro, a tyrant. Other global warming alarmists who may appear on future billboards include Osama bin Laden and James J. Lee (who took hostages inside the headquarters of the Discovery Channel in 2010).
These rogues and villains were chosen because they made public statements about how man-made global warming is a crisis and how mankind must take immediate and drastic actions to stop it.
2. Why did Heartland choose to feature these people on its billboards?
Because what these murderers and madmen have said differs very little from what spokespersons for the United Nations, journalists for the “mainstream” media, and liberal politicians say about global warming. They are so similar, in fact, that a Web site has a quiz that asks if you can tell the difference between what Ted Kaczynski, the Unabomber, wrote in his “Manifesto” and what Al Gore wrote in his book, Earth in the Balance.
The point is that believing in global warming is not “mainstream,” smart, or sophisticated. In fact, it is just the opposite of those things. Still believing in man-made global warming – after all the scientific discoveries and revelations that point against this theory – is more than a little nutty. In fact, some really crazy people use it to justify immoral and frightening behavior.
Interestingly, science is the first thing mentioned in the next section:
3. Why shouldn’t I still believe in global warming?
Because the best available science says about two-thirds of the warming in the 1990s was due to natural causes, not human activities; the warming trend of the second half of the twentieth century already has stopped and forecasts of future warming are unreliable; and the benefits of a moderate warming are likely to outweigh the costs. Global warming, in other words, is not a crisis.
Thank you very much. I'm all set for now.
Do you need a particular article/chapter out of this book? I am more easily able to get that than the whole book.
One problem is that I can't find the table of contents, so I am not exactly sure.
Google books has preview available for pages 1-4 and 11-22. I know pages 5-10 would be very helpful for me, probably the rest of chapter one, but maybe not. It is likely everything I need is in pages 5-10.
Thank you for your help.
Please help me find: Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, by Frans H. van Eemeren, Bart Garssen, and Bert Meuffels
The main problem is that a test tests ability to take the test, independently of what its makers intended. The more similar tests are to each other, the more taking the first is training for the second, and the easier it is to teach directly to the test rather than to the skill that inspired the test. The less similar the before and after tests are, the less comparable they are.
Rationality training is particularly tricky because one is to learn formal models of both straight and twisted thinking, recognize when real-life situations resemble those patterns, and then decide how much formal treatment to give the situation, as well as how much weight to give to one's formal model as against one's feelings, reflexive thoughts, and so on.
Traditional classroom tests are set up to best test the first bit, knowledge of the formal models, if one did solve the problems inherent in testing. Even to the extent one can ask people about how one ought to react in the field, e.g. when to use which sort of calculation, that is still a question with a correct answer according to a formal model and one is still not testing the ability to apply it!
These problems resemble those the military has faced in its training and testing. They use indoctrination, simulations, and field tests. Decision making is tested under uncomfortable conditions, ensuring probable good decision making under most circumstances. In general, knowing what they do is likely to be helpful.
The problems with tests are not intractable. One can limit the gain on the second test from having taken the first test by saturating the test taker with knowledge of the test before it is taken the first time, though few test takers would be motivated to do so. One can try to make a test similar to the skill tested, so ability at the test is well correlated with the skill one intends to test. One can try to devise very different sorts of tests that measure the same thing (I doubt that will work here).
One component of a useful classroom test might resemble the classic research on correspondence bias. In it, people judge an individual's support for a position based on an essay he or she supposedly wrote. Some subjects are told that the writer chose the thesis, others that the writer had it assigned. (The theses were either pro- or anti-Castro.) People inferred that the essay's author substantially agreed with the thesis even when they were told it had been assigned. The quality of an essay a person produces is some evidence of what they believe, as is their willingness to write it at all, but in general people infer others' dispositions too strongly from actions taken under social constraint, even when they know of the constraint.
Here is how the framework could translate into a useful rationality test: the test would give people some evidence for something they are biased to overly believe, and the quantity and quality of legitimate evidence in the test would vary widely. One would not be able to pass the test by simply detecting the bias and then declaring oneself unmoved in that wrong direction, as one might for, say, sunk costs. Instead, the valid evidence and invalid inclination would be along the same vector, such that one would have to distinguish the bias from the rest of the evidence in the environment.
This solves the problem of having a classroom test be an easy exercise of spotting the biased thought pattern and quashing it. Videos or essays of various people with known beliefs arguing for or against those beliefs could be used to train and test people in this. It's actually probably a skill one could learn without any idea of how one was doing it.
Expressed abstractly, the idea is to test for ability to quantify wrong thinking by mixing it with legitimate evidence, all of which increases confidence in a particular conclusion. This is hard to game because the hard part isn't recognizing the bias. The material's being media from real life prevents testers from imposing an unrealistic model that ignores actual evidence (e.g., a strongly pro-Castro person really might refuse to write an anti-Castro essay).
I can see why you would consider what you call "mysticism", or metaphysical belief systems, a warning sign. However, the use of mystical text forms, which is what I was referring to in my comment, is quite unrelated to this kind of metaphysical and cosmological rigidity. Compare, say, Christian fundamentalists versus Quakers or Unitarian Universalists, or Islamic Wahhabis and Qutbis versus Sufis: the most doctrinal and memetically dangerous groups make only sparing use of mystical practices, or forbid them outright.
Atheists and agnostics are obviously a more challenging case, but it appears that at least some neopagans comfortably identify as such, using their supposed metaphysical beliefs as functionally useful aliefs, to be invoked through a ritual whenever the psychological effects of such rituals are desired. There is in fact an account of just such a ritual practice on LW itself involving the Winter Solstice, which is often celebrated as a festival by neopagan groups. It's hard to describe that account as anything other than a mystical ritual aiming to influence the participants in very specific ways and induce a desirable state of mind among them. In fact, that particular practice may be regarded as extremely foolish and memetically dangerous (because it involves a fairly blatant kind of happy-death-spiral) in a way that other mystical practices are not. I now see that post as a cautionary tale about the dangers of self-mindhacking, but that does not justify its wholesale rejection, particularly in an instructional context where long-term change is in fact desired.
the most...memetically dangerous groups
What are your criteria for this?
Consider giving an example of the sort of decision making procedure that is taught in camp, with the subject of the example whether one should attend the camp.
E.g.:
1. Write down all the reasons you think you are considering on a sheet of paper, in pro and con columns. Circle those that do not refer to consequences of going or not going to camp.
2. Shut your eyes for two minutes and think of at least five alternatives you are likely to do instead of camp. Make pro and con lists for the three most likely of these, then circle the non-consequences as before.
3. Generate consequences you should be considering but aren't by imagining what is likely to happen if you go to camp. Be sure not to treat compelling stories with many features as most likely, and give greater consideration to self-generated stories with fewer contingent parts. Generate at least four seemingly likely stories of what will happen.
4. Put a star next to each alternative in which the time and/or money is spent acquiring an experience rather than material goods, as the science of happiness consistently shows that such acquisitions are more uplifting...etc.
Alternatively, a sample VOI calculation on how much time people should spend considering it would do.
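A toy version of such a VOI calculation might look like the sketch below. Every number in it is purely illustrative, and it uses the standard simplification that deliberation yields perfect information about whether the camp is worth attending, which makes the result an upper bound on what imperfect deliberation is actually worth:

```python
# Toy value-of-information calculation for "should I spend an
# evening deciding whether to attend the camp?"
# All numbers are invented for illustration.

p_good = 0.6          # prior probability the camp is worth it for me
value_if_good = 500   # net benefit (in $) if it is
value_if_bad = -300   # net cost if it isn't (fees, a wasted week)

# Expected value of just going, without further thought:
ev_go = p_good * value_if_good + (1 - p_good) * value_if_bad   # 180
ev_skip = 0
ev_without_info = max(ev_go, ev_skip)                          # 180: go

# With perfect information I would go only in the good case:
ev_with_info = p_good * value_if_good + (1 - p_good) * 0       # 300

voi = ev_with_info - ev_without_info                           # 120

# Deliberating is worth it only if the time it takes costs less
# than the information it buys:
cost_of_evening = 50
print(voi, voi > cost_of_evening)
```

Under these made-up numbers an evening of deliberation is clearly worthwhile; the interesting cases are the ones where the prior is lopsided enough that further thought cannot change the decision, and the VOI is therefore near zero.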
From the paper: "We find little evidence that our results are driven by employers inferring something other than race, such as social class, from the names." Section 5 deals with this; "Carrie" and "Neil" (low-status white) do just as well as "Emily" and "Geoffrey", while "Kenya" and "Jamal" (high-status black) do just as poorly as "Latonya" and "Leroy".
Absolutely--I should have said "equally competent" or "reasonably competent".
I don't have a particularly strong opinion on your example, though; I've rolled it around in my head a bit and can't quite see how to fit it into the same framework. There are, I believe, organizations and affinity groups advocating for better treatment of fat people, at least. I don't perceive 'ugly' or 'fat' as being the same sort of grouping as race, though, and I'm not sure where the difference comes from, exactly.
This makes me think that you are right.
There was a weakness in the method, though. In appendix table one they not only show how likely it actually is that a baby with a certain name is white/black, they show the results from an independent field survey that asked people to pick names as white or black. In table eight, they only measure the likelihood someone with a certain name is in a certain class (as approximated by mother's education). Unfortunately, they don't show what people in general, or employers in particular, actually think. If they don't know about class differences between "Kenya" and "Latonya," or the lack of one between "Kenya" and "Carrie," they can't make a decision based on class differences as they actually are.