How not to move the goalposts
There are a lot of bad arguments out there. Fortunately, there are also plenty of people who stand up against them.
However, there is a pattern I observe quite often in such counter-arguments which, while strictly logically valid, can become problematic later: focusing all of one's counter-arguments on countering one, and only one, of the original arguer's points. I suspect that this tendency can, at best, weaken one's argument and, at worst, lead one to believe things one has no intention of believing.
Let's assume, without much loss of generality, that the Wrong Argument can be expressed in the following form:
A: Some statement.
B: Some other statement.
A & B -> C: A logical inference, which, from the way B is constructed, is a fairly obvious tautology.
C: The conclusion.
Unfortunately, most of the arguments I could choose for this discussion are either highly trivial or highly controversial. I'll choose one that I hope won't cause too much trouble. Bear in mind that this is the Wrong Argument, the thing that the counter-arguer, the person presenting the good, rational refutation, is trying to demonstrate to be false. Let's designate this rational arguer as RA. The person presenting the Wrong Argument will be designated WA (Wrong Arguer).
WA: "Men have better technical abilities than women, so they should get paid more for the same engineering jobs."
WA relates a terrible sentiment, yet a pervasive one. I don't know anyone who actually espouses it in my workplace, but it was certainly commonplace not so long ago (musical evidence). Let's hope that RA has something persuasive to say against it.
Based on what I've seen of gender discussions on other forums, here's the most likely response I'd expect from RA:
RA: Don't be ridiculous! Men and women are just as well suited to technical careers as each other!
... and that's usually as far as it goes. Now, RA is right, as far as anyone knows (IANAPsychologist, though).
However, WA's argument can be broken down into the following steps:
A: Men, on average, have better technical skills than women.
B: If members of one group, on average, are better at a task than members of another group, then members of that first group should be paid more than members of the second group for performing the same work.
C: Men should be paid more than women for the same work in technical fields such as engineering.
Trivially, A & B -> C. Thus RA only needs to disprove A or B in order to break the argument. (Yes, ~A doesn't imply ~C, but WA will have a hard time proving C without A.) Both A and B are unpleasant statements that decent, rational people should probably disagree with, and C is definitely problematic.
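The structure of the argument can be sketched in a few lines of code. This is a toy illustration (not from the original post, and the function name is my own invention): an argument of the form A & B -> C only goes through if both premises hold, so refuting either one blocks the conclusion.

```python
def conclusion_supported(a: bool, b: bool) -> bool:
    """The Wrong Argument's conclusion C is supported only if both
    premises A and B hold (the inference A & B -> C is a tautology)."""
    return a and b

# WA asserts both premises, so the conclusion goes through:
assert conclusion_supported(a=True, b=True) is True

# Refuting A alone blocks the argument...
assert conclusion_supported(a=False, b=True) is False

# ...but so does refuting B alone. RA has two independent lines of
# attack, and attacking only A leaves B standing unchallenged.
assert conclusion_supported(a=True, b=False) is False
```

The point of the post is precisely that while either refutation suffices logically, attacking only A implicitly concedes B in the eyes of onlookers.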
So RA sets about attacking A. He starts by simply stating that men and women have equal potential for technical talent, on average. If WA doesn't believe that, then RA presents anecdotal evidence, then starts digging up psychological studies. Every rational discourse weapon at RA's disposal may be deployed to show that A is false. Maybe WA will be convinced, maybe he won't.
But what about B? RA has ignored B entirely in his attack on A. Now, from a strictly logical point of view, RA doesn't need to do anything with B - if he disproves A, then he disproves A & B. Attacking A doesn't mean that he accepts B as true...
... except that it kind of does.
What if WA manages to win the argument over A, by whatever means? What if WA turns out to be an evolutionary psychology clever arguer, with several papers worth of "evidence" that "proves" that men have better technical skills than women? RA might simply not have the skills or resources to refute WA's points, leading to the following exchange:
WA: Men are better engineers than women, and should be paid more!
RA: That's ridiculous. Men and women have identical potentials for technical skill!
WA: No they don't! Here are ten volumes' worth of papers proving me right!
RA: Well, gee, who am I to argue with psychology journals? I guess you're right.
WA: Glad we agree. I'll go talk to the CTO about Wanda's pay cut, shall I?
RA: Hang on a minute! Even if men are better engineers than women, that's no reason for pay inequity! Equal work for equal pay is the only fair way. If men really are better, they'll get raises and promotions on their own merit, not merely by virtue of being male.
WA: What? I spent hours getting those references together, and now you've moved the goalposts on me! I thought you weren't meant to do that!
RA: But... it's true...
WA: I think you've just taken your conclusion, "Men and women should get equal pay for the same work", and figured out a line of reasoning that gets you there. What are you, some kind of clever arguer for female engineers? Wait, isn't your mother an engineer too?
Nobody wants to be in this situation. RA really has moved the goalposts on WA, which is one of those Dark Arts that we're not supposed to employ, even unintentionally.
The problem goes deeper than simply violating good debating etiquette, though. If this debate is happening in public, then onlookers might get the impression that RA supports B. It will then be more difficult for RA to argue against B in later arguments, especially ones of the form D & B, where D is actually true. (For example, D might be "Old engineers have better technical skills than younger engineers", which is true-ish because of the benefits of long experience in an industry, but it still shouldn't mean that old engineers automatically deserve higher pay for the same work.)
Furthermore, and again IANAP, but it seems possible to me that if RA keeps arguing against A and ignoring B, he might actually start believing B. Alternatively, he might not specifically believe B, but he might stop thinking about B at all, and start ignoring the B step in his own reasoning and other people's.
So the way to avoid all of this is to raise all of your objections simultaneously, thusly:
WA: Men are better engineers than women, and should be paid more!
RA: Woah. Okay, first? There's no evidence to suggest that that's actually true. But secondly, even pretending for the moment that it were true, that would be no excuse for paying women less for the same work.
WA: Oh. Um. I'm pretty confident about that first point, but I never actually thought I'd have to defend the other bit. I'll go away now.
That's a best-case scenario, but it does avoid the problems above.
This post has already turned out longer than I intended, so I'll end it here. The last point I wanted to raise, though, is that an awful lot of Wrong Arguments (or good arguments, for that matter) take a form where A is an assertion of fact ("men are better engineers than women"), and B is an expression of morality ("... and therefore they should get paid more"). There are some important implications to this, for which I have a number of examples to present if people are interested.
To summarise: If someone says "A and B are true!", don't just say "A isn't true!". Say "A isn't true, and even if it were, B isn't true either!". Otherwise people might think you believe B, and they might even be right.
About addition and truth
This is intended to explore a thought I had, rather than to make any particular argument about truth.
The canonical example of a thing which is true without any obvious physical referent is the statement 2+2=4. It is true about fingers, sheep, particles, and galaxies; but intuitively it does not seem that any of those truths encapsulates the full meaning of the statement. Moreover, it certainly seems that there is nothing anyone could do to make the statement untrue; it seems that it would have to hold in any universe whatsoever.
Now my thought: How do we know that the physical universe operates on this sort of arithmetic, and not arithmetic modulo some obscenely large number? Suppose we repeat the experiment that convinces us 2+2=4 (and let's note that babies are presumably not born knowing this; they learn it by counting on their fingers, even if they do so at too young an age to express it in words), but with much larger integers. Perhaps we might find that, when we take 3^^^^3 particles and add 1, we are left with 3^^^^3 particles, without any awareness that any particles have disappeared. And what is more, if we take three sets of 3^^^^3 particles, and measure their mass separately and then together, we find that we get the same mass. After some long sequence of such experiments, perhaps we might convince ourselves that physics actually operates on integer arithmetic modulo 3^^^^3. (Which would be unexpected in that the physics we know operates on complex numbers, not integers, but perhaps that's an approximation to some fantastically fine-grained two-dimensional integer grid.)
What would this mean, if anything, for the truth of such statements as 2+2=4? It seems that it would then be a contingent truth, not a universal one; that there could in principle exist a universe whose physics operated on arithmetic modulo 3, so that 2+2=1. (Presumably such a universe would not have any sentient beings in it.) What if 2+2=4 is an observed fact about our universe on the same order as the electromagnetic constant or the speed of light?
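The contrast between "universal" addition and modular addition can be made concrete with a small sketch (a toy illustration of my own; ordinary Python integers stand in for "the universe's arithmetic"):

```python
from typing import Optional

def add_mod(x: int, y: int, modulus: Optional[int] = None) -> int:
    """Add x and y; if a modulus is given, do the addition in
    arithmetic modulo that number."""
    total = x + y
    return total if modulus is None else total % modulus

# The familiar, seemingly universal answer:
assert add_mod(2, 2) == 4

# In a universe running on arithmetic modulo 3, the same operation
# yields 1, as the post suggests:
assert add_mod(2, 2, modulus=3) == 1

# With an obscenely large modulus, small sums are indistinguishable
# from ordinary arithmetic -- every experiment we could feasibly run
# would come out the same either way:
assert add_mod(2, 2, modulus=10**100) == 4
```

This is the crux of the thought experiment: no experiment on small numbers can distinguish ordinary arithmetic from arithmetic modulo a sufficiently enormous number.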
What is Metaethics?
Part of the sequence: No-Nonsense Metaethics
When I say I think I can solve (some of) metaethics, what exactly is it that I think I can solve?
First, we must distinguish the study of ethics or morality from the anthropology of moral belief and practice. The first one asks: "What is right?" The second one asks: "What do people think is right?" Of course, one can inform the other, but it's important not to confuse the two. One can correctly say that different cultures have different 'morals' in that they have different moral beliefs and practices, but this may not answer the question of whether or not they are behaving in morally right ways.
My focus is metaethics, so I'll discuss the anthropology of moral belief and practice only when it is relevant for making points about metaethics.
So what is metaethics? Many people break the field of ethics into three sub-fields: applied ethics, normative ethics, and metaethics.
Applied ethics: Is abortion morally right? How should we treat animals? What political and economic systems are most moral? What are the moral responsibilities of businesses? How should doctors respond to complex and uncertain situations? When is lying acceptable? What kinds of sex are right or wrong? Is euthanasia acceptable?
Normative ethics: What moral principles should we use in order to decide how to treat animals, when lying is acceptable, and so on? Is morality decided by what produces the greatest good for the greatest number? Is it decided by a list of unbreakable rules? Is it decided by a list of character virtues? Is it decided by a hypothetical social contract drafted under ideal circumstances?
Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?
The benefits of madness: A positive account of arationality
This post originated in a comment I posted about a strange and unpleasant experience I had when pushing myself too hard mentally. People seemed interested in hearing about it, so I sat down to write. In the process, however, it became something rather different (and a great deal longer) than what I originally intended. The incident referred to in the above comment was a case of manic focus gone wrong; but the truth is, often in my life it's gone incredibly right. I've gotten myself into some pretty strange headspaces, but through discipline and quick thinking I have often been able to turn them to my advantage and put them to good use.
Part 1, then, lays out a sort of cognitive history, focusing on the more extreme states I've been in. Part 2 continues the narrative; this is where I began to learn to ride them out and make them work for me. Part 3 is the incident in question: where I overstepped myself and suffered the consequences.
Some of you, however, may want to skip ahead to part 4 (unless you find my autobiographical writings interesting as a case study). There, I've written a proposal for a series of posts about how to effectively use the full spectrum of somatic and cognitive states to one's advantage. I have vacillated for a long time about this, for reasons that will be discussed below, but I decided that if I was already laying this much on the line, I might as well take it a step further. Read if you will; and if you're interested, please say so.
Genes are overrated
This is hardly news, but this Guardian article reminded me of it - genes are really overrated, both among the unwashed masses and here on Less Wrong.
I don't want to repeat things which have been said by so many before me, so I'll just link a lot.
Summary of evidence against genes being important:
- Almost no genes correlating with anything interesting have been found. This is totally crushing evidence. If genes were important, the Bayesian surprise of this lack of results would put it in the land of the impossible.
- Massive, very fast changes in various supposedly highly heritable characteristics over time within the same populations. To name a few: the Flynn effect, changes in people's height, the obesity epidemic.
- Plenty of evidence of very large, very reliable associations between various environmental factors and various important outcomes. For example, unlike with genes and cancer, where we get just noise, we know very well how much smoking increases the chance of lung cancer.
Summary of evidence for genes being important:
- Some twin and adoption studies - which rely on very tiny, highly atypical samples and a lot of statistical manipulation to get the results they want. To make matters worse, the results they got were wildly inconsistent.
And there's nothing more. Decades ago, before we had direct evidence of the lack of correlation between genes and outcomes, it was excusable to believe that genes matter a lot, even if it was never the best interpretation of the data. Now it's just going against the bulk of the evidence.
And in case you're wondering how twin studies could show high heritability when everything else says otherwise, I have two examples for you.
This one from a critique of twin studies by Kamin and Goldberger:
"A case in point is provided by the recent study of regular tobacco use among SATSA's twins (24). Heritability was estimated as 60% for men, only 20% for women. Separate analyses were then performed for three distinct age cohorts. For men, the heritability estimates were nearly identical for each cohort. But for women, heritability increased from zero for those born between 1910 and 1924, to 21% for those in the 1925-39 birth cohort, to 64% for the 1940-58 cohort. The authors suggested that the most plausible explanation for this finding was that "a reduction in the social restrictions on smoking in women in Sweden as the 20th century progressed permitted genetic factors increasing the risk for regular tobacco use to express themselves." If purportedly genetic factors can be so readily suppressed by social restrictions, one must ask the question, "For what conceivable purpose is the phenotypic variance being allocated?" This question is not addressed seriously by MISTRA or SATSA. The numbers, and the associated modeling, appear to be ends in themselves."
As the final nail in the coffin of heritability studies:
The Body-Mass Index of Twins Who Have Been Reared Apart
We conclude that genetic influences on body-mass index are substantial, whereas the childhood environment has little or no influence. These findings corroborate and extend the results of earlier studies of twins and adoptees. (N Engl J Med 1990; 322:1483–7.)
Or as paraphrased by a certain commenter on Marginal Revolution:
IOWs, the reason why white kids of today are much fatter than white kids of the 50s and 60s is due to genetic influences and environment has little or no influence
To summarize: heritability studies are pretty much totally worthless data manipulation. Once we accept that, all the other evidence points to the environment being extremely important and genes mattering very little. We should accept that already.
An introduction to decision theory
This is part 1 of a sequence to be titled “Introduction to decision theory”.
Less Wrong collects together fascinating insights into a wide range of fields. If you understood everything in all of the blog posts, then I suspect you'd be in quite a small minority. However, a lot of readers probably do understand a lot of it. Then, there are the rest of us: The people who would love to be able to understand it but fall short. From my personal experience, I suspect that there are an especially large number of people who fall into that category when it comes to the topic of decision theory.
Decision theory underlies much of the discussion on Less Wrong and, despite buckets of helpful posts, I still spend a lot of my time scratching my head when I read, for example, Gary Drescher's comments on Timeless Decision Theory. At its core, this is probably because, despite reading a lot of decision theory posts, I'm not even 100% sure what causal decision theory or evidential decision theory is. Which is to say, I don't understand the basics. I think that Less Wrong could do with a sequence that introduces the relevant decision theory from the ground up and ends with an explanation of Timeless Decision Theory (and Updateless Decision Theory). I'm going to try to write that sequence.