Undiscriminating Skepticism
Tl;dr: Since it can be cheap and easy to attack everything your tribe doesn't believe, you shouldn't trust the rationality of just anyone who slams astrology and creationism; these beliefs aren't just false, they're also non-tribal among educated audiences. Test what happens when a "skeptic" argues for a non-tribal belief, or argues against a tribal belief, before you decide they're good general rationalists. This post is intended to be reasonably accessible to outside audiences.
I don't believe in UFOs. I don't believe in astrology. I don't believe in homeopathy. I don't believe in creationism. I don't believe there were explosives planted in the World Trade Center. I don't believe in haunted houses. I don't believe in perpetual motion machines. I believe that all these beliefs are not only wrong but visibly insane.
If you know nothing else about me but this, how much credit should you give me for general rationality?
Certainly anyone who was skillful at adding up evidence, considering alternative explanations, and assessing prior probabilities, would end up disbelieving in all of these.
But there would also be a simpler explanation for my views, a less rare factor that could explain it: I could just be anti-non-mainstream. I could be in the habit of hanging out in moderately educated circles, and know that astrology and homeopathy are not accepted beliefs of my tribe. Or just perceptually recognize them, on a wordless level, as "sounding weird". And I could mock anything that sounds weird and that my fellow tribesfolk don't believe, much as creationists who hang out with fellow creationists mock evolution for its ludicrous assertion that apes give birth to human beings.
You can get cheap credit for rationality by mocking wrong beliefs that everyone in your social circle already believes to be wrong. It wouldn't mean that I have any ability at all to notice a wrong belief that the people around me believe to be right, or vice versa - to further discriminate truth from falsity, beyond the fact that my social circle doesn't already believe in something.
Back in the good old days, there was a simple test for this syndrome that would get quite a lot of mileage: You could just ask me what I thought about God. If I treated the idea with deeper respect than I treated astrology, holding it worthy of serious debate even if I said I disbelieved in it, then you knew that I was taking my cues from my social surroundings - that if the people around me treated a belief as high-prestige, high-status, I wouldn't start mocking it no matter what the state of evidence.
On the other hand, suppose I said without hesitation that my epistemic state on God was similar to my epistemic state on psychic powers: no positive evidence, lots of failed tests, highly unfavorable prior, and if you believe it under those circumstances then something is wrong with your mind. Then you would have heard a bit of skepticism that might cost me something socially, and that not everyone around me would have endorsed, even in educated circles. You would know it wasn't just a cheap way of picking up points.
Today the God-test no longer works, because some people realized that the taking-it-seriously aura of religion is in fact the main thing left which prevents people from noticing the epistemic awfulness; there has been a concerted and, I think, well-advised effort to mock religion and strip it of its respectability. The upshot is that there are now quite wide social circles in which God is just another stupid belief that we all know we don't believe in, on the same list with astrology. You could be dealing with an adept rationalist, or you could just be dealing with someone who reads Reddit.
And of course I could easily go on to name some beliefs that others think are wrong and that I think are right, or vice versa, but would inevitably lose some of my audience at each step along the way - just as, a couple of decades ago, I would have lost a lot of my audience by saying that religion was unworthy of serious debate. (Thankfully, today this outright dismissal is at least considered a respectable, mainstream position even if not everyone holds it.)
I probably won't lose much by citing anti-Artificial-Intelligence views as an example of undiscriminating skepticism. I think a majority among educated circles are sympathetic to the argument that brains are not magic and so there is no obstacle in principle to building machines that think. But there are others, albeit in the minority, who recognize Artificial Intelligence as "weird-sounding" and "sci-fi", a belief in something that has never yet been demonstrated, hence unscientific - the same epistemic reference class as believing in aliens or homeopathy.
(This is technically a demand for unobtainable evidence. The asymmetry with homeopathy can be summed up as follows: First: If we learn that Artificial Intelligence is definitely impossible, we must have learned some new fact unknown to modern science - everything we currently know about neurons and the evolution of intelligence suggests that no magic was involved. On the other hand, if we learn that homeopathy is possible, we must have learned some new fact unknown to modern science; if everything else we believe about physics is true, homeopathy shouldn't work. Second: If homeopathy works, we can expect double-blind medical studies to demonstrate its efficacy right now; the absence of this evidence is very strong evidence of absence. If Artificial Intelligence is possible in theory and in practice, we can't necessarily expect its creation to be demonstrated using current knowledge - this absence of evidence is only weak evidence of absence.)
I'm using Artificial Intelligence as an example, because it's a case where you can see some "skeptics" directing their skepticism at a belief that is very popular in educated circles, that is, the nonmysteriousness and ultimate reverse-engineerability of mind. You can even see two skeptical principles brought into conflict - does a good skeptic disbelieve in Artificial Intelligence because it's a load of sci-fi which has never been demonstrated? Or does a good skeptic disbelieve in human exceptionalism, since it would require some mysterious, unanalyzable essence-of-mind unknown to modern science?
It's on questions like these where we find the frontiers of knowledge, and everything now in the settled lands was once on the frontier. It might seem like a matter of little importance to debate weird non-mainstream beliefs; a matter for easy dismissals and open scorn. But if this policy is implemented in full generality, progress goes down the tubes. The mainstream is not completely right, and future science will not just consist of things that sound reasonable to everyone today - there will be at least some things in it that sound weird to us. (This is certainly the case if something along the lines of Artificial Intelligence is considered weird!) And yes, eventually such scientific truths will be established by experiment, but somewhere along the line - before they are definitely established and everyone already believes in them - the testers will need funding.
Being skeptical about some non-mainstream beliefs is not a fringe project of little importance, not always a slam-dunk, not a bit of occasional pointless drudgery - though I can certainly understand why it feels that way to argue with creationists. Skepticism is just the converse of acceptance, and so to be skeptical of a non-mainstream belief is to try to contribute to the project of advancing the borders of the known - to stake an additional epistemic claim that the borders should not expand in this direction, and should advance in some other direction instead.
This is high and difficult work - certainly much more difficult than the work of mocking everything that sounds weird and that the people in your social circle don't already seem to believe.
To put it more formally, before I believe that someone is performing useful cognitive work, I want to know that their skepticism discriminates truth from falsehood, making a contribution over and above the contribution of this-sounds-weird-and-is-not-a-tribal-belief. In Bayesian terms, I want to know that p(mockery|belief false & not a tribal belief) > p(mockery|belief true & not a tribal belief).
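The inequality above can be made concrete with a toy Bayesian calculation (all the probabilities below are made-up numbers for illustration, not claims about any actual skeptic): how much should observing a skeptic's mockery of X shift our belief that X is false?

```python
def posterior_false_given_mockery(p_false, p_mock_given_false, p_mock_given_true):
    """Bayes' rule: P(X false | skeptic mocks X), for a non-tribal belief X."""
    p_true = 1.0 - p_false
    numerator = p_mock_given_false * p_false
    evidence = numerator + p_mock_given_true * p_true
    return numerator / evidence

prior_false = 0.5  # assumed prior, for illustration only

# A discriminating skeptic mocks false weird beliefs far more often than true ones.
discriminating = posterior_false_given_mockery(prior_false, 0.9, 0.1)

# An undiscriminating skeptic mocks everything weird at the same rate, so
# p(mockery | false) == p(mockery | true) and the mockery carries no evidence.
undiscriminating = posterior_false_given_mockery(prior_false, 0.9, 0.9)

print(discriminating)    # 0.9 -- the mockery is strong evidence against X
print(undiscriminating)  # 0.5 -- the mockery leaves the prior untouched
```

The point of the sketch: when the two conditional probabilities are equal, the likelihood ratio is 1 and the posterior equals the prior, which is exactly what it means for the skeptic's mockery to provide no cognitive work.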
If I recall correctly, the US Air Force's Project Blue Book, on UFOs, explained away as a sighting of the planet Venus what turned out to actually be an experimental aircraft. No, I don't believe in UFOs either; but if you're going to explain away experimental aircraft as Venus, then nothing else you say provides further Bayesian evidence against UFOs either. You are merely an undiscriminating skeptic. I don't believe in UFOs, but in order to credit Project Blue Book with additional help in establishing this, I would have to believe that if there were UFOs then Project Blue Book would have turned in a different report.
And so if you're just as skeptical of a weird, non-tribal belief that turns out to have pretty good support, you just blew the whole deal - that is, if I pay any extra attention to your skepticism, it ought to be because I believe you wouldn't mock a weird non-tribal belief that was worthy of debate.
Personally, I think that Michael Shermer blew it by mocking molecular nanotechnology, and Penn and Teller blew it by mocking cryonics (justification: more or less exactly the same reasons I gave for Artificial Intelligence). Conversely, Richard Dawkins scooped up a huge truckload of actual-discriminating-skeptic points, at least in my book, for not making fun of the many-worlds interpretation when he was asked about it in an interview; indeed, Dawkins noted (correctly) that the traditional collapse postulate pretty much has to be incorrect. The many-worlds interpretation isn't just the formally simplest explanation that fits the facts, it also sounds weird and is not yet a tribal belief of the educated crowd; so whether someone makes fun of MWI is indeed a good test of whether they understand Occam's Razor or are just mocking everything that's not a tribal belief.
Of course you may not trust me about any of that. And so my purpose today is not to propose a new litmus test to replace atheism.
But I do propose that before you give anyone credit for being a smart, rational skeptic, that you ask them to defend some non-mainstream belief. And no, atheism doesn't count as non-mainstream anymore, no matter what the polls show. It has to be something that most of their social circle doesn't believe, or something that most of their social circle does believe which they think is wrong. Dawkins endorsing many-worlds still counts for now, although its usefulness as an indicator is fading fast... but the point is not to endorse many-worlds, but to see them take some sort of positive stance on where the frontiers of knowledge should change.
Don't get me wrong, there's a whole crazy world out there, and when Richard Dawkins starts whaling on astrology in "The Enemies of Reason" documentary, he is doing good and necessary work. But it's dangerous to let people pick up too much credit just for slamming astrology and homeopathy and UFOs and God. What if they become famous skeptics by picking off the cheap targets, and then use that prestige and credibility to go after nanotechnology? Who will dare to consider cryonics now that it's been featured on an episode of Penn and Teller's "Bullshit"? On the current system you can gain high prestige in the educated circle just by targeting beliefs like astrology that are widely believed to be uneducated; but then the same guns can be turned on new ideas like the many-worlds interpretation, even though it's being actively debated by physicists. And that's why I suggest, not any particular litmus test, but just that you ought to have to stick your neck out and say something a little less usual - say where you are not skeptical (and most of your tribemates are) or where you are skeptical (and most of the people in your tribe are not).
I am minded to pay attention to Robyn Dawes as a skillful rationalist, not because Dawes has slammed easy targets like astrology, but because he also took the lead in assembling and popularizing the total lack of experimental evidence for nearly all schools of psychotherapy and the persistence of multiple superstitions such as Rorschach ink-blot interpretation in the face of literally hundreds of experiments trying and failing to find any evidence for it. It's not that psychotherapy seemed like a difficult target after Dawes got through with it, but that, at the time he attacked it, people in educated circles still thought of it as something that educated people believed in. It's not quite as useful today, but back when Richard Feynman published "Surely You're Joking, Mr. Feynman" you could pick up evidence that he was actually thinking from the fact that he disrespected psychotherapists as well as psychics.
I'll conclude with some simple and non-trustworthy indicators that the skeptic is just filling in a cheap and largely automatic mockery template:
- The "skeptic" opens by remarking about the crazy true believers and wishful thinkers who believe in X, where there seem to be a surprising number of physicists making up the population of those wacky cult victims who believe in X. (The physicist-test is not an infallible indicator of rightness or even non-stupidity, but it's a filter that rapidly picks up on, say, strong AI, molecular nanotechnology, cryonics, the many-worlds interpretation, and so on.) Bonus point losses if the "skeptic" remarks on how easily physicists are seduced by sci-fi ideas. The reason why this is a particularly negative indicator is that when someone is in a mode of automatically arguing against everything that seems weird and isn't a belief of their tribe - of rejecting weird beliefs as a matter of naked perceptual recognition of weirdness - then they tend to perceptually fill-in-the-blank by assuming that anything weird is believed by wacky cult victims (i.e., people Not Of Our Tribe). And they don't backtrack, or wonder otherwise, even if they find out that the "cult" seems to exhibit a surprising number of people who go around talking about rationality and/or members with PhDs in physics. Roughly, they have an automatic template for mocking weird beliefs, and if this requires them to just swap in physicists for astrologers as gullible morons, that's what they'll do. Of course physicists can be gullible morons too, but you should be establishing that as a surprising conclusion, not using it as an opening premise!
- The "skeptic" offers up items of "evidence" against X which are not much less expected in the case that X is true than in the case that X is false; in other words, they fail to grasp the elementary Bayesian notion of evidence. I don't believe that UFOs are alien visitors, but my skepticism has nothing to do with all the crazy people who believe in UFOs - the existence of wacky cults is not much less expected in the case that aliens do exist, than in the case that they do not. (I am skeptical of UFOs, not because I fear affiliating myself with the low-prestige people who believe in UFOs, but because I don't believe aliens would (a) travel across interstellar distances AND (b) hide all signs of their presence AND THEN (c) fly gigantic non-nanotechnological aircraft over our military bases with their exterior lights on.)
- The demand for unobtainable evidence is a special case of the above, and of course a very common mode of skepticism gone wrong. Artificial Intelligence and molecular nanotechnology both involve beliefs in the future feasibility of technologies that we can't build right now, but (arguendo) seem to be strongly permitted by current scientific belief, i.e., the non-ineffability of the brain, or the basic physical calculations which seem to show that simple nanotechnological machines should work. To discard all the arguments from cognitive science and rely on the knockdown argument "no reliable reporter has ever seen an AI!" is blindly filling in the template from haunted houses.
- The "skeptic" tries to scare you away from the belief in their very first opening remarks: for example, pointing out how UFO cults beat and starve their victims (when this can just as easily happen if aliens are visiting the Earth). The negative consequences of a false belief may be real, legitimate truths to be communicated; but only after you establish by other means that the belief is factually false - otherwise it's the logical fallacy of appeal to consequences.
- They mock first and counterargue later or not at all. I do believe there's a place for mockery in the war on dumb ideas, but first you write the crushing factual counterargument, then you conclude with the mockery.
I'll conclude the conclusion by observing that poor skepticism can just as easily exist in a case where a belief is wrong as when a belief is right, so pointing out these flaws in someone's skepticism can hardly serve to establish a positive belief about where the frontiers of knowledge should move.
Comments (1329)
Responding to an old post:
This is wrong. Explaining away a single experimental aircraft as Venus doesn't mean that you're an undiscriminating skeptic or that whether there are UFOs doesn't make any difference to what you would say. It just means that you've made one mistake. And estimates based on probability are going to turn out to be wrong sometimes; there could very well be one that is an aircraft but where the available information indicates that Venus is more likely than an aircraft. Someone using this available information would legitimately (although incorrectly) deduce that the object is Venus.
This seems like a bad metric. Perhaps if it was changed to "domain experts".
Physicists are not domain experts on everything and can have some nutty beliefs about fields outside of their area.
Want to know if someone is a good rationalist? Ask them what the best arguments are for a belief he strongly opposes on a complex issue. See if the arguments he gives are the strongest ones, or the weak ones. To strongly oppose a belief on a complex issue requires hearing the best arguments from both sides. Being unaware of the best opposing arguments, or being unwilling to speak them, is pretty good evidence that he let his biases get in the way of his reasoning.
It helps if, prior to using this technique, I've given them reason to trust me to be primarily interested in something other than scoring points off of them by "winning" arguments.
If a lot of crazy people believe in UFOs, it's probably not because every crazy person picked a random page in the dictionary and said "I'll have a crazy belief about that". Rather, it's probably because the human mind has intrinsic flaws for which characteristics of the UFO meme happen to be a good match. If I conclude that UFOs exist, it is more likely that my reasoning process was corrupted by these intrinsic human flaws and therefore that my argument has an unnoticed flaw than if I conclude something else which isn't a subject of cult behavior. Of course, if I assume that my mind is unflawed, this doesn't apply, but I really shouldn't go around assuming that my mind is unflawed.
And even if I assume that my mind doesn't contain any UFO-leaning flaws, I can't assume the same about other people. Any evidence they provide is more likely to be biased. Even if I just try to analyze the arguments made by other people for UFOs, that set of arguments will contain a larger proportion of bad arguments than a similar set of arguments for a non-cultish proposition. Assuming that I am equally good at detecting bad arguments for UFOs and for the non-cultish proposition, it is then more likely overall that a bad UFO argument will slip by my filters than a bad argument for the non-cultish proposition. Again, if I assume that I'm perfect at reasoning and never let bad arguments of any type pass my filters, this doesn't apply, but I can't assume that.
What if someone has rational reasons for rejecting a belief such as cryonics, but is deliberately using Dark Art rhetoric to talk more convincingly about that belief by associating it with low-status people? You'd class them as irrational when you should class them as unethical.
They would be classed as irrational based on the belief that they do not, in fact, have 'rational reasons' for their decision. If that belief is false then it is false - just like any other. The specifics of their Dark Arts rhetoric gives some evidence regarding both their ethics and their beliefs but only a small amount.
Both humbleness and arrogance are rationalist sins, but arrogance is worse. I can think of at least 4 different things wrong with this level of self-assurance in this context, and none of them have anything to do with cryonics in particular.
I'm not saying that there's anything wrong with assigning a high probability to cryonics working. I'm saying that mentally dinging others for not agreeing with you is likely to leave your assessment of both them and of cryonics worse overall than if you didn't.
No level of self-assurance is specified or implied by me. My reply was to your complaint regarding the qualitative nature of the judgement - that is, that the dishonest people are being judged as irrational rather than unethical when their irrational-seeming arguments are in fact not sincere.
I don't know much about 'dinging', but the simple act of disagreeing is already an act of disrespect. Further, when trying to seek out people whose opinions can provide strong evidence to you it is necessary to apply discretion. Whether that means "negative dings" to people whose beliefs are, to the best of your ability to judge, misguided or the assignment of "positive dings" to other people who execute behaviors that you judge as superior, in the end you need to judge the thoughts of others if you hope to improve your own.
"If that belief is false then it is false - just like any other." I read this statement as an arrogant assumption that a sufficiently rational person could expect to correctly judge the probability of cryonics working with enough certainty that disagreeing with them would be "just like" disagreeing with reality. (Yes, I realize that this is not what you were directly saying with the words "just like", but as far as I can see it is in fact implied.)
Of course you judge thought processes in order to improve your own. And it's even rational to judge people by individual examples of their thought processes, though the human tendency is to overdo, not underdo, such generalized judging of people.
But any belief about cryonics (and similar areas) is of necessity based on a relatively long chain of inference from any possible direct evidence. There are much richer ways to assess the quality of that chain than by whether it reaches the same conclusion as your own.
I read "that belief" as referring to "the belief that they do not, in fact, have 'rational reasons' for their decision". "Just like any other", then, probably refers to the fact that many rational beliefs turn out to be incorrect (rather, frequently the optimally epistemically rational degree of credence for a proposition is further from the mark than a degree of credence selected by an alternate method, or is somewhat high despite the belief actually being incorrect, though on average rational degrees of belief are more accurate).
In other words, this is not entirely correct:
In this case, it might be (epistemically) correct to class them as irrational (with some probability, etc.), given the information you have about them.
Similarly, if someone draws a card at random from a standard 52-card deck, your degree of credence that it is the seven of diamonds should be 1/52 - it wouldn't be correct to be more confident than that, even if in actuality it IS the seven of diamonds, as this is information you do not have access to.
(ETA: I'm speaking abstractly here - making no comment on rational beliefs about cryonics.)
I don't think filtering people by rationality is a good idea at all. It's pretty much the definition of an ad hominem argument, and also a more-harmful-than-average case of the fundamental attribution error. Yes, it might be able to give you an early advantage on deciding whether they are right in any particular case; but that advantage would quickly evaporate as you got new data, and in most cases you'd already have enough data from the start for it to be a disadvantage (given limited human bandwidth).
Ad hominem reasoning is not always fallacious.
When someone else makes an argument that doesn't seem right to you, your estimation of whether it's they or you who're making a mistake should vary widely depending on whether the argument is coming from someone with an established history of predictive expertise in contentious cases, or from Bob the Biased Bozo.
I didn't say it was fallacious, I said it was a waste of bandwidth. There are almost always other, better clues about whether some statement is right or wrong. And even for filtering attention, it's not the best heuristic. If someone is just telling you things you already know, it doesn't really matter if they're being rational or just parrots, they're not worth paying attention to.
Okay, so astrology to me sounds extremely unscientific. But I haven't read anything on the subject, other than knowing that it's something a lot of scientists think is unscientific. To be perfectly fair, I can't just dismiss it because other people dismiss it.
I'd like to be able to dismiss it for scientific reasons. Because I was reading my horoscope, and I was like, "Hmm, well these are extremely vague statements that could apply to anyone and that I don't particularly identify with." But then I was reading a friend's, and I majorly freaked out because of how accurate it was.
So because of that, I now want to know the truth. Either astrology works or it doesn't. Does anyone know how I could go about determining this? I mean, does anyone have any books or online articles that they would recommend? I'd really appreciate it. I just want to understand.
You should read the essay "Science: Conjectures and Refutations" by Karl Popper. In short, although astrology may use things like observation, it is not scientific. Why? Well, you answered your own question: it's made up of extremely vague statements that will always be true. The virtue of a scientific theory is not in its ability to be proven true, but its ability to be proven untrue.
Let me use a simple analogy:
Let's say I tell you that I have a theory about why people commit murder. I say the sole reason why people are killers is because they had poor relationships with their parents, or if they were orphans with the major adult figures in their lives.
Now, say we look at a sampling of convicted murderers: there are no cases where you cannot interpret their childhood as satisfying my criteria above.
I'm anticipating some disagreement with what I've said, so let me find some random examples and I'll try to show how each can be interpreted to agree with my hypothesis no matter the case.
I'm simply going to go through a few of the people on this list: http://en.wikipedia.org/wiki/List_of_serial_killers_by_number_of_victims
First one "Luis Alfredo Garavito Cubillos": Here is a short blurb about him,
"Garavito's victims were poor children, peasant children, or street children, between the ages of 8 and 16. Garavito approached them on the street or countryside and offered them gifts or small amounts of money. After gaining their trust, he took the children for a walk and when they got tired, he would take advantage of them. He then raped them, cut their throats, and usually dismembered their corpses. Most corpses showed signs of torture."
This fits my original hypothesis because the victims were all children. Clearly he had a poor childhood as a result of the upbringing his parents gave him, which resulted in his neuroses and violent thoughts towards children. The fact that he gave them gifts also shows that his parents most likely had a poor relationship with him, and he is expressing negative behavior towards typical parent-child actions such as giving a child candy or gifts.
Let's look at another one.
"Daniel Camargo Barbosa" : According to wikipedia "Camargo's mother died when he was a little boy and his father was overbearing and emotionally distant. He was raised by an abusive stepmother, who punished him and sometimes dressed him in girls clothing, making him a victim of ridicule in front of his peers"
This obviously fits the hypothesis as well: his stepmother was abusive. We're doing pretty well with this idea, right?
And another one: "Ahmad Suradji"
"He told police that he had a dream in 1988 in which his father's ghost told him to kill 70 women and drink their saliva, so that he could become a mystic healer"
The reason why he committed his crimes is because of his relationship with his father, which is evident by the "ghost of his father" telling him to kill the women.
And yet another one: "John Wayne Gacy"
"Throughout his childhood, Gacy strove to make his father proud of him, but seldom received his approval: One of Gacy's earliest childhood memories was of being beaten with a leather belt by his father at the age of 4"
This also fits my hypothesis: he had a poor relationship with his father, which led to him becoming a killer.
That's enough; I think my point has been made. In any situation the evidence could be interpreted as confirming the original hypothesis, even if there was no clear evidence that it was their relationship with their parents that caused them to become serial killers.
This is what's known as post hoc ergo propter hoc reasoning: http://en.wikipedia.org/wiki/Post_hoc_ergo_propter_hoc and it also falls under confirmation bias: http://en.wikipedia.org/wiki/Confirmation_bias
If my original hypothesis had been more specific, such as "all people who have poor relationships with their parents or parental figures will become killers", then it would be much easier to disprove, and thus a much more useful hypothesis.
One of the things that most believers in astrology will do when presented with refuting evidence is to interpret the evidence in such a way that it either does not refute astrology or confirms it. If you look at any scientific theory you will see that it's possible for it to be wrong if there is evidence that refutes it, and in fact there are many cases where scientific theories have been proven wrong by observations made after formulating the hypothesis.
The key idea showing why astrology doesn't work, in my opinion, is its lack of riskiness: you may feel that the statements are very accurate, but this is only because they are meant to be overly generic.
Here's a link:
http://rationalwiki.org/wiki/Astrology
In brief, there is no evidence from properly conducted trials that astrology can predict future events at a rate better than chance. In addition physics as we currently understand it precludes any possible effect on us from objects so far away.
Astrology can appear to work through a variety of cognitive biases or can be made to appear to work through various forms of trickery. For example when someone is majorly freaked out by the accuracy of a guess (and with a large enough population reading a guess it's bound to be accurate for some of them) that is much more memorable and much more likely to be shared with others than times when the prediction is obviously wrong. As such the availability heuristic might make you think that such instances are far more common than they actually are, while the actual frequency is entirely explicable by chance alone.
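A small simulation makes the "bound to be accurate for some of them" point vivid (the per-reader hit rate here is an assumption chosen for illustration, not measured data): if each of N readers independently finds a vague horoscope "eerily accurate" with some modest probability, a striking hit somewhere in your social circle is nearly guaranteed.

```python
import random

def chance_of_at_least_one_hit(n_readers, p_hit, trials=10_000, seed=0):
    """Estimate P(at least one reader reports a 'freaky' match) by Monte Carlo."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # One trial: does any of the n_readers get a chance "hit"?
        if any(rng.random() < p_hit for _ in range(n_readers)):
            hits += 1
    return hits / trials

# Even a 5% per-reader hit rate makes a "freaky" match near-certain among
# 100 readers: analytically, 1 - 0.95**100 is about 0.994.
print(chance_of_at_least_one_hit(100, 0.05))
```

And since the hits get retold while the misses are forgotten, the availability heuristic inflates the apparent accuracy well beyond what even this chance baseline would suggest.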
So our world would look exactly the same without astronomy? (I'm kidding of course but that statement should require further qualification)
Here's a really neat chart from OkTrends (a blog discussing data from the dating website OkCupid) showing match percentages between people of various astrological signs, based on similarity between the users' answers to a wide range of questions:
http://cdn.okcimg.com/blog/races_and_religions/Match-By-Zodiac-Title.png
The data there implies pretty strongly that astrological sign has no predictive ability when it comes to a person's self-description.
Unless they had several thousand couples for each one of the 144 cells, I'm very surprised there weren't bigger fluctuations due to chance alone. (And that single “59” shows that they didn't round all numbers to the nearest ten.)
Indeed they did -- about 868 million couples per cell by my reckoning, or about half that if they're only pairing based on preferred gender:
Sorry, I should have linked the article earlier instead of just the chart.
On sample size: Keep in mind that it isn't couples that are being looked at here, just comparisons between users' self-reports. Specifically, each question has two answers: The user's self-report, and what they would want a potential date to answer. The compatibility percentage is based on matching from A's wants to B's reports and vice-versa.
For the article, data was collected from a randomly selected pool of 500,000 straight users. The gender balance among straight users is about 60% men, 40% women, so that's about 25,000 men in each row and 17,000 women in each column. So each cell has about 400 million comparisons.
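Spelled out, the arithmetic above looks like this (a quick sanity check using the numbers quoted in this comment, not anything taken from OkTrends directly):

```python
# Rough per-cell sample-size estimate for the 12x12 zodiac match table.
# All figures come from the comment above: 500,000 straight users, ~60% men.
pool = 500_000            # randomly selected straight users
men = int(pool * 0.60)    # ~300,000 men
women = pool - men        # ~200,000 women

men_per_sign = men // 12      # one zodiac row: ~25,000 men
women_per_sign = women // 12  # one zodiac column: ~16,666 women

# Every man-woman pairing within a cell contributes one match-percentage
# comparison, so each cell aggregates roughly 400 million comparisons.
comparisons_per_cell = men_per_sign * women_per_sign
print(f"{men_per_sign:,} men x {women_per_sign:,} women "
      f"= {comparisons_per_cell:,} comparisons per cell")
```

With samples that large, per-cell fluctuations due to chance alone would be tiny, which is consistent with the flat chart.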
Arguments from authority are invalid, but they are often inductively strong. If a community has a good track record for having good judgement within a given domain, then any particular judgement they make within that domain is evidence (sometimes weak, but sometimes strong) for the truth of their judgement. Arguably, scientists have relevant expertise in recognising what is and isn't science.
No they aren't. Incorrectly applied arguments from authority are often invalid but the form of argument is not itself intrinsically invalid. You do acknowledge this in your reasoning but I'd like to emphasize that the initial conclusion "Arguments from authority are invalid" isn't actually correct and that the 'inductive strength' makes the arguments valid when used correctly.
An argument is valid if and only if the truth of its premises entails the truth of its conclusion.
The truth of the premises of an argument from authority does not entail the truth of its conclusion.
Therefore, arguments from authority are not valid.
Note: This argument is valid in the sense I am using the term.
If your use of the term valid is such that arguments from authority are (necessarily) invalid then your use of the term is simply wrong. The very wikipedia link that you provide explains it as one of the many forms of potentially valid argument that is often used fallaciously. The following is an example of a valid argument form:
If you wish to trace the error in conclusion back to a specific false premise then it may be the (false) assumption "All valid arguments are deductive arguments".
That is indeed a valid argument-form, in basic classical logic. To illustrate this we can just change the labels to ones less likely to cause confusion:
The problem arises when instead of sticking a label on the set like "Snarfly" or "bulbous" or whatever you use a label such as "likely to be correct", and people start trying to pull meaning out of that label and apply it to the argument they've just heard. Classical logic, barring specific elaborations, just doesn't let you do that. Classical logic just wants you to treat each label as a meaningless and potentially interchangeable label.
In classical logic if you make up a set called "statements which are likely to be correct" then a statement is either a member of that set or it isn't. (Barring paradoxical scenarios). If it's a member of that set then it is always and forever a member of that set no matter what happens, and if it's not a member then it is always and forever not a member. This is totally counterintuitive because that label makes you want to think that objects should be able to pop in and out of that set as the evidence changes. This is why you have to be incredibly careful in parsing classical-logic arguments that use such labels because it's very easy to get confused about what is actually being claimed.
What's actually being claimed by that argument in classical logical terms is "Z is 'likely to be correct', and Z always will be 'likely to be correct', and this is an eternal feature of the universe". The argument for that conclusion is indeed valid, but once the conclusion is properly explicated it immediately becomes patently obvious that the second premise isn't true and hence the argument is unsound.
Where the parent is simply mistaken in my view is in presenting the above as an instance of the argument from authority. It's not, simply because the argument from authority as it's usually construed contains the second premise only in implicit form and reaches a more definite conclusion. The argument from authority in the sense that it's usually referred to just goes:
That is indeed an invalid argument.
You can turn it in to a valid argument by adding something like:
2a. Everything Person X says about Y is true.
...but then it wouldn't be the canonical appeal to authority any more.
The link I provided (here) does not contain the string "valid" as of 01:43 1/22/2012 Phoenix, Arizona time. What it does say is:
Inductively Strong != Valid
That is more than a tad disingenuous. You seem to be trying to claim that because the string 'valid' is not present in the text, the clear meaning of the text cannot be that arguments from authority can be valid. I hope you agree that this sounds silly if made explicit. Things that are present in the article are the phrase 'statistical syllogism' and the inclusion of "Fallacious appeals to authority" as a whole separate subsection. That section opens by explaining:
... This is an explanation of how fallacious arguments from authority differ from valid ones.
Yes, this is exactly my position.
That argument is not valid. Valid arguments don't become invalid with the introduction of additional information, but the argument you provided does. For instance, compare these two arguments:
1.)
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
2.)
All men are mortal.
Socrates is a man.
Socrates is in extremely good health for his age.
Therefore, Socrates is mortal.
This argument will stay valid no matter how many additional premises we add (provided the premises do not contradict each other). Here is a variation of the argument you provided with additional information:
Person X has reputation for being an expert on Y.
Things said about Y by a person who has a reputation for being an expert on Y are likely to be correct.
Person X said Z about Y.
Person X said Z because he was paid $1,000,000 by person A.
Person X doesn't really believe Z.
Z is likely to be correct.
There is no contradiction between an argument having arbitrarily high inductive strength (like the very best arguments from authority) and still being invalid.
What? Of course it's valid (logically). The first three statements are premises and the final statement is the conclusion, which is entailed by the premises. If things said about Y by person X are likely to be correct and person X says Z about Y then Z is likely to be correct. That's a trivial deduction.
The argument is however not necessarily sound, because the premise "Things said about Y by a person who has a reputation for being an expert on Y are likely to be correct" is not universally true, for example if the person is saying stuff which blatantly contradicts other far stronger evidence.
Edit: Okay, enough silliness. Here is a formalised version of the above argument. You could run it through a proof checker, probably.
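The formalised version referred to here isn't reproduced in the thread. One way such a formalisation might look (my own reconstruction, with invented names, treating "likely to be correct" as an opaque predicate as discussed above) is the following Lean sketch, which a proof checker will accept:

```lean
-- Hypothetical reconstruction of the argument from L135-L140.
-- "Likely" is an uninterpreted label, per the classical-logic reading above.
variable (Person Statement : Type)
variable (Expert : Person → Prop)           -- has a reputation for expertise on Y
variable (Said : Person → Statement → Prop) -- said the statement about Y
variable (Likely : Statement → Prop)        -- "is likely to be correct"

-- Premise 1: everything said by a reputed expert is likely to be correct.
-- Premises 2-3: x is such an expert and said z.
-- Conclusion: z is likely to be correct.
example (x : Person) (z : Statement)
    (h1 : ∀ p s, Expert p → Said p s → Likely s)
    (h2 : Expert x) (h3 : Said x z) : Likely z :=
  h1 x z h2 h3
```

Note that the extra premises about payment and belief (L138-L139) simply go unused: adding premises cannot break a deductively valid inference, which is the point being argued.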
This argument is valid. It is not sound, because premise 2 is false. This is basic logic.
Not only is it valid it is trivially so. It does not even rely on the possibility of there being valid inductive arguments. I made it the most simple of deductions from supplied premises.
Your problem here seems to be that you object to deducing a conclusion of 'likely to be' from a premise of 'likely to be'. By the very nature of uncertain information, things that are merely likely do not always occur, and yet this does not make reasoning about likely things invalid so long as uncertainty is preserved correctly. (The premise could possibly be neatened up so that it includes a perfect technical explanation with ceteris paribus clauses, etc., but the meaning seems to be clear as it stands.)
If the argument was in the form of a deduction when only an induction is possible from the information then the appeal to authority is invalid. If the argument is a carefully presented inductive claim then it most certainly can be valid.
Not all arguments are deductions. Not all arguments that are not deductions are invalid.
Jayson_Virissimo is talking about logical validity. The argument is not logically valid, because it is possible for "Z is likely to be correct" to be false, even if the other statements are true (for instance, add the premise "Z is incorrect"). Induction is not (in general) logically valid. It's valid in other senses, but not that one.
Yes, we both are. We have gone as far as to accept a shared definition of logical validity and trace the dispute from there.
This is simply false. The following premise:
... becomes false the moment there is in fact a "thing said &lt;etc, etc&gt;" that is not likely to be correct. That's why I put it there! It is an instance of the class of premise "ALL G ARE W" and so, just like all other premises in that class, it is false if there is a G that is NOT W. It just so happens that 'likelihood' is the subject matter here.
The above serves to make the premise in question rather brittle. While it does mean that the whole argument can be treated as deductive reasoning (about the subject of likelihoods), it also means that there are very few worlds in which that premise is true and meaningful.
I interpreted your premise as: (Things said about Y by a person who has a reputation for being an expert on Y) are likely to be (correct.) as opposed to (Things said about Y by a person who has a reputation for being an expert on Y) are (likely to be correct.)
If, as you seem to be agreeing, a thing cannot be "likely to be correct" and "incorrect" (as known by the same reasoner), then the premise reduces to "Things said about Y by a person who has a reputation for being an expert on Y are correct".
Is this really what you intended?
No it cannot.
That which is said to be invalid in the text that you link to (things such as generalizing from anecdotes to make mathematically certain claims about a set) is not the same kind of reasoning as that which we are talking about here. Here we are talking about probabilistic arguments, about which you say:
That leaves us at an impasse. There is not really much more I can say if you pit yourself against what is a foundational premise of this site: That the correct way to reason from evidence is to use Bayesian updating. You have essentially dismissed the vast majority of all useful reasoning as invalid. I disagree strongly.
The terms "valid" and "invalid" have a precise logical meaning; that is the meaning Jayson_Virissimo intends, as they have said many times now.
As you are using them, you seem to mean "well-grounded, justifiable, effective, appropriate, and etc."
Really this all could have been avoided if you all had just taboo'd the offending terms.
You are correctly restating my claim. The vast majority of all useful reasoning is invalid. And by "invalid" I mean that it would not be self-contradictory to affirm the premises and deny the conclusion.
Probabilistic arguments are not the same as logical arguments. A logical argument contains all information pertinent to the argument within itself. A probabilistic argument, by including words such as "likely" or "probably", explicitly states that there is information to be had outside the argument. Probabilistic arguments are necessarily changed by the inclusion of more information.
Agreed. Probabilistic arguments are necessarily invalid (except when the probability of every relevant premise is equal to 1).
Is this an example of the persuasion tactic advocated (or described) recently? That is, you open with 'agreed' and then clearly say something that would undermine drethelin's whole comment.
No. I affirm all 4 sentences in drethelin's comment. Also, I maintain that nothing in drethelin's comment contradicts anything I have said in this discussion.
Really I just think he's using a stupidly strict definition of "Valid"
The stuff you find in newspapers isn't really horoscopes.
Anyway... I seem to remember that some organisation in India drew up a bunch of horoscopes of normal kids, mixed in a bunch of horoscopes of disabled kids, and challenged astrologers to figure out which were which.
You could do something like that, if you had the inclination to study the subject... Have a lot of LWers drop in their horoscopes (there's gotta be a horoscope creation tool somewhere on the internet) and see if you can deduce their major life events. Or pay an astrologer to do it.
As OtherDave said, all you need is a blind-test. You need to read the horoscopes WITHOUT KNOWING WHICH ONE IS WHICH; then grade them on "accuracy" still without knowing which one is which. Only after you've written the grades down, you should check whether they correspond better than chance would allow.
A simple exercise to see whether further theoretical research is justified might be to have a friend print out the horoscopes for all the Zodiac signs or whatever, remove identifying characteristics from each one, and have you rank all of them every day for a month in terms of how accurate they are. Then see whether the horoscope accuracy correlates better with the ones for your sign than the ones for other signs.
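The chance baseline for that exercise can be sketched as a toy simulation (a minimal sketch with invented numbers, assuming the null hypothesis that horoscopes carry no sign-specific signal):

```python
import random

# Toy simulation of the blind-ranking protocol under the null hypothesis.
# With no real signal, your "accuracy" ranking of de-identified horoscopes
# is effectively a random permutation each day.
random.seed(0)

DAYS = 30
SIGNS = 12
my_sign = 0  # index of "your" sign among the de-identified horoscopes

top_ranked = 0
for _ in range(DAYS):
    # A friend hands you the 12 horoscopes with identifiers removed;
    # you rank them by perceived accuracy, here modeled as random.
    ranking = random.sample(range(SIGNS), SIGNS)
    if ranking[0] == my_sign:
        top_ranked += 1

# Under chance alone, your own sign tops the ranking ~1/12 of the time,
# so anything consistently better than that would be interesting.
print(f"My sign ranked most accurate on {top_ranked}/{DAYS} days "
      f"(chance expectation ~{DAYS / SIGNS:.1f})")
```

The point of blinding is exactly that your real-world hit rate should be compared against this chance baseline, not against your memory of impressive hits.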
More doable than my idea. Upvoted.
How'd it go?
EDIT: My bad, I thought this was posted on 22 January 2013, not 22 January 2012. I'll leave this up just in case though.
“I don't believe in UFOs. I don't believe in astrology. I don't believe in homeopathy. I don't believe in creationism. I don't believe there were explosives planted in the World Trade Center. I don't believe in haunted houses. I don't believe in perpetual motion machines. I believe that all these beliefs are not only wrong but visibly insane.” (Emphasis added by me)
I am assuming that you are defining what is insane as what is irrational, because being irrational makes it impossible to achieve goals or reach desired outcomes. If this is not the case please correct me, but for now I will continue under this assumption. I agree with you that people who carry any of the above listed beliefs as ideology are insane. However, I would add to that list “I do not believe humans are innately rational in the traditional sense,” as well as “I do not believe that tribes are bonded through traditional rationality,” as beliefs not only wrong but visibly insane. What does this mean? First it must be addressed what I mean by “traditionally rational.” Tradition deals with the socio-historic practices that constitute a specific tribe’s solidarity. As a part of this solidarity, every tribe contains both a method and a methodology for the production of “Truth.” Truth is a desired state of knowledge, a way of filtering information. What I call traditional rationality more specifically refers to the conception of Truth that has shaped the production of Western knowledge. Since the normative sciences have emerged out of Western culture, they too exist as products of traditional rationality.
What is the Traditional-rationality of the Western Tribe? That is a very complicated question, because what represents truth to our tribe has evolved. I will give a brief history of this evolution, but if you would like a more detailed discussion of it I can point you to some good academic work on the subject. The summary of said evolution is as follows:
1.) PRE-GREEK ENLIGHTENMENT: Truth is filtered through Mythos. (What is true is represented through Folklore or Myth, the legitimized intelligentsia being storytellers.)
2.) POST-GREEK ENLIGHTENMENT: Conveyance of information is redefined, and myth and rhetoric are excluded from what is considered legitimized Truth. Post-Aristotle, legitimized Truth emerges as what is related to Logic, Logic here referring to deductive reasoning.
3.) SCIENTIFIC REVOLUTION: Francis Bacon, Galileo, Robert Boyle, along with various other thinkers of their time, attack pure deductive reasoning as mind games that ultimately lead to no new production of knowledge. Logic is redefined as primarily inductive, meaning the driving filter for whether or not information is legitimized as truth is empirical evidence. Which brings us to today. What most normatively defines legitimized knowledge in present society is without a doubt scientific knowledge, because the scientific method is the most powerful method of empirical investigation. This standard of knowledge is what I call traditional rationality.
Now returning to my original claims
- I do not believe that tribes are bonded through traditional rationality.
- I do not believe humans are innately rational in the traditional sense.
Traditional rationality is not innate. The reason there are experts on a matter whom we are taught to trust is that without extensive education the average person is not scientific, even if they believe in science. Without extensive specialization in some sub-field, the average person has no way of containing enough empirical knowledge to validate them as a source of truth. What this means is that the traditional Western sense of logic/rationality is a product of privilege. Yet our culture, for political reasons, has exalted traditional rationality as the cornerstone of human development. Such thinking is counterintuitive if you accept the fact that humans are irrevocably social creatures. If humans demand sociality to exist and thrive, then the cornerstone of human development must be linked to sociality.

Now here is where it gets complicated, because technically traditional Western rationality is a type of social bond. In attempting to rationalize and modernize the world, what the West is essentially doing is attempting to extend its tribe. However, the extension of a particular tribe cannot be confused with the axiom of tribal existence in general. What creates the tribe is not traditional rationality (Western logic), but social harmony. And what creates social harmony? The current management of Western society is attempting to make traditional rationality the bond shared not only by us (the Western world) but by the global world as well. The problem with this is that people are not uniformly outfitted to be efficiently rational, just as not everyone is uniformly outfitted to play baseball efficiently. Mathematical logic is just one of multiple human intelligences. Most people probably have some capacity to use every type of human intelligence, but being proficient in traditional rationality is analogous to being proficient enough in kinesthetic logic to be a professional athlete.
Only a very small portion of the human population is able to be a professional athlete. A larger population is proficient enough to understand the world of a professional athlete and navigate themselves through it. But there is an even larger population of people who are average or below average in kinesthetic knowledge and cannot comprehend that world accurately. I argue the same holds true for traditional Western rationality. This type of logical power dominates our society. Everyone wants to be a lawyer, a doctor, a scientist, etc. But the harsh reality is that millions of people who are taught to want this lack the capacity to truly be it. There is a sizable population that is proficient enough to “fake it,” but there is a larger population that is unable to even pretend they belong in that world. It is damaging to the tribe if prestige is placed solely on one type of organ within the tribal organism. Society needs rationality, but it also needs other types of reasoning that seem irrational to the logician. People are not completely irrational, but the conception of the rational man is fallacious. Increasingly, social psychology and behavioral psychology are proving that people do not behave rationally in the way economic models assume, a fact that sociology and anthropology have long advocated. Now back to what you said.
You say “I don't believe in UFOs. I don't believe in astrology. I don't believe in homeopathy. I don't believe in creationism. I don't believe there were explosives planted in the World Trade Center. I don't believe in haunted houses. I don't believe in perpetual motion machines. I believe that all these beliefs are not only wrong but visibly insane.” (Emphasis added by me) It is irrational to assume that the functioning of a tribe has ever been rational. So while those statements you list do not match up with what we know about empirical reality, they do serve as social bonds. Is it not rational to say that what best promotes human survival is human solidarity? Therefore making irrational social bonds rational.
Now do not get me wrong. The idea is to be moving towards a more rational society. Because even if a tribe is stable in its solidarity if its bonds are created through damaging practices (such as blood rituals) ultimately such practices will lead to its destruction. However, it must be accepted that the rational state of the human animal is a semi-irrational one. We are value-reason based creatures, not solely reason based, or solely value based. The ideal society is one that is irrational in a rational way. Meaning that its irrationality benefits continued survival and harmony. In my opinion, your quest to completely rationalize the human animal is irrational and potentially dangerous (also potentially marginally beneficial, but the danger potential seems greater).
Hope you enjoy thinking about this, and that I do not come off as too aggressive ^_^. The last thing I will say is this: one of the more interesting things Freud argued was that the opposite of psychosis was not rationality but culture. Psychosis defines a belief system held by an individual or a marginal population, whereas culture defines a belief system held by the majority. The difference between psychosis and culture is not a degree of truth, but a degree of quantity.
I believe that all these beliefs are not only wrong but visibly insane.
How do you convince someone of the superiority of your insanity? This is what I am currently working on. I have enjoyed reading some of your essays, I like several of your ideas. Have fun!
-Tom Mitchell
Summary: People's beliefs are very strongly influenced by their culture. We can't cure that by encouraging contrarianism, because most people aren't suited for that. We should work more on group rationality instead.
I agree with you. But don't you think that experts are the minority of any tribe? Perhaps on this blog it is experts who are the majority, but I believed the writer and the blog to be trying to improve our society, our tribe. In that sense, I see group rationality as contrarianism, because it is advocating for an incredibly specialized set of skills held by a minority group to become the basis of society. I am accepting the fact that the majority is irrational in the traditional sense, and thus trying to think of a way to further progress our tribe given that fact. Whereas, by trying to progress a tribe/society through democratizing group rationality, you are attempting something that is radically opposed to the majority.
To clarify: I'm not trying to make a point, just to rephrase yours.
You are trying to say that we should not try to teach everyone to be an expert individual rationalist. But are you trying to say that we should teach everyone to be an expert group rationalist as long as they're in a group of people with the same teaching (an extended wisdom of crowds, embedded in culture)? Or that we should develop an elite of specialized individual rationalists and have everyone blindly follow them like they blindly follow the instructions of car mechanics? Or something else?
Ah, my bad. I am somewhat embarrassed and ashamed of the fact that the characterization I had prescribed to members of this website was so strong that it led me to vilify your response into an attack. I really apologize.
Yup, your initial post is a summary of my point.
As to your follow up questions:
I am not sure if by " a group of expert group rationalists" you mean
Or a group that promotes everyone to actualize themselves in their own group expertise (type of multiple intelligence), still aiming for an expert group, but one of diverse capacities.
Actually, now that I think about it, the answer is the same for both cases. I do not think that this is possible. In my opinion any type of expert is a minority demographic of the larger population.
I am supporting the latter idea of having an elite that guides the masses, despite the huge potential for damage/corruption such an idea carries. My defense of such a totalitarian idea would be that humans cannot escape such a hierarchy. Even if we delude ourselves into thinking that we have removed a class elite from our current production of society, the truth is that we have merely chosen an elite that is a hidden class. What I mean by hidden class is that certain aspects of our current episteme hide the totalitarian aspects of our society. For example, I would say that deep-seated ideologies of individualism, democracy, Cartesian dualism, and universal human rights make most people hostile to the idea that cognition is as physical a capacity as running. And as with any physical capacity, there is both natural and nurtured disparity between individuals.
Before I continue I feel I need to further explain myself, because all the ideologies I have listed above are laden with such heavy positive connotations in our culture that I fear my words will be vilified if I do not partially explain them. With the idea of universal human rights, for example, you exclude the potential that there are fundamentally different types of people. Again, this idea seems evil and oppressive in light of the dominance of democratic equality in our culture, but I cannot help that. The number of people who can play professional basketball is not the majority. In the same sense, the number of people who can rationally think on a professional level is not the majority.
I don't think everyone is born with what we consider proficient rationality. Perhaps a majority could be taught to be rational, I am not denying this possibility, but I do not think that it is economically feasible. At least not in our current system of education. But then again, there are many essential facets of society that do not require world class logical skills. In my opinion the existing emphasis our power structure places on rationality has skewed the pretending-doing balance of a large demographic.
I would suggest having an elite of empathetic rationalists who guide the masses to more humane and potentially more rational living.
I'd doubt the feasibility without brain modification tech.
Surely education is "brain modification tech". You can upgrade your own software.
No.
A lot of educational tools are technology - I would personally say that all educational systems are forms of technology.
...and they definitely modify your brain. Not counting them? Consider reconsidering.
The link provided in the grandparent is important:
"Education" is "brain modification technology" in about the same way the invention of LSD was a singularity.
It was a long time ago so my memory is hazy... was that post actually written as a direct response to you back in the day or was the "corporations are super-intelligent" guy someone else?
There is very little to rationality. All it takes is to be committed to take consequent actions that are implied by two basic questions:
If you ask those questions, everything else will follow naturally. The very first implication is to ask,
Rationality, in its broadest sense, is a collection of heuristics that help you to answer those questions. In that respect rational decision making is already implied by our preference for world states that satisfy our utility-function.
This means that brain modifications, if necessary, are not a precondition but a possible consequence of rationality.
I think that most healthy humans could be taught to ask those questions and pursue follow-up actions. The problem is the circumstances in which they reside.
I am of the same opinion of you. I chose rhetorically to emit this argument because it is more radical and I was not sure exactly of my bearings on the open sea of values. But seeing that you are of the same type as me, I would agree with you. I do not think it is feasible in any sense of the word as of now.
Not a native speaker I am guessing? Where "same opinions as you" expresses agreement "same opinion of you" has more potential as a retort. "Emit" gives approximately the opposite meaning to what I assume you intended, given that you did not release, give off, send out or express the radical opinion - you omitted it. (I assume English is a second language since your thoughts seem far more advanced than your expression thereof.)
No actually english is my first language. Though I have spent the past 6 years deeply immersed in the study of Chinese linguistics and scholarship. So I apologize in advance for comma splicing or other somewhat awkward rhetorical strategies that I may use. I try to think in chinese as much as I can and I guess it messes me up at points.
That said, I do not see a causal correlation between the quality of my ideas and my mastery of the english language. I know very well that on a scale of 1-10 it would be generous to call my writing a 7. But I do not think that defines the nature of my thoughts, especially since you do not know what stage of the writing process my responses are in. I will go ahead and tell you anything I write on this site is done in a single draft. I am writing not to meet the rhetorical standards of whatever game you are playing. I am writing because of the potential to see what emerges from me when I mix with interesting materials such as Mr./Mrs MixedNuts. Personally I do not see the point in attacking rhetoric, especially if the idea is conveyed. It seems as insecure as my own initial vilifying of Mr./Mrs. MixedNuts. In fact it fulfills the stereotype I was expecting to meet in posting on this website! However, if I can have my insecurities, then I cannot hold your insecurities against you. So I forgive you, and I hope we can keep talking.
Sometimes things are in flight and the observers can't identify them. What we don't believe in is paranormal or space alien explanations for UFOs.
I've seen undiscriminating skepticism applied to doubting the reports of slightly weird things in the sky.
Note: this post should be not only included in, but at the top of, lists like this. This is one of the most important posts on the site.
Following a suggestion from Cayenne:
Eliezer, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it? Specifically I don't understand why this belief is visibly insane.
I cannot answer for Eliezer, but I can (perhaps) explain why the belief is "visibly insane".
For 3 to be true, too many things have to be true. For the non-conspiracy explanation, all that's needed is the (perhaps slightly surprising) fact that the fire caused a specific kind of collapse. Most "truthers" know about as much about physics as me (high-school mechanics, some basics in college). So for a given truther to believe that, the truther needs to assume a high degree of certainty for his or her intuitive-physics estimation in the fairly subtle area of civil engineering. In fact, they'd have to have a degree of certainty so high that all the elements in 3 are not enough to sway them the other way. That degree of certainty should be reserved for actual trained civil engineers, and perhaps not even then...
Awesome. =]
If I say, "This isn't about a test of rationality itself, but a test for true free-thinking. All good rationalists must be free-thinkers, but not all free-thinkers are necessarily good rationalists," is that a good summary?
Here's a discussion of this post at the James Randi forums. Reaction seems net negative with high variance: http://forums.randi.org/showthread.php?p=5726673
Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?
If you disagree with your tribe, you get rationality points for independent thinking; but you lose rationality points for failing to update. Is the total positive or negative?
"And of course I could easily go on to name some beliefs that others think are wrong and that I think are right, or vice versa, but would inevitably lose some of my audience at each step along the way - just as, a couple of decades ago, I would have lost a lot of my audience by saying that religion was unworthy of serious debate. "
So are you admitting to just going for "cheap credit"? In your post you encourage people to stick their intellectual necks out, but seem reluctant to do so yourself.
No; he's trying to stick to the point. He's stuck his neck out in other posts.
OK, I may have misunderstood his meaning. I thought he was saying that there were things he would never mention, as it would alienate people, as opposed to just not mentioning it in this post.
I've been following Alicorn's sequence on luminosity, that is, on getting to know ourselves better. I had lowered my estimate of my own rationality when she mentioned that we tend to think too highly of ourselves, but now I can bump my estimate back up. There is at least one belief which my tribe elevates to the rank of scientific fact, yet which I think is probably wrong: I do not believe in the Big Bang.
Of course, I don't believe the universe was created a few thousand years ago either. I don't have any plausible alternative hypothesis; I just think that the arguments in the many popular-science physics books I have read are inconclusive.
First, these books usually justify the Big Bang theory as follows. Right now, it is an observable fact that stars are currently moving away from each other. Therefore, there was a time in the past where they were much closer. Therefore, there was a time where all the stars in the universe occupied the same point. It is this last "therefore" which I don't buy: there is no particular reason to assume that if the stars are moving away from each other right now, then they must always have done so. They could be expanding and contracting in a sort of sine wave, or something more complicated.
Second, the background radiation, which is said to be leftover stray photons from the Big Bang. If the background radiation had been a prediction of Big Bang theory, then I might have been convinced by this experimental evidence, but in fact the background radiation was discovered by accident. Only afterwards did the proponents of Big Bang theory retrofit it as a prediction of their model.
Third, the acceleration. The discovery that the expansion was accelerating was a surprise to the scientific community. In particular, it was not predicted by Big Bang theory, even though it seems like the kind of thing which an explanatory model of the expansion of the universe should have predicted right away.
Fourth, the inflation phase. This part was added later on, once it had been observed that Big Bang theory did not fit the observed homogeneity of the cosmos. To me, this seems like a desperate and ad hoc attempt to fix a broken theory.
Now, it could be that all these changes are a progression of refinements, just as Newtonian physics was adjusted to take into account the effects of relativity, and just as the spherical Earth was adjusted to make it an elliptical Earth. But the adjustments which Big Bang theory has undergone seem like they should change the predictions completely, rather than, as in the other cases, increasing the precision of the existing theory.
I am, of course, open to being convinced otherwise. If Big Bang theory really is true, then I wish to believe it is true.
The key is there at the end of your quote. From the first set of observations (of relatively close galaxies), the simplest behavior that explained the observations was that everything was flying apart fast enough to overcome gravity. This predicted that when they had the technology to look at more distant galaxies, these too should be flying away from us, and at certain rates depending on their distance.
When we actually could observe those more distant galaxies, we did in fact see them red-shifted as predicted. This alone should be enough to put the "sine wave" theory in the epistemic category of "because the Dark Lords of the Matrix like red shifts", because the light left these galaxies at all different times! It would take a vast conspiracy for them all to line up as red-shifted right now, from our perspective.
With strong evidence in hand that the galaxies had been flying apart for billions and billions of years, the scientists then noticed an irregularity: the velocities of those distant galaxies were different from the extrapolation made on the early data! However, they differed in a patterned way, and the simplest way to account for this discrepancy was a variant of Einstein's "cosmological constant" idea.
Additional support for the Big Bang:
Stephen Hawking calculated that there would have been no way for matter to fly towards a point, "miss" colliding with itself, and fly apart in an apparent expansion without a singularity and Big Bang. (This is somewhere in A Brief History of Time, but Google Books won't let me find it.)
We can roughly estimate our galaxy's age by other means (e.g., how much hydrogen has been used up in stars, how much is left). Have you looked into this, to see whether the estimates thus derived are consistent with the estimate of about 10 billion years that the Big Bang theory implies?
Finally, the cosmic background radiation gives us way more than one bit of data; its spectrum is precisely the black-body radiation one expects from a Big Bang.
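To make the black-body point above concrete: Wien's displacement law says a black body's peak emission wavelength scales inversely with its temperature. A minimal sketch (the constants are standard reference values, and the check itself is my illustration, not from the comment) confirms that a ~2.7 K black body peaks in the millimetre/microwave range, exactly where the CMB is observed:

```python
# Wien's displacement law: peak wavelength = b / T.
# Illustrative check that a ~2.7 K black body peaks in the microwave,
# while a ~5800 K one (the Sun) peaks in the visible.

WIEN_B = 2.897771955e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength(temp_kelvin):
    """Wavelength (metres) at which a black body at this temperature peaks."""
    return WIEN_B / temp_kelvin

cmb = peak_wavelength(2.725)   # observed CMB temperature
sun = peak_wavelength(5772.0)  # solar surface temperature, for contrast

print(f"CMB peak: {cmb * 1e3:.2f} mm (microwave)")
print(f"Sun peak: {sun * 1e9:.0f} nm (visible)")
```

The striking thing is not the peak alone but that the whole measured CMB spectrum fits the Planck black-body curve to extraordinary precision.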
ETA: Also, this seems like exactly the sort of issue where the "physicist-test" applies, as described above. For example, being critical of QM on common-sense grounds (of course the electron has to go through one slit or the other!) doesn't make for discriminating skepticism, since one should assign high probability to physicists having strong evidence to this effect if they're claiming something weird, or else one should have strong evidence that common sense usually beats the consensus of the physics community. Needless to say, I wouldn't hold my breath on the second claim.
You win. I did not realize that we knew that galaxies have been flying apart for billions and billions of years, as opposed to just right now. If something has been going on for so long, I agree that the simplest explanation is that it has always been going on, and this is precisely the conclusion which I thought popular science books took for granted.
Your other arguments only hammer the nail deeper, of course. But I notice that they have a much smaller impact on my unofficial beliefs, even though they should have a bigger impact. I mean, the fact that the expansion has been going on for at least a billion years is weaker evidence for the Big Bang than the fact that the theory predicts the cosmic background radiation and the age of the universe.
I take this as an opportunity to improve the art of rationality, by suggesting that in the case where an unofficial belief contradicts an official belief, one should attempt to find what originally caused the unofficial belief to settle in. If this original internal argument can be shown to be bogus, the mind should be less reluctant to give up and align with the official belief.
Of course, I'm forced to generalize from the sole example I've noticed so far, so for the time being, please take this suggestion with a grain of salt.
I prefer the meme where you've just won by learning something new; you now know more than most people about the justifications for Big Bang cosmology, in addition to (going meta) the sort of standards for evidence in physics, and (most meta and most importantly) how your own mind works when dealing with counterintuitive claims. I won too, because I had to look up (for the first time) some claims I'd taken for granted in order to respond adequately to your critique.
Good idea! It's especially helpful, I think, that you're writing out your reactions and your analysis of how it feels to update on new evidence. We haven't recorded nearly as much in-the-moment data as we ought on what it's like to change one's mind...
When two people argue, and they both realize who is actually right, without drama or flaring tempers, then everybody wins. Even people down the block who weren't participating at all, a bit; they don't know it yet, but their world has become slightly awesomer.
Not true; Alpher & Gamow predicted the radiation, although they were off by a few kelvins.
True, but this lacks parsimony, & the mechanism by which the "sine wave" (or whatever) could be produced is unknown. The universe is expanding now, implying some force behind the expansion. Gravity is attractive only. Celestial objects almost all have net electric charge as close to 0 as makes no odds, so they do not repel each other. The strong nuclear force is always attractive too. You see what I mean? What could possibly cause the outward oscillation, if not extreme density? It's not like when stars come close to each other they suddenly feel a repulsion.
I don't see how you can make sense of this without the Big Bang, except by positing unknown physical forces or something.
Very interesting post though. You seem curious; I'd recommend Jonathan Allday's book "Quarks, Leptons & the Big Bang" on this subject. It's reasonably technical, given that it's not a textbook.
Thanks! I had only heard about the accidental discovery by two Bell employees of an excess measurement which they could not explain, but now that you mention that it was in fact predicted, it's totally reasonable that the Bell employees simply did not know about the scientific prediction at the moment of their measurement. I should have read Wikipedia.
The probability of predicting something as strange as the background radiation given that the theory on which the prediction is based is fundamentally flawed seems rather low. Accordingly, I should update my belief in the Big Bang substantially. But actually updating on evidence is hard, so I don't feel convinced yet, even though I know I should. For this reason, I will read the book you recommended, in the hope that its contents will manage to shift my unofficial beliefs too. Thanks again!
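The reasoning in this comment — a successful surprising prediction is unlikely if the theory is fundamentally flawed, so confidence should rise substantially — is an ordinary Bayesian update. A minimal sketch with purely illustrative numbers (the probabilities are my assumptions, not anything from the thread):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    numer = p_e_given_h * prior
    return numer / (numer + p_e_given_not_h * (1 - prior))

# H = "the theory is basically right"
# E = "its strange prediction (a ~3 K background) is confirmed"
# Numbers below are invented for illustration only.
prior = 0.5
posterior = bayes_update(prior, p_e_given_h=0.8, p_e_given_not_h=0.05)
print(f"{posterior:.3f}")
```

Even with an even prior, a prediction that is sixteen times likelier under the theory than under its negation pushes the posterior above 0.9, which matches the commenter's sense that the update "should" be substantial even if it doesn't yet feel convincing.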
I don't think we can reasonably elevate our estimate of our own rationality by observing that we disagree with the consensus of a respected community.
I am wary of this kind of argument. I should not be able to discredit a theory by collecting all possible evidence and publishing it before the theory's proponents have a chance to think things through.
But isn't Eliezer suggesting, in this very post, that we should use uncommon justified beliefs as an indicator that people are actually thinking for themselves as opposed to copying the beliefs of the community? I would assume that the standards we use to judge others should also apply when judging ourselves.
On the other hand, what you're saying sounds reasonable too. After all, crackpots also disagree with the consensus of a respected community.
The point is that there could be many reasons why a person would disagree with a respected community, one of which is that the person is actually being rational and that the community is wrong. Or, as seems to be the case here, that the person is actually being rational but hasn't yet encountered all the evidence which the community has. In any case, given the fact that I'm here, following a website dedicated to the art of rationality, I think that in this case rationality is quite a likely cause for my disagreement.
I agree that if a piece of evidence is published before it is predicted, this is not evidence against the theory, but it does weaken the prediction considerably. Therefore, please don't publish this entire collection of all possible evidence, as it will make it much harder afterwards to distinguish between theories!
"But isn't Eliezer suggesting, in this very post, that we should use uncommon justified beliefs as an indicator that people are actually thinking for themselves as opposed to copying the beliefs of the community? I would assume that the standards we use to judge others should also apply when judging ourselves.
On the other hand, what you're saying sounds reasonable too. After all, crackpots also disagree with the consensus of a respected community."
Eliezer didn't say that we should use "disagreeing with the consensus of a respected community" as an indicator of rationality. He said that we should use disagreeing with the consensus of one's own community as an indicator of rationality.
Post alternative tl;dr's here.
People tend to believe things that are popular to believe. If you want to find out whether someone's actually smart, rather than if they're just going along with the crowd, then look at their unpopular beliefs. If they believe unpopular true things, they're probably actually smart.
As per http://news.ycombinator.com/item?id=1193450, this could use a catchier title.
Post alternative title suggestions here.
I'm pretty sure the commenter there is referring to the title with which it was posted on Hacker News ("First write the crushing counterargument, then conclude with mockery."), not the title here.
Right, but I predict that if submitted to Hacker News at the same time of day with the actual title and no one commenting here that it was resubmitted, then it will receive even fewer upvotes. "Undiscriminating Skepticism" isn't as catchy of a title as Less Wrong's last general audience article and Hacker News hit, "What is Bayesianism?"
"Groupdoubt", but that's fairly horrible.
Another test:
Could smoking during pregnancy have a benefit? Could drinking during pregnancy have a benefit? It's not necessary that someone know what the benefit could be, just that they acknowledge that nicotine and alcohol are drugs that have complex effects on the body.
As for smoking, it's definitely a bad idea, but it reduces the chances of pre-eclampsia. I don't know of any benefit for alcohol.
I'll reply two years later: Light drinking during pregnancy is associated with children with fewer behavioral and cognitive problems. This is probably a result of the correlation between moderate alcohol consumption and iq and education, but it's interesting nonetheless.
OK, now here's one that might be interesting. Is there a gap, or is the date a lie?
This is quite an old "thesis" by Illig originally stemming from a very simple arithmetic misunderstanding. (No: Pope Gregory aligned his calendar to match eastern date as at the time of the Council of Nicaea in 325, not with the original beginning of the Julian calendar)
There is no need for radiocarbon dating to refute it, since a lot of evidence could easily pinpoint it as a crackpot theory, especially:
Making up an additional 200 years of Roman imperial history, in a way that duped generations of later historians, sounds to me prima facie very unlikely.
Is there ice core data to cover the gap?
EDIT: radiometric dating would present another big problem for this thesis. Still, it's very unfortunate that the dendrochronology data isn't public.
Ice core data might provide an interesting test. Smash the dendro temperature values (these are probably available even if the measurements aren't) into shards, and then use techniques of the "shotgun sequencing" variety to reassemble it against the continuous ice core data template. See how it falls into place when there is no human dictating the dates.
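The "shotgun"-style reassembly proposed above can be sketched in a few lines: slide each shard of the (here invented) ring-width or temperature series along the continuous reference record and keep the offset where it fits best. This is my minimal illustration of the idea, not any actual dendrochronology method or data:

```python
# Slide a short shard along a longer reference series and pick the
# offset with the least squared error. All values are made up.

def best_offset(reference, shard):
    """Offset into `reference` where `shard` fits with least squared error."""
    best, best_err = None, float("inf")
    for i in range(len(reference) - len(shard) + 1):
        err = sum((reference[i + j] - shard[j]) ** 2 for j in range(len(shard)))
        if err < best_err:
            best, best_err = i, err
    return best

reference = [1.2, 0.8, 1.5, 0.9, 1.1, 1.7, 0.6, 1.0, 1.4, 0.7]
shard = [1.7, 0.6, 1.0]  # a fragment cut from positions 5..7
print(best_offset(reference, shard))  # recovers offset 5
```

Real sequence-matching uses normalized cross-correlation and significance thresholds rather than raw squared error, but the point stands: if the shards lock into the ice-core template at self-consistent dates with no human dictating them, that is evidence against an invented gap.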
The data proposed to support the gap is awfully weak - and I think that is the correct response for an educated layperson.
My first reaction is: I think the simplest hypothesis is a continuous tree ring record. It's continuous everywhere else. A sudden gap needs more than a just-so-story about Romans to justify it.
Also:
That sounds very much like fitting the evidence to the hypothesis.
I'll grant you that the idea might be worth testing - for example, by radiocarbon dating calibrated on other dendro data - but I don't think it has been shown convincingly enough to outweigh the historical accounts.
Did anyone read this post and worry whether they're one of the poseurs and not one of the true-blooded rationalists?
I could believe I'm a poseur with respect to this group, i.e. adopting the opinions of the average Less Wrong reader without doing much thinking myself. But this might be rational in the case of issues where the average Less Wrong reader has done more thinking than me, right?
Maybe we should have a thread where we all do this? Heh, what a cult initiation ceremony that would be: loudly proclaim to the cult what they're wrong about.
Of course. If you know others who share your belief, that's a cause for worry, and if you know no-one who does, that's also a cause for worry.
Doesn't that violate conservation of expected evidence? Or are you saying that this article was a cause for worry?
I'm having a bit of a hard time reconstructing my meaning from two years ago, I'm afraid! Clearly it does violate conservation of expected evidence, so I can only think that it's offered as a way to combat overconfidence bias rather than actually meant as a way that an ideal reasoner would update on the evidence. Or I'm just trying too hard to sound clever...
OK. So I can only stop worrying if exactly 1 person shares my belief? :-P
You can stop worrying after your brain's been properly frozen. The question is what to worry about.
Proposed litmus test: infanticide.
General cultural norms label this practice as horrific, and most people's gut reactions concur. But a good chunk of rationality is separating emotions from logic. Once you've used atheism to eliminate a soul, and humans are "just" meat machines, and abortion is an ok if perhaps regrettable practice ... well, scientifically, there just isn't all that much difference between a fetus a couple months before birth, and an infant a couple of months after.
This doesn't argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens). Don't unnecessarily cause them to suffer, but on the other hand you can choose to euthanize your own, if you wish, with no criminal consequences.
Get one of your friends who claims to be a rationalist. See if they can argue passionately in favor of infanticide.
You haven't taken account of discounted future value. A child is worth more than a chimpanzee of equal intelligence because a child can become an adult human. I agree that a newborn baby is not substantially more valuable than a close-to-term one and that there is no strong reason for caring about a euthanised baby over one that is never born, but I'm not convinced that assigning much lower value to young children is a net benefit for a society not composed of rationalists (which is not to say that it is not an net benefit, merely that I don't properly understand where people's actions and professed beliefs come from in this area and don't feel confident in my guesses about what would happen if they wised up on this issue alone).
The proper question to ask is "If these resources are not spent on this child, what will they be spent on instead and what are the expected values deriving from each option?" Thus contraception has been a huge benefit to society: it costs lots and lots of lives that never happen, but it's hugely boosted the quality of the lives that do.
I do agree that willingness to consider infanticide and debate precisely how much babies and foetuses are worth is a strong indicator of rationality.
Are you allowed to use moral questions as litmus tests for rationality? Paper clippers are rational too.
It isn't inconceivable that a human might just value babies intrinsically (rather than because they possess an amount of intellect, emotion, and growth potential).
If anyone here has been reading this and trying to use more abstract values to justify why one should not harm babies, and is unable to come up with anything, and still feels a strong moral aversion to anyone harming babies anywhere ever, then maybe it means you just intrinsically value not harming babies? As in, you value babies for reasons that go beyond the baby's personhood or lack thereof?
(By the way, the abstract reason I managed to come up with was that current degree of personhood and future degree of personhood interact in additive ways. I'll react with appreciation to someone poking a hole in that, but I suspect I'll find another explanation rather than changing my mind. It's not that I necessarily value babies intrinsically - it's more that I don't fully understand my own preferences at an abstract level, but I do know that a moral system that allows gratuitous baby-killing must be one that does not match my preferences. So if you poke a hole in my abstract reasons, it merely means that my attempt to abstractly convey my preferences was wrong. It won't change the underlying preference.)
"But a good chunk of rationality is separating emotions from logic"
Even if I insert "epistemic", I find this only partially true.
Edit: Although, my preferences do agree with yours to the extent that harming a young child does seem worse than harming a baby (though both are terrible enough to be illegal and punishable crimes). So I might respect the idea of merciful killing (in times of famine, for example) at a young age to prevent future death-inducing-suffering.
That's an amusing example because infanticide was extremely common among human cultures, so all good cultural relativists should be fine with this practice.
Usually there was a strong distinction between actually killing a baby (an extremely wrong thing to do) and abandoning it to the elements (acceptable). I'm not talking about any exotic cultures; ancient Greece and Rome, and even large parts of Christian Medieval Europe, practiced infant abandonment. There are even examples of Greek and Roman writers noting how strange it is that Egyptians and Jews never kill their children - perfect stuff for any cultural relativist. It was only once people switched from abandoning infants to the elements to abandoning them at churches that it ceased being outright infanticide.
Anyway, pretty much the only reason babies are cute is as defense against abandonment. This shows it was never anything exceptional and was always a major evolutionary force. By some estimates up to 50% of all babies were killed or abandoned to certain death in Paleolithic societies (all such claims are highly speculative of course).
Infant abandonment is normal, and people should have the same right to abandon their babies as they always had. Especially since these days we just put them into orphanages. Choosing infanticide over abandonment is pretty pointless, so why do it?
A lot of sources can be easily found here: http://en.wikipedia.org/wiki/Infanticide
"Choosing infanticide over abandonment is pretty pointless, so why do it?"
How about infanticide as euthanasia ?
Killing another living thing doesn't qualify as "euthanasia" if you do it for your benefit, not that being's.
By abandoning an infant to an orphanage (it's not legal everywhere, but in a lot of countries it's perfectly legal and acceptable), you lose both your responsibility for and your control over the baby, so you no longer have any right to kill it.
And speaking of euthanasia, we really should seriously re-ban it. We pretty much know how to deal with even the most severe pain: very large doses of opiates to get rid of it, and large doses of stimulants like amphetamines to counter the side effects. The War on Drugs is the reason why we don't routinely do this for people in severe pain.
We don't have a magical cure for depression, but if someone is depressed, they cannot make rational decisions for themselves anyway, so they cannot decide to kill themselves legitimately.
Once you cover these cases, there are zero legitimate arguments left for euthanasia.
"Choosing infanticide over abandonment is pretty pointless, so why do it?" "Killing another living thing doesn't qualify as "euthanasia" if you do it for your benefit, not that being's."
I was once friends with a boy with progressive muscular dystrophy. It is a degenerative disease in which your muscles gradually stop working, and around the age of 20 most patients die because they stop breathing. If you have heard great stories about people in wheelchairs adapting to their situation, well, here adaptation can only be short-term, because next year you might not be able to do what you can do now. The pain was not excruciating, but there was some; a body deprived of exercise gives you this feedback. If he had a bad dream at night, he could not turn to his other side (a very usual remedy; most people do it without even realizing).
The boy made two suicide attempts, although, frankly, he did not really mean them. He would phone his friends in the evening to relieve his pain - very unwelcome calls. I sometimes pretended not to be at home, and I know other people who did the same (we were in our twenties). His desperation was deepened by feeling he was not loved. Once he called his psychologist and caught her in the middle of a suicide attempt, poisoned by drugs - she repeated to him his own statements from their previous phone calls. I am not saying it was his fault; the lady clearly failed to safeguard against the known risks of her profession (plus she had other problems, a departed partner, etc.). I am just illustrating how hard it sometimes was to deal with him. (He called other people, who saved her life, to close up this branch of the story.)
His parents took great care of him, up to the level of their financial abilities, plus the limited help of our government. There were frequent conflicts between him and his parents, though, which again made him feel unloved. On the other hand, his parents were deeply religious and, knowingly, later had another baby with the same genetic defect; they did not choose abortion. The older boy died at the age of 28, his life being surprisingly long.
This story clearly contains aspects which were not optimized: the parents could have earned more money and brought more comforts to his life, he could have gotten a personal assistant at night, more physiotherapy exercises, a better computer, some lectures on how to deal with people and get a girlfriend (his desires were strong), and he could have tried harder to develop his talents and get a job, which would have made him feel useful to society. (We persuaded him to get a job eventually - phone operator - which lasted a year or so.) His friends, including me, could have worked harder on their emotional maturity. But can you see all the energy and resources it takes to make a misery somewhat better?
Now let us look at a different story, where the parents of a sick child became EXTREME optimizers. Watch the film Lorenzo's Oil (http://en.wikipedia.org/wiki/Lorenzo%27s_Oil_%28film%29) or read about Lorenzo Odone (http://en.wikipedia.org/wiki/Lorenzo_Odone). A wonderful and admirable story. But can you see the end result, after you do all that is in your power for your baby?
"Choosing infanticide over abandonment is pretty pointless, so why do it?" Abandoning a baby with a severe genetic defect at birth condemns the baby to an even lower quality of life in most government institutions, unless a millionaire chooses to adopt him.
I have a counterargument to my own reasoning right away: what if some parents had killed their baby diagnosed with adrenoleukodystrophy (but with no developed symptoms yet) a year before Augusto and Michaela Odone invented Lorenzo's Oil for their son? Such parents would have lost a potentially healthy baby, and the baby would have lost a realistic chance to live a normal life...
I am not really trying to win this argument, just explaining why I sometimes TOY with the idea of infanticide being not so immoral, and consider it a form of euthanasia.
There are plenty of diseases we can now deal with quite well because we didn't kill or commit infanticide on everyone who had them. It is no coincidence that treatments get found: if we killed everyone with a disease, there would be no search for a treatment.
Note that you're arguing that your preferred policy can never have true drawbacks, rather than arguing that it's worth it on balance. Be careful.
Policy of not mass murdering people is as close to drawback-free as it gets.
I'm sure you can figure out some trivial drawbacks if you want.
Doesn't appreciably constrain your behavior, though, unless you happen to be the star of a popular Showtime series or something. Declaring a policy is only meaningful if it actually affects your choices, which in this case only makes sense if you expect to be considering mass murder as a solution to your problems.
And in a situation as extreme as that, I wouldn't be surprised if some otherwise unthinkable subjective downsides came up.
Is this one of those "torture one person for 50 years" versus "deaths of millions" thought experiments?
Easiest thought experiments ever?
Would you rather be tortured for 3^^^3 years, or have a dust speck in your eye?
That's not that easy, unless having a dust speck in my eye also entails my living for 3^^^3 years.
This seems like a good "control" thought experiment to determine whether people are just being contrarian.
If I use UDT2 can I choose 'both'?
Wow. You just decreed it impossible for euthanasia to be done professionally.
I think if someone's paying you to perform a service for them, that counts as doing it for their benefit. You're benefiting from the money, not the act itself.
I'd be incredibly surprised if this actually worked clinically.
Start here, and follow the links.
That doesn't answer my question. I'm not interested in the ethical, legal, and societal barriers to adequate pain management, which is what your link covers as far as I can tell.
I want to know how one intends to circumvent opiate tolerance, and whether or not large doses of stimulants really do counteract the side effects of large doses of opiates in a large enough class of people to be effective, without the side effects of these stimulants becoming undesirable.
Assembling a drug cocktail in order to achieve some central result while minimizing side effects, with ongoing adjustment as the severity of the underlying condition and the patient's sensitivity to the drugs in question both change, is one of those complicated problems which modern medicine is nonetheless capable of solving, given adequate resources.
Suppose I say now, in my non-depressed state, that if I were ever to become so depressed that I wanted to die, I'd prefer that this want be fulfilled.
We cannot allow this any more than we can allow people to sell themselves into slavery as a loan guarantee.
Which doesn't preclude allowing both. I can see benefits of allowing the latter. Or, more to the point, I can see situations where forbidding the latter is morally abhorrent. Specifically, when there is not a safety net in place that prevents people starving or otherwise suffering for the lack of finances that they should be able to acquire.
Sure, I can see how if you didn't like the latter then you'd dislike the former.
Yes, I should also be allowed to kill adults. Especially if they have it coming. After all, the infant still has a chance to grow up to make a worthwhile contribution while there are many adults that are clearly a waste of good oxygen or worse!
Real world test of human value along similar lines: Ashley X.
I'd say the primary value of an infant is the future value of an adult human minus the conversion cost. Adult humans can be enormously valuable, but sometimes, the expected benefits just can't match the expected costs, in which case infanticide would be advisable.
However, both costs and benefits can vary by many orders of magnitude depending on context, and there's no reliable, generally-applicable method to predict either. No matter how bad it looks, someone else might have a more optimistic estimate, so it's worth checking the market (that is, considering adoption).
Is it acceptable to assume that the conversion cost up to a newborn is less than the cost of the rest of the way to an adult? (Think this through before reading on, to avoid biased thinking about the above. (This is called "Meditate", right?)) Given that, wouldn't a rich eccentric who commits to spending a pool of money either on paying people to roll boulders up and down a hill, or on raising the next child he makes you pregnant with, mean you would not be allowed to say no? (Edited for clarity)
It quite obviously is.
If you mean as an alternative to infanticide, definitely. What's your point?
What I meant to say is that this complete stranger wants to have a child with Strange7 (for this hypothetical Strange7 can get pregnant) and it would be as wrong/illegal for Strange7 to not do so as late abortion or infanticide would be. (Edited grandparent for clarity)
If this hypothetical rich person is able and willing to cover all the costs of me bearing a child and the child being raised, they can draft a contract and present it to me. What greater good would be served by making it illegal for me to refuse? Such a law would weaken my negotiating position, increasing the chances that the rich eccentric would be able to avoid internalizing some of the long-term costs and/or that I would be put in the position of having to give up some marginally more lucrative prospect in order to avoid the legal penalty.
I'd rather not try to derive the full ethical calculus of abusive relationships and rape from first principles, but I can point you at some people who've studied the field enough to come up with excellent working approximations for most real-world cases.
Infanticide and abortion are okay, as long as doing so increases paperclip production.
However, infanticide and abortion are obviously not alone in that respect.
How do you feel about the destruction of a partially bent piece of steel wire before it has been bent fully into paperclip shape?
Is that some kind of threat???
A key point is that they don't need to advocate the legalization of infanticide, they just need to be able to cogently address the arguments for and against it. Personally, I think that in the US at this time optimal law might restrict abortion significantly more than it currently does and also that in many past cultural contexts efforts to outlaw or seriously deter infanticide would have been harmful. Just disentangling morality from law competently gets a person props.
Despite some jokes I made earlier, things that could arguably depend on values don't make good litmus tests. Though I did at one point talk to someone who tried to convert me to vegetarianism by saying that if I was willing to eat pork, it ought to be okay to eat month-old infants too, since the pigs were much smarter. I'm pretty sure you can guess where that conversation went...
That guy clearly asked you those questions in the wrong order.
... is obviously going to activate biases leading to the defense of killing animals for food, whether by denying that the cases are equivalent or by claiming to accept killing children for food. Thus the chance of persuading someone that eating babies is morally acceptable depends on how strongly you argue the second point.
However...
... leads to the opposite bias, as if the listener cannot refute your second point they must convert to vegetarianism or visibly contradict themselves.
It isn't a question of current intelligence, it's a question of potential. Pigs will never grow beyond human-infant-level comprehension. Human babies will eventually become both sapient and sentient.
Saying a baby and a pig can be considered equally intelligent is like saying a midget and an 11-year-old of the same height are equally likely to become basketball players.
How about fertilized egg cells?
Caviar made from fertilized human egg cells, yum.
No, saying a baby and a pig can be considered equally intelligent is like saying a midget and an 11-year-old can be considered equally tall.
This is sounding like a cop-out...
Option zero: "There's an interesting story I once wrote..."
Option one: "Well then, I won't/don't eat pork. But that doesn't mean I won't eat any animals. I can be selective in which I eat."
Option two: "mmmmm... babies."
Option three: "Why can't I simply not want to eat babies? I can simply prefer to eat pigs and not babies"
Option four: "Seems like a convincing argument to me. Okay, vegetarian now." (after all, technically you said they tried, but you didn't say they failed. ;))
Option five: "actually, I already am one."
Am I missing any (somewhat) plausible branches it could have taken? More to the point, is one of the above the direction it actually went? :)
(My model of you, incidentally, suggests option three as your least likely response and option one as your most likely serious response.)
Option six: "I was a vegetarian, but I'm okay with eating babies, and if pigs are just as smart, it should be okay to eat them too, so you've convinced me to give up vegetarianism."
This reminds me of the elves in Dwarf Fortress. They eat people, but not animals.
I actually did a presentation arguing for the legality of eating babies in a Bioethics class.
And I don't eat pigs, on moral grounds.
Well, not quite option two, but yes, "You make a convincing case that it should be legal to eat month-old infants." One person's modus ponens is another's modus tollens...
I'm imagining this conversation while you're both holding menus...
In seriousness, there are good instrumental reasons not to allow people to eat month-old infants that are nothing to do with greatly valuing them in your terminal values.
Both menus being "vegetarian and non vegetarian" or "pork menu and baby menu"? :)
You started eating month-old infants?
Time of birth serves as a bright line.
Very much agreed. This is also why we place much more moral value in the life of a severely brain-damaged human than a more intelligent non-human primate.
Kudos to you for forthrightness. But em... no. Ok, first, it seems to me you've swept the ethics of infanticide under the rug of abortion, and left it there mostly unaddressed. Is an abortion an "ok if regrettable practice?" You've just assumed the answer is always yes, under any circumstances.
I personally say "definitely yes" before brain development (~12 weeks I think), "you need to talk to your doctor" between 12 and 24 weeks, and "not unless it's going to kill you" after 24 weeks (fully functioning brain). Anybody who knows more about development is welcome to contradict me, but those were the numbers I came up with a few years ago when I researched this.
If a baby/fetus has a mind, in my books it should be accorded rights - more and more so as it develops. I fail to see, moreover, where the dividing line ought to be in your view. Not to slippery-slope you but - why stop at infants?
*(Also note that this is a first-principles ethical argument which may have to be modified based on social expedience if it turns into policy. I don't want to encourage botched amateur abortions and cause extra harm. But those considerations are separate from the question of whether infants have worth in a moral sense.)
This gave me a nasty turn, because probably the most annoying idea religious people have is that if we're "just" chemicals, then nothing matters. One has to take pains to say that chemicals are just what we're made of. We have to be made out of something! :) And what we're made of has precisely zero moral significance (would we have more worth if we were made out of "spirit"?).
I mean, I could sit here all day and tell you about how you shouldn't read "Moby Dick," because it's just a bunch of meaningless pigment squiggles on compressed wood pulp. In a certain very trivial sense I am absolutely right - there is no "élan de Moby Dick" floating out in the aether somewhere independent of physical books. On the other hand I am totally missing the point.
The standard answer is that at that point there is no longer a conflict with the rights of the women whose body the infant was hooked into. We don't generally require that people give up their bodily autonomy to support the life of others.
In the first month of pregnancy, right, but in the seventh month you can Caesarean the baby out of the mother and put it into an incubator, can't you?
Not without some risk to both; the exact amounts depend on the situation.
(I'm assuming that by “some” you mean ‘larger than that of either abortion or natural childbirth’, otherwise it wouldn't be relevant. Right?)
Smaller would be relevant too, for the opposite reason.
We don't?
In what situation, exactly, do we fail to do this? I can't think of any other real-world situation. I can imagine counterfactual ones, sure, but I'm fairly certain most people see those as analogies for abortion and respond appropriately.
We don't, for instance, require people to donate redundant organs, nor even blood. Nor is organ donation mandatory even after death (perhaps it should be).
What are some cases where we do require people to give up their bodily autonomy?
Mandatory drug testing?
That's the big one I can think of, and this usually arises in a very different context where it's easy to dehumanize those forced to take such tests: alleged criminals and children.
(Even in these contexts, peeing in a cup or taking a breathalyzer is quite a bit less severe than enduring a forced pregnancy. Mandatory blood draws for DUIs do upset a significant number of people. How you feel about employment tests and sports doping might depend on how you feel about economic coercion and whether it's truly "mandatory".)
The complication here is that a responsible, consenting adult tacitly accepts giving up her bodily autonomy (or accepts a risk of doing so) when she has sex. That's precisely the same reason men are required to pay child support even if they didn't wish for a pregnancy. (Yes, I see the asymmetry; yes, it sucks).
Case-by-case reasoning is probably a good thing in these circumstances, but unless the mother was not informed (minor/mental illness) or did not consent, the only really tenable reason for a late-term abortion I can think of is health. In which case the relative weighing of rights is a tricky business, a buck I will pass to doctors, patients & hospital ethics boards.
The complication there is that on the standard view, one cannot give up one's bodily autonomy permanently. You cannot sell yourself into slavery. The pregnant person always has the right to opt-out of the contract.
Though the fetus would presumably be able to get damages. I guess those get paid to the next-of-kin.
Upvoted entirely for this line, which made me spit coffee when it finally registered.
This is already a significant retreat from your previously stated position. ("not unless it's going to kill you" after 24 weeks)
That's a hell of an assertion. I don't really see any reason to accept it as other than a normative statement of what you wish would happen.
As you say, there is an asymmetry. Garnishing a wage is a bit different, and seems appropriate to me.
Yes, it is, so long as it is reasoning rather than assertions that this case is different. We have to specify how it is different, and how those differences make a difference. The easiest way for me to do this is to use analogies. This is dangerous of course, as one must keep in mind that they can ignore relevant differences while emphasizing surface similarities.
So, in this case the relevant specialness you're calling out is that a risky activity was knowingly engaged in that created a person who needs life support for some time, as well as care and feeding far after that. So I'm going to try to set up an analogous situation, but without sex being the act (which I think is irrelevant) coming into the mix. This will also mean another difference: the person will not be "created" except metaphorically from a preëxisting person. I personally don't see how that would be relevant, but I suppose it is possible for others to disagree.
Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver's liver will save the pedestrian's life. Is the driver expected to donate their liver? Should it be required by law?
Note that the donor's death rate for this operation is under 1%. When we compare this to the statistics for maternal death, we see it is similar to the WHO's 2005 estimate of a world average of 900 per 100,000, though developed regions have it far lower at 9 per 100,000.
The driver could instead be made responsible for the victim's exact medical costs or some fraction thereof, in addition to any punitive or approximated damages. This would provide adequate incentive to seek out ways to reduce those costs, including but not limited to a voluntary donation on the part of the driver or someone who owes the driver a favor.
In the abortion example, the fetus 1) is created already attached and ending ongoing life support may not be the same as requiring that someone who is not providing it provide it, 2) needs life support for an extended period, and 3) can only use the life support of one person.
"Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver's liver will save the pedestrian's life. Is the driver expected to donate their liver? Should it be required by law?"
For organ transplants, the body biochemistries of the donor and recipient must be somewhat compatible, otherwise the transplanted organ is rejected by the recipient's immune system. The best transplantation results are between identical twins. For unrelated people, there are tests (and databases) to estimate the compatibility of organs. Conclusion: the driver is not generally expected to donate their liver, because in the majority of cases it would not help the victim.
Imagine an alternate universe where all human bodies are highly compatible for transplantation purposes. Yes, I believe it might become a social norm in this alternate universe, or even a law, that the driver must donate their liver to the victim.
Is it? I suppose it is. I contain multitudes. No, honestly, I just didn't name all my caveats in the previous post (my bad). Clearly there are two people's interests to take into consideration here. Also, as I noted, that was an ethical rather than legal argument. I don't have any strong opinions about what the law should do wrt this question.
I don't think it's unreasonable, although you're right it's not a fact statement. But I think it's a fairly well-established principle of ethics & jurisprudence that informed consent implies responsibility. Nobody has to have unprotected sex, so if you (a consenting adult) do so, any reasonably foreseeable consequences are on your shoulders.
It's a reasonably good analogy I guess. There are two separate questions here: what should the law do, and what should the driver do. I don't think anybody wants the law to require organ donations from people who behave irresponsibly. However, put in the driver's shoes, and assuming the collision was my fault, I would feel obligated to donate (if, in this worst-case scenario, I am the only one who can).
There is a slight disanalogy here though, which is that an abortion is an act, whereas a failure to donate is an omission. It's like the difference between throwing the fat guy on the tracks and just letting the train hit the fat guy.
I am very much in favour of this sort of policy; it would do no end of good.
The effect of pretending to have opt-out organ donation is small. Austria is unique in really having opt-out organ donation (everywhere else, next of kin decide in practice), so it's hard to judge the effect, but it's not an outlier. In the 90s, Spain became the high outlier and Italy ceased being the low outlier, so rapid change is possible without doing anything ethically sensitive. graph. More Kieran Healy links here.
An interesting article.
"Reform of the rules governing consent is often accompanied by an overhaul and improvement of the logistical system, and it is this—not the letter of the law—that makes a difference. Cadaveric organ procurement is an intense, time-sensitive and very fluid process that requires a great deal of co-ordination and management. Countries that invest in that layer of the system do better than others, regardless of the rules about presumed and informed consent."
In our country we have opt-out donation, but I believe relatives can veto it. I saw a physician on TV who said some scary things openly. Our doctors are routinely overworked and underpaid. Imagine a doctor who, towards the end of a long shift, sees a patient dying with some organs intact. If he reports the availability of the organs, he creates several hours of extra work for himself and others, paperwork included. There is little or no financial reward for reporting the organs (I don't remember exactly). They might feel heroic the first couple of times, but eventually, after working there long enough, they resign themselves and stop making these reports. I saw this on TV about three years ago; I don't know the current situation.
Sorry, you have a point that my test won't apply to every rationalist.
The contrast I meant was: if you look at the world population, and ask how many people believe in atheism, materialism, and that abortion is not morally wrong, you'll find a significant minority. (Perhaps you yourself are not in that group.)
But if you then try to add "believes that infanticide is not morally wrong", your subpopulation will drop to basically zero.
But, rationally, the gap between the first three beliefs, and the last one, is relatively small. Purely on the basis of rationality, you ought to expect a smaller dropoff than we in fact see. Hence, most people in the first group are avoiding the repugnant conclusion for non-rational reasons. (Or believing in the first three, for non-rational reasons.)
If you personally don't agree with the first three premises, then perhaps this test isn't accurate for you.
So your point is that anyone who feels there is a moral difference between infanticide and abortion is irrational?
Because most pro-lifers already say that, in my experience.
I'll be the first to disagree outright.
First, when a woman is pregnant but will be unable to raise her child, we do not force her to carry the pregnancy to term and give the baby up for adoption. This is because bringing a child to term is a painful, expensive, and dangerous nine-month ordeal which we do not think women should be forced into. In what possible circumstances is infanticide ethically permissible when the baby is born, the woman has already paid the cost of pregnancy and giving birth, and adoption is an option?
In general, I'm not sure it follows from the fact that persons aren't magic that persons are less valuable than we thought. Maybe babies are just glorified goldfish. Maybe they aren't valuable in the way we thought they were. But I haven't seen that evidence.
Due to a severe birth defect, the baby is profoundly mentally retarded, will suffer severe pain its entire life, and will most likely not live to see its fifth birthday.
Unfortunately, thus phrased it fails as a litmus test. For better discrimination, leave out the part about childhood death, then the pain. Then, if you're adventurous, the retardation.
Once you've left out the pain I no longer think killing the baby is ethically permissible. And I don't see how knowing that people don't have souls alters my position.
Most people's moral gut reactions say that humans are very important, and everything else much less so. This argument is easier to make "objective" if humans are the only things with everlasting souls.
Once you get rid of souls, making the argument that humans have some special moral place in the world becomes much more difficult. It's probably an argument that is beyond the reach of the average person. After all, in the space of "things that one can construct out of atoms", humans and goldfish are very, very close.
I like what Hook wrote. If I believed that babies were valuable because they have souls and then was told, "no they don't have souls", I might for a while value them less. But it has been a very long time since I believed in souls and the value I assign to babies is no longer related at all to my belief about souls (if it ever was).
Sure, they just don't resemble each other in many morally significant ways (the exception, perhaps, being some kind of experience of pain). There is no reason to think the facts that determine our ethical obligations make use of the same kinds of concepts and classifications we use to distinguish different configurations of atoms. Humans and wet ash are both mostly carbon and water, and so have a lot more in common than, say, the Sun. But wet ash and the sun and share more of the traits we're worried about when we're thinking about morality. The same goes for aesthetic value, if we need a non-ethics analogy.
I think "making the argument that humans have some special moral place in the world" in the absence of an eternal soul is very easy for someone intelligent enough to think about how close humans and goldfish are "in the space of 'things that one can construct out of atoms.'"
Would you please share? I would really, really like to know how the argument that "humans have some special moral place in the world" would work.
Show me someone who actually needs to be convinced. Just about everyone acts as if that is true. One could argue that they are just consequentialists trying to avoid the bad consequences of treating people as if they are not morally special. I'm not even sure that is the psychological reality for psychopaths though.
Also, a corollary of what Matt said, if humans aren't morally special, is anything?
The question might be less "do humans have some special moral place in the world" than "do human beings have some special moral place in the world". For example: are we privileging humans over cows to an excessive extent?
Leaving aside the physical complications of moving cows, I think most vegetarians would find the decision to push a cow onto the train tracks to save the lives of four people much easier to make than pushing a large man onto the tracks, implying that humans are more special than cows.
EDIT: The above scenario may not work out so well for Hindus and certain extreme animal rights activists. It may be better to think about pushing one cow to save four cows vs. one human to save four humans. It seems like the cow scenario should be much less of a moral quandary for everyone.
Humans are the only animals that seem to be capable of understanding the concept of morality or making moral judgements.
Morality is complicated and abstract. Maybe cetaceans, chimps, and/or parrots have some concept of morality which is simply beyond the scope of the simple-grammar, concrete-vocabulary interspecies languages so far developed.
My mother made this argument to me probably when I was in high school. Given my position as past infanticide candidate, it was an odd conversation. For the record, she was willing to go up to two or six years old, I think.
And let us not forget the Scrubs episode she also agreed with: "Having a baby is like getting a dog that slowly learns to talk."