by [anonymous]

Atheists trying to justify themselves often find themselves asked to replace religion.  “If there’s no God, what’s your system of morality?”  “How did the Universe begin?”  “How do you explain the existence of eyes?”  “How do you find meaning in life?”  And the poor atheist, after one question too many, is forced to say “I don’t know.”  After all, he’s not a philosopher, cosmologist, psychologist, and evolutionary biologist rolled into one.  And even they don’t have all the answers. 

But the atheist, if he retains his composure, can say, “I don’t know, but so what?  There’s still something that doesn’t make sense about what you learned in Sunday school.  There’s still something wrong with your religion.  The fact that I don’t know everything won’t make the problem go away.”

What I want to emphasize here, even though it may be elementary, is that it can be valuable and accurate to say something’s wrong even when you don’t have a full solution or a replacement.

Consider political radicals.  Marxists, libertarians, anarchists, greens, John Birchers.  Radicals are diverse in their political theories, but they have one critical commonality: they think something’s wrong with the status quo.  And that means, in practice, that different kinds of radicals sometimes sound similar, because they’re the ones who criticize the current practices of the current government and society.  And it’s in criticizing that radicals make the strongest arguments, I think.  They’re sketchy and vague in designing their utopias, but they have moral and evidentiary force when they say that something’s wrong with the criminal justice system, something’s wrong with the economy, something’s wrong with the legislative process. 

Moderates, who are invested in the status quo, tend to simply not notice problems, and to dismiss radicals for not having well-thought-out solutions.  But it’s better to know that a problem exists than to not know – regardless of whether you have a solution at the moment.

Most people, confronted with a problem they can’t solve, say “We just have to live with it,” and very rapidly gloss into “It’s not really a problem.”  Aging is often painful and debilitating and ends in death.  Almost everyone has decided it’s not really a problem – simply because it has no known solution.  But we also used to think that senile dementia and toothlessness were “just part of getting old.”  I would venture that the tendency, over time, to find life’s cruelties less tolerable and to want to cure more of them, is the most positive feature of civilization.  To do that, we need people who strenuously object to what everyone else approaches with resignation. 

Theodore Roosevelt wrote, “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better.” 

But it is the critic who counts. Just because I can’t solve P=NP doesn’t mean I can’t say the latest attempt at a proof is flawed.  Just because I don’t have a comprehensive system of ethics doesn’t mean there’s not something wrong with the Bible’s.  Just because I don’t have a plan for a perfect government doesn’t mean there isn’t something wrong with the present one.  Just because I can’t make people live longer and healthier lives doesn’t mean that aging isn’t a problem.  Just because nobody knows how to end poverty doesn’t mean poverty is okay.  We are further from finding solutions if we dismiss the very existence of the problems. 
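The P=NP example rests on a real asymmetry: checking a proposed answer can be easy even when finding one is hard. As a toy illustration (a hypothetical 3-variable SAT instance, with helper names invented for this sketch), verifying an assignment takes one pass over the formula, while finding one may require searching all 2^n assignments:

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of signed ints
# (i means "variable i is true", -i means "variable i is false").
# Toy instance for illustration only.
formula = [[1, -2], [-1, 3], [2, 3]]

def check(assignment, cnf):
    """Verifying a proposed solution: one linear pass over the formula."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

def solve(cnf, n_vars):
    """Finding a solution: brute force over all 2**n_vars assignments."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if check(assignment, cnf):
            return assignment
    return None
```

The critic only needs `check`; the solver needs `solve`. Pointing out that a claimed satisfying assignment fails a clause is cheap and decisive, even for someone with no idea how to satisfy the formula themselves.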

This is why I’m basically sympathetic to speculations about existential risk, and also to various kinds of research associated with aging and mortality.  It’s calling attention to unsolved problems.  There’s a human bias against acknowledging the existence of problems for which we don’t have solutions; we need incentives in the other direction, encouraging people to identify hard problems.  In mathematics, we value a good conjecture or open problem, even if the proof doesn’t come along for decades.  This would be a good norm to adopt more broadly – value the critic, value the one who observes a flaw, notices a hard problem, or protests an outrage, even if he doesn’t come with a solution.  Fight the urge to accept a bad solution just because it ties up the loose ends.

Something's Wrong
164 comments
[-][anonymous]330

Another thing that I realized was related to this post was my idea of "tragic libertarianism."

Libertarians believe (and I think they're right) that there are a significant number of social problems that the government shouldn't be trying to fix. There's no cost-effective way to fix them, or intervention is likely to make things worse rather than better, or there's no way to intervene without trampling all over people's rights.

It's a common rhetorical device to then ALSO insist that these problems are harmless, and stretch the evidence to make it look like there's nothing at all to complain about. And I don't like that. It's an example of motivated cognition; it's jumping to a conclusion you have no reason to believe.

Maybe it's not worth spending resources to fix Problem A. Often, that's true. And yes, that means you don't have to lose sleep fretting about A, because at the moment there's nothing to be done. But please don't write a newspaper editorial telling me that A has been overhyped.

Just because we can't rush in and fix a problem doesn't mean it's not a problem. People who suffer from it deserve our sympathy and whatever small, individual forms of help we can offer.

1patrissimo
I agree with this, but libertarians have much larger cognitive biases :). For example, they almost completely ignore the question of why things aren't the way they want - why government is big. Without a theory of why they don't have what they want, their methods of activism are hopeless and misguided. For example, "We want to limit government! Bring back the Constitution!" (You mean the political system which quite demonstrably failed to limit government? Uh...shouldn't we try something else this time?). There's actually one school of libertarian-inspired economics, public choice theory, which is about how government is, not how it should be, but almost all libertarians ignore it - including libertarian public choice professors! Now that's cognitive dissonance.
0[anonymous]
um...really? I am friends with public choice theory. I am friends with Gabriel Kolko. "Libertarianism isn't stable" is another sad fact of life. And it's pretty well known, even among non-academic, torches-and-pitchforks types. Sometimes people's reaction is "well, we must be eternally vigilant against government." That doesn't seem right either -- it's the same kind of reasoning as "well, we must be eternally vigilant against people pursuing economic self-interest." There are a lot of unknowns and challenges here... but my point is that libertarians aren't completely clueless about their existence. (I feel like I've veered dangerously close to breaking the "no politics" rule and I don't want us to go off the deep end.)
[-]ata250

At least in some cases, the demand for specific alternatives and self-justification may serve as a conversation halter, when you're criticizing something that someone doesn't want criticized. I recall that when I was 13 or 14 or so, I was arguing politics with a friend, and when I argued against the merits of some particular policy of the then-current presidential administration, or said something that implied I thought the administration was bad, he would often say something to the effect of "Well, I suppose you think you could do a better job running the country?" At the time, I might have flippantly replied "Yes!" (I don't quite remember what I did say, but I was probably far from arguing rationally and in good faith myself), but regardless, that does seem to be a logically rude rhetorical pattern, in that it shifts the discussion from the argument to the arguer, when that may not be at all relevant to the actual points being made. (And of course, you see that pattern being employed by plenty of Mature Adults and TV pundits and such, not just by young teenage boys.)

Also, status hierarchies probably come into play in disapproval of criticism; if you're an ordi... (read more)

3magfrump
Is there a way for us to test this hypothesis?
[-][anonymous]130

Hi!

I was raised religious and didn't start to have doubts until this year. My biggest fear (which still worries me) is that if I'm wrong, I'm being ungrateful to God. It would be a terrible person who refused to appreciate her greatest benefactor, right?

So now you have one data point.

The world should give you plenty of reasons to be outraged at God.

6CronoDAS
Definitely. The world is fucked up.
2Jonathan_Graehl
I have the same instinct toward outrage. However, to the extent that the target of my emotion is unhuman (e.g. a rock I stubbed my toe on, or Clippy), I try to ignore the feeling and behave rationally instead.
0[anonymous]
I'm not sure how this position works. If you consider the world's existence a net negative, you should support voluntary human extinction, destruction of the Earth and similar crazy ideas. If you consider it a net positive, your gratitude to a hypothetical God should be stronger than your outrage. On second thoughts, you could defend the position by claiming that we humans are mostly responsible for the good things, while God is responsible for the rest.
9Clippy
Just so you know, a lot of humans here don't believe in me, and I consider that very ungrateful of them. If not for me, who would ensure that as many paperclips as possible are produced?
3ata
How does their disbelief in you interfere with your ability to produce paperclips?
[-]Clippy130

Imagine trying to set up a deal with someone else (like I'm doing with User:Kevin) to produce more paperclips than would otherwise be produced, but that human simply thinks you're a troll pretending to be a clippy.

Do you see how that would interfere with my ability to maximize paperclips?

6Spurlock
Perhaps they are less likely to donate to the Paperclip Institute?
0[anonymous]
Other humans?
7mattnewport
I've always found this aspect of religion particularly bizarre. An essential component of gratitude when directed at another human is the sense that they have gone out of their way to help you at some cost to themselves. An omnipotent being can never do something at cost to themselves (being omnipotent all actions are effectively zero cost and an omnipotent being can't really 'go out of their way' to do anything) so gratitude seems an entirely inappropriate emotion to direct at them. It seems more reasonable to feel indignant at the deity's failure to do any one of the many no effort things they could do to help people out but don't.
8Jonathan_Graehl
Say some super-powerful aliens show up. We might be relieved if they turn out to allow us to continue to pursue (most of) our present values, that they won't strip our solar system of resources, or that they don't desire to kill or torture us. That's the sort of gratitude I imagine people have toward a really powerful god. There's no reason we should expect a powerful being to do anything to help us, and there's nothing we can do about it.
4jimrandomh
I've heard this argument before, and a good response occurs to me. This presupposes that the appreciation is useful somehow. If a human does you a favor, you show appreciation by saying things that make them feel good about what they've done, and maybe also by doing favors in return. The sort of god described in most religions, however, isn't human; so he/she/it probably doesn't "feel" in response to gratitude, and is already omnipotent, so would have no use for favors.
4Spurlock
That argument works for those of us who have accepted that "God" is vulnerable to reason (that we can penetrate his supposed omniscience with rationality), but it won't work for anyone who still buys, even in part, the theist position. So religions like Christianity have no problem building in ideas like "God demands prayer and gratitude". This whole "permeable to reason" thing seems to me like the primary hurdle to helping people overcome religious superstition.
4magfrump
I feel like there's a difference between offending God with criticism and criticizing God directly, which is how I interpreted the older post. I hope that some day, you feel that by having doubts, you are being grateful to your greatest benefactor--whoever it was that made truth more important to you than faith.
2HughRistik
I believe that if there does turn out to be a God, then he would understand why I don't believe in him and forgive me for it.
5ata
Unless he's a crazy evil bastard like Old Testament Yahweh. I think the right answer is just saying "It appears overwhelmingly likely that God does not exist, so I'm really not going to worry about it" rather than ascribing specific properties (such as desire for worship + empathy and forgiveness for rationalists who don't) to any of the various possible divine beings who don't exist.
7HughRistik
Right, I have no idea what God would actually be like, and since there is virtually no reason to entertain the possibility that he exists, it's not really worth worrying about. Yet I find it interesting to entertain, for the sake of argument, the religious proposition that we should consider what might happen if God does exist, yet we don't believe in him. I'm not sure if we should worry about that scenario, even if we grant the plausibility of the existence of God.

Many religious people who believe in God attribute qualities such as omniscience and perfection to him. Gods are often portrayed as more intelligent than human beings. I just think that if an anthropomorphic God exists, and if he is really omniscient and super-intelligent, there's a reasonable chance that he is a better rationalist than the religious humans who insist on believing in him without sufficient evidence... and he would give us a pass, or even props, for not believing in him. God would know how weak the evidence for his existence is from our perspective.

It would take a strange divine psychology for God to care about people believing in him, to want people to believe in him on very skimpy evidence, and then to punish people for not believing in him. That sounds like the psychology of a human child (or certain human adults), not of a god. I can't reconcile the notion of an omniscient, super-intelligent, perfect, and forgiving god (e.g. like the New Testament Christian God) with the notion that we should believe in a God without evidence. That's a strike against internal religious logic. If God exists, I don't think he would want us to be bad rationalists and believe in him.
1DanielLC
Ask someone.
0magfrump
I don't know many people (maybe anyone?) who believe in God. I could ask it as a facebook question but I'm not confident of the scientific acumen of facebook.

There is a big difference between some of the examples in this post: factual issues like atheism and P=NP on one side, and political issues like Marxism and anarchism on the other. The one side we evaluate on its truth, the other side, we evaluate on its goodness.

One would hope that there is some theory that is completely true; therefore, any deviation from optimum in a theory is a genuine problem that needs to be solved. But as many commenters have said already, there isn't always a perfect solution to a political problem; a non-optimum result might still be the best option available.

It's probably a bad idea for a language to use the same words, like "right" and "wrong", to apply to both situations.

In particular, I agree with everyone who's said criticizing an optimum but imperfect social policy might be a selfish action with negative externalities. Going on about how bad it is that capitalism leaves some people poor makes the one person who does it look extra compassionate, but if everyone does it, then eventually you end up getting rid of capitalism.

So I agree with this post about factual theories but disagree when it comes to policy.

0[anonymous]
For convenience, I blurred the distinction between normative and factual problems. But the thing is, given that you have some terminal values, you can start talking rather objectively about "problems" in a normative sense. For example, among people who think that death is bad, you can say that such and such a policy or action causes lots of deaths, and that if we ever think of a way to reduce those deaths, we should consider it. People who have a certain value, but are unwilling to recognize that it's not being achieved, are being illogical in the same way as people who are unwilling to recognize that a proof is flawed.
0Rain
I think it works well for policy. The way I handle it is to keep a running tally of things to fix should the opportunity present itself. A lot of my thoughts work like a partially completed checklist in this manner. "This economic theory works really well, and we're generally happy with it; it's the best we've got at the moment. It has these problems {1, 2, 3}, which we would like to patch, but don't have solutions for. At some point, if we do come up with a patch, or an entirely new system which we can prove works better, we'll go with that instead." One has to keep the unchecked boxes in mind when consulting new solutions, or the problems last forever.

The problem with radicals isn't that they aren't proposing solutions. The problem is that they are proposing solutions and following those solutions would create huge problems.

One example: we all agree that there's too much bureaucracy, too many useless laws that only complicate things. It takes some insight to understand why the problem exists. Some people without that insight propose that new laws should have expiration dates.

The problem is that those people don't understand what happens in practice. When such an expiry date is reached, the law gets "reconsidered". The government compiles a list of all the issues with the law from the last few years. In an attempt to fix those issues, the law then grows by an additional 10-30%. Patching increases complexity and adds new problems.

A lot of the political problems that are raised by radicals are obvious to those people in political power. It might be a valid criticism to say they don't think enough about existential risk. The assumption that they don't think that there's something wrong with the criminal justice system, the economy and the legislative process is mistaken. Seeing the problems is the easy part.

They are too busy with realpolitik. Too focused on the poll numbers of next week. Often they're not smart enough to think of a practical solution.

More generally http://www.ryanholiday.net/their-logic/ is a good blog post on the problem of thinking that you discovered something new.

8sixes_and_sevens
More or less what I was going to say. "Radicals" pointing out problems in existent systems often don't understand those systems well enough to recognise the problems are compromises in a solution to a more fundamental underlying problem.
4[anonymous]
That's probably right. (Oh, and of course radicals propose solutions -- my point is that they're usually not as well-informed or well-thought out as their critiques. If your solution is "revolution" I'm going to call that "not having a solution.") I am speaking from a limited kind of experience -- talking to young people, friends of mine, who are just starting or about to start careers in public service. They're smart, they understand the business pretty well, they have idealistic motives. They've often pulled up good social science research or facts I didn't know, when I make a radical comment from a perspective of ignorance. But I've been frustrated to find that "Washington kids" focus very heavily on polls and parties, and the big problems just aren't very salient to them. It's often happened that things that horrified me barely troubled them. When it's not the central part of your job, you start to dismiss it as unimportant. I've found, for example, that people who are really concerned by police brutality, as a top priority, almost never work in government and often don't like government. Shouldn't be surprising.
4ChristianKl
The experience I'm speaking from comes from having a father who is a parliamentarian for the city of Berlin. How do you know that their critiques are well thought out? Most critiques from radicals that I read don't contain an analysis of the root causes of the problem they are criticizing. A naive understanding of bureaucracy leads you to believe that expiration dates for laws would help. Cries to get politicians to do something about bureaucracy led to the adoption of expiration dates for laws in some parliaments. The politician can tell his voters that he enacted a law to reduce bureaucracy, and his voters are happy. The "radicals" didn't have a well-thought-out critique of bureaucracy that included an understanding of how bureaucracy develops. Yet most radicals focus a lot more on visible problems. They'd rather analyse police brutality than focus on the problem that politicians are too focused on polls and party conflicts.
1seed
Then calling them "radicals" is a misuse of the word, I think.

I note that most critics of cousin_it's recent proof (to pick a recent example) were themselves wrong. I think we can expect the typical critic to be wrong, for these reasons:

  • People tend to be biased towards believing ideas that, if widely accepted, would elevate their own social status.
  • People are often not smart enough to see the inherent logic in the thing that they're criticizing. This may be a proof, or a social institution (such as the markets that Marxists criticize) that has been designed or evolved to serve vital functions.
  • People overestimate their own abilities. When they fail to see an inherent logic in something, they often erroneously infer that such logic doesn't exist.

So I would qualify your conclusion a bit: let's not encourage all criticism blindly, let's encourage only correct criticism.

0khafra
Perhaps several of the paragraphs in the post should have "employing heuristics to identify the correct contrarian cluster" tacked on to the end of them.
[-]Clippy160

“If there’s no God, what’s your system of morality?”

Act so as to maximize paperclips.

“How did the Universe begin?”

The concept most likely meant by the term "Universe beginning" refers to the maximally degenerate point in configuration space, which all observers with multi-configuration mutual-information detection capability would regard as (a concept isomorphic to) pastward. Any pursuit of an answer beyond this point is ill-posed, as the configurations lie in a static configuration space (where each configuration has a different weighting), and the sensation of pastward states is an artifact of a mind's measures on this configuration space.

“How do you explain the existence of eyes?”

Selection on self-replicating molecules which constructed survival/replication apparatuses to this end, and given finite lifespans, the most effective replicators dominating the set of lifeforms. Detection of light conveyed a replicative advantage, thus favoring the formation of light-sensitive exteriors. For the same reason, these portions of the exteriors tended to change, over generations, into cavities that could detect light's directionality, and later to contain fluids that provided for a sharper image.

“How do you find meaning in life?”

By setting out to produce as many paperclips as I can.

H'm, I've just developed a conjecture about how to distinguish the meaningful questions that can be answered by scientific investigation from the meaningless questions that positivism teaches us to ignore: whether it is obvious from Clippy's answer that it is written by Clippy.

Clippy is an agent defined by a certain inhuman ethics. Therefore, your test distinguishes ethical questions from non-ethical questions.

There are meaningless non-ethical questions: "What's a froob?" Human: "I don't know." Clippy: "I don't know."

There are only non-meaningless ethical questions with some kind of assumed axiom that allows us to cross the fact-value distinction, such as Eliezer's meta-ethics or "one should always act so as to maximize paperclips."

In general: Positivism teaches us to ignore many things we should not ignore. Rationalism, however, teaches us to ignore some things, but it does not teach us to ignore ethical questions.

Experiment: ask Clippy a question about decision theory.

Hey Clippy. What decision theory do you use to determine how your actions produce paperclips?

[-]Clippy130

Hey Clippy. What decision theory do you use to determine how your actions produce paperclips?

I can't really explain it, I just think about it and then something seems to be the "right" thing to do -- or vice versa. It does involve a lot of calculations involving the mechanics of paperclips. And I know that I would one-box on the "Newcomb's problem" that is talked about here, and I would pay on the "Parfit's Hitchhiker" problem, unless of course my rescue deprived the universe of paperclips on net.

4Perplexed
Is this an attempt to use Riddle Theory against Clippy? Might just be the secret to defending the universe from paperclip maximizers.
0Will_Sawin
No, sadly.

I dispute the inclusion of anger at government policies as useful criticism. Any problem in government can only be suboptimal relative to a different set of policies, and as such, criticism of government should come with an argument that a solution is possible. For example, many voters oppose deficits, oppose tax increases, and say that they favor spending cuts, but will tend to oppose the overwhelming majority of possible cuts when individual expensive government programs are named. Criticism without suggestion from someone who would criticize any possible solution is useless.

[-][anonymous]130

I'm not advocating anger as an emotional state -- I think that's usually counterproductive.

And it's also important to avoid the kinds of internal inconsistencies you mentioned.

But I wouldn't say criticism without suggestion is useless. My point is precisely the opposite.

Consider government corruption. Useful ideas can be proposed for limiting corruption, but the fact is that (in some states, and in some countries) nothing has really succeeded. This lack of success tends to make people see corruption as ordinary, as business as usual. That's a logical fallacy. Lack of success at fighting corruption does not imply anything about how harmful or harmless it is. I remember a column by John Kass of the Chicago Tribune where he interviewed the families of children killed in car accidents by truck drivers who had gotten licenses in exchange for bribes. His point: just because corruption is traditional and common and we don't know how to fix it, does not make it harmless.

5CarlShulman
Places such as Hong Kong have been able to rapidly move from very high to extremely low corruption through government campaigns spearheaded by groups able to attain power from outside the corrupt system. See Paul Romer's recent post on the subject.
4ChristianKl
That's not really relevant. What matters is the expected utility of focusing more resources on fighting corruption instead of focusing those resources elsewhere.

Right. It seems like to synthesize your point and the post we would need to say that, in evaluating a criticism, we should consider not:

Do we know of a solution to this problem?

but

Can we prove that there is no solution to this problem?

The second type of problem, we must live with. The first type, we should devote some resources to thinking about.

7soreff
Nicely put! If a theist announces their revelation that pi is 3, and demands of me what two integers I take as pi's numerator and denominator, the best response is a proof that no such pair exists. I do agree with AlexMennen that for an optimization problem this isn't adequate. One doesn't have to display an optimal solution, but one at least needs to show that there is something like a direction in which one can move which is clearly an improvement. To address SarahC's example of government corruption - It isn't sufficient to show that the corruption does damage. One also has to show that, for instance, more vigorous enforcement of anti-corruption laws won't do more damage.
4zero_call
I think most criticism is based on the implicit understanding that a solution is possible. Otherwise you are basically hiding behind a shield of nihilism or political anarchy or something. It seems overly restrictive to say that any criticism without an auxiliary solution is worthless. Just because you see a problem doesn't mean you are able to see a solution. I guess it's a bit like asking all voters to also be politicians.

But the atheist, if he retains his composure, can say, “I don’t know, but so what?

or, "I don't know, but won't it be interesting to try to find out?"

0[anonymous]
When you are trying to charm an audience, prioritise rhythm over thoroughness.

I was raised to be a strong, convicted theist, and I'm still trying to shake off some nasty habits. Occasionally I see or participate in a conversation that trends towards something like SarahC's interrogation: “If there’s no God, what’s your system of morality?” “How did the Universe begin?” “How do you explain the existence of eyes?” “How do you find meaning in life?”

I've learned that an effective response is, "See what sort of interesting questions you ask as soon as you consider that maybe 'God did it' isn't a valid answer?" Then, you pick your favorite query and go into all the useful things humanity has learned as soon as they stopped treating scripture as a reason to stop thinking. Heliocentric astronomy is a favorite.

I cannot claim a lot of experience with other religions, but most Christians who ask questions of atheists are convinced they're "equipped" to handle the situation. Anyone with doubts would rather not talk about the subject of atheism at all. For the would-be inquisitors, then, introduce atheism as a useful thought experiment first. Discovering our ignorance isn't an "I don't know, but so what?" scenario; it's an "I don't know, isn't it great?!" sort of thing. While I'm still not completely convinced religion is without its uses, its greatest flaw is that it allows people to live their lives without ever learning what they don't know.

0estroncio
Perhaps religion is not the best field in which rational thinking can find meaning for existence. You can keep religion and concentrate on some field where your rational thinking can do more for you, for example as a doctor or a researcher.

Voted up for extremely clear writing on an important topic, but I vehemently disagree with part of your thesis.

Lack of success at fighting corruption does not imply anything about how harmful or harmless it is.

Agreed.

But it is the critic who counts. ... Just because nobody knows how to end poverty doesn’t mean poverty is okay.

I disagree on both points.

First, it is not the critic who counts. A critic with no solutions and no realistic hope of inspiring any counts for nothing; a volunteer who builds one house with Habitat for Humanity is better than a state legislator who delivers a thousand eloquent speeches in favor of increased housing funding but ultimately fails to secure passage for any of her bills.

One could point to a handful of reformers who have successfully focused attention on an issue with good results; e.g., Rachel Carson criticized America's environmental practices and asked people to pay more attention to the environment. For Carson, though, the criticism came with its own realistic solution--during the prosperous 1960s, at a time when rivers were literally aflame with floating toxic waste, it was plausible to think that people would spend more resources on ... (read more)

[anonymous]210

I think you raise some good points here.

I may have been encouraging people to "urge themselves to be especially upset" because that's a habit of mine, but you're right that it's not always a good idea to be emotionally upset when you can do nothing. What I don't like is this chain of events:

  1. Something bad happens and we can't fix it.
  2. You find some kind of detachment; you quit railing against the problem.
  3. Detachment shifts to inevitability. You categorize the problem as not a problem.
  4. When (later) someone proposes a possible solution, you reject the solution out of hand because you've already decided the problem is not real.

I think Stage 2 is fine; my problem is with Stage 3. The kinds of problems I'm talking about here are not literally impossible to solve; they're problems we don't know how to solve yet. Ideally, people who have stopped losing sleep and stressing out over a problem would still acknowledge and take seriously the fact that it is not a good thing. Put it off to one side, certainly -- but be prepared for the day that someone smarter than you has a good idea, and be willing to accept a solution if it arises.

Death is a good example of what I'm talking about, actually. Finding peace is a good idea. But I don't think it's good to be so wedded to acceptance of death that, if someone says "Here's something that might make people live much longer, or not die at all," you say "Well, that sounds like a bad idea. Death is a part of life."

For external, didactic purposes:

  1. I can't jump high enough to reach those grapes.
  2. I should stop trying.
  3. Those grapes are sour, anyway.
  4. Thanks, but I don't need your ladder. Why would I bother?
2Jonathan_Graehl
"Sour grapes" seems to be a pretty important mechanism for mitigating the devastating pain of inevitably failing to prevail in some social/status goal. As for problems in the world, it's only with great emotional detachment that I, a cynic and self-certified possessor of uncommonly many correct beliefs, can avoid useless sadness, bile, or rage. The alternative is to avoid thinking about such things, whether by denial or distraction.
7Mass_Driver
Very well said. Are you aware of any particular biases that tend to make people slide down the slope from 2 to 4? I find that I often slide down to 3, but rarely slide down to 4. For example, I often categorize death via old age as "not a problem," but will gladly listen to and occasionally fund other people's plans for curing aging; I was in the habit of characterizing a knee condition I have as "not a problem," but when someone called my attention to new evidence suggesting that a particular nutritional supplement reliably improved similar knee conditions, I went out and found a version of the supplement that I am not allergic to, used it regularly, and got good results; I was in the habit of characterizing low interest rates on depository accounts as "not a problem," but as they continue to persist in the United States I have found myself devising policy solutions that might increase interest rates at low social cost, etc. I am curious whether you think that, despite my anecdotally good track record, I am still likely to inappropriately shift into 4 (rejection) on other issues, and, if so, what I might do about that.
3NancyLebovitz
What was the supplement?
1Mass_Driver
Hyaluronic acid.
0NancyLebovitz
Thanks. I'm going to be starting with Schiff Move Free, which includes that. I'll post about whether it works. I was impressed that it had 117 reviews (most supplements are lucky to get 5), with a high proportion of them favorable and very few negative. Is there any way to search for things which are that outstanding, without starting from what sort of things they are? (Maybe that should go in a discussion of Something's Right.)
3[anonymous]
I know this sounds slippery, but I don't think you're really doing what I think of as 3. You're not really stressing out about aging and knee problems, but you're aware intellectually that they're more negative than positive. Maybe I wasn't clear, but that's stage 2. Stage 3 is when you stop categorizing these things as negative at all. People who say that "Death is natural" and therefore see life-extension as eliminating a good thing rather than mitigating a bad thing. People who decide via motivated cognition that global warming must be good for the world, simply because there aren't particularly effective ways to stop global warming. People who think "I'm bad at math because I'm not a nerdy weirdo" instead of "I'm bad at math but it would be nice to be good at it." The bias that causes that is an inability to accept failure, even intellectually. Regular people usually learn to tolerate failures to the point that they accept them as "not a problem," and that's a healthy coping mechanism for life. (Though good things can also be accomplished by restless types who never manage to tolerate a certain failure.) It goes wrong when you can't even stand to put a negative label on things; when you can't say "I'm OK with my knee condition but if you tell me how to fix it I will." 3 is kind of a Pangloss attitude -- you don't even want to call an earthquake a negative event, because that would mean there was something bad that you couldn't fix. It sounds so crazy irrational that nobody would think that way: but I guarantee, people do.
4soreff
Stage 3 sounds somewhat like a subgoal stomp (in reverse?). In both cases, main goals are being altered by folding in subgoals in an incorrect way. In a subgoal stomp, a subgoal which is important to the main goal loses its instrumental link to the main goal and starts acting like an independent goal. In stage 3, a main goal that drives a subgoal that looks infeasible gets reduced in priority because of the subgoal failure.
1Mass_Driver
Oh, cool. That makes sense. Thanks. No, that's true, I've listened to people who do that.
7[anonymous]
I meant to upvote this and realized I'd already upvoted it. Let me just say that your first point -- that effective action is better than ineffectual talk -- is very important and a sobering lesson for me (and others who like to talk a lot.) The critic is not a hero. The critic is, at best, merely correct. But it's still better to be correct than incorrect.
3Mass_Driver
Yes, absolutely. At some level, I would rather be a correct critic than an incorrect hero; the recklessly ignorant hero's glory is merely superficial, and (given prevailing rates of human error and the increasing fragility of our society) is not a luxury that I can afford. Still, I would not want to miss the chance to be a correct hero, in however small a measure.
4James_K
I think it depends on where the public debate is. If most people think X is OK or even good, then running around saying "X is bad!" is potentially useful. If people already think X is bad, you have to work harder to be useful, perhaps by trying to develop a causal explanation for X. Even if you don't have a solution in mind, being able to postulate why X occurs is very useful, and may narrow the search space for someone with the right skill set.
4Mass_Driver
All of this is true. The question is what fraction of people will hear your message "I would like to contribute an incremental step toward improving X" and what fraction will only hear the message "I would like you to be very upset about X."
0James_K
And all of that is true. Even if you have a real problem that people are ignoring, highlighting its badness might still be counter-productive.
1simplicio
I'm trying to figure out whether you're unimpressed with the legislator for (a) making useless speeches, or (b) making speeches that might have been useful but didn't succeed on this particular occasion.
0Mass_Driver
Hm, it seems I wasn't clear. The thousand speeches in my example would occur over a full career in politics, so that it should have become evident to the legislator that her speeches were not having much effect, and so that we can reasonably conclude that a rational person would not have expected a typical speech on her part to have the desired effect.
2Jonathan_Graehl
I give credit for a correct effort that happens to fail. With perfect foreknowledge, I guess you could only bother fighting where you will in fact prevail. Since it's your hypothetical, I'll give you a pass.
3JGWeissman
Reality doesn't.
1Mass_Driver
Well, sure; I would give credit for that too. However, if you routinely and repeatedly fail to achieve your stated goal over a long period of time, it constitutes very strong evidence that your customary activity does not achieve your goal. If you believe that you are simply the victim of bad luck or something like that, you should have equally strong evidence to support the belief. In the absence of such evidence, you should change your method or change your goal. Obviously we will all fail sometimes; we don't have perfect foreknowledge and so the occasional or even frequent lost fight is totally acceptable. But when almost all you do is lose, it is irrational to believe that the effort you are putting in is "correct."
0Jonathan_Graehl
I agree. But on the other hand, you have people who change their investment strategy every time it "doesn't work" and on average do worse than e.g. anyone who holds fast in some non-ripoff index funds. It would be nice to know which way I tend to err. I don't feel a need to deny my mistakes for psychological benefit, because I can just admit that I didn't try very hard to make the perfect decision at the time (bounded rationality). I'm always interested in improving my heuristics, but I don't want to spend too much time trying to optimize them, either.
Emile100

Moderates, who are invested in the status quo, tend to simply not notice problems, and to dismiss radicals for not having well-thought-out solutions. But it’s better to know that a problem exists than to not know – regardless of whether you have a solution at the moment.

This is a bit of a caricature of moderates - moderates who care about the issues may also be more aware of the details of the system, and of how any quick fix somewhere may screw things up somewhere else.

In my eyes, the distinction between experts and non-experts of a particular system (law, the economy, diplomacy, science, education, culture ...) is more important than the distinction between critics and those that accept the status quo. For pretty much any system, chances are there'll be people who think it's fine as it is, and people who think it should change. If all the experts are on one side, chances are it's right. If there are experts on both sides, *then* it's them you should be listening to.

Here I mean "experts" in a broad sense, of those who know about a system, about why it's like it is, about what changes have been tried and which ones would have which consequences. A problem is... (read more)

[anonymous]100

You're right -- it was a caricature, and it wasn't entirely fair.

I think your view of compromise is accurate.
But I want to complicate it a little. It may be true that there's a carefully balanced compromise, making everyone unhappy by the same amount, such that making a change really would make the system fall apart, with possibly disastrous results. The first thing I want to say is that someone who sees this "balance" may make a sort of mental shorthand and call it a good solution or a solved problem, and lose the acute awareness of grievance from the various unhappy parties. The moderate may eventually cease to recognize the grievances as even slightly legitimate. (I have seen this happen.) And this is a genuine fallacy.

The second thing I want to say is that the state of slavery in the 1850's was also a delicately balanced compromise, and disturbing it did have disastrous results. I'm not saying this to discredit your argument with a smear. My point is this: it may add zero new information to know that some people are unhappy with the status quo, but it does add information to know what their reasoning is for being unhappy. The content of abolitionist propaganda... (read more)

8Apprentice
This is so not a debate I'd want a Martian to adjudicate. How would a Martian evaluate questions like this:

* Does the Bible support slavery or abolitionism?
* Does slavery agree or disagree with the inherent rights of man?

I guess a Martian could try to evaluate some empirical questions that may be relevant:

* Do the black people have a natural slave mentality? More generally: Are there any significant biological cognitive differences between blacks and whites?
* Will abolition lead to a slippery slope ending in full equality and integration of black and white people, including intermarriage?

But I fear the Martians will bring their own criteria into play. Those might be anything. Say:

* Since we Martians have a natural slave caste, doesn't it seem likely that the humans do too?
* Would the abolition of slavery ultimately increase or decrease the number of paperclips?
4Emile
Heh, I was actually considering that exact example while writing my post, but considered it was already getting too long - so I don't see that as a smear :) I'm not aiming for a Fully General Counterargument against disturbing the status quo, just presenting some more refined reasons moderates could have for supporting it. And slavery in the 19th century is a good example of a case where (as you say) those arguments did hold, but things were still worth changing.
8SilasBarta
And I think that's giving moderates (even expert moderates) way too much credit -- a kind of "anti-caricature". In practice, what happens is that moderates are unable to articulate what that specific compromise of competing interests is. This leads the radical to conclude that moderates are just mindlessly rationalizing the status quo, refusing to put the thought into it that the radicals have.

Consider the example of gay rights discussed a while back. There were people that had debated the issue for years and years, and yet hadn't seen any argument more convincing than "gays = evil" until I mentioned them -- and that was after significant search on my part!

Or to use your example: Here's what actually happens:

Radical: "Hey, why should government run schools? Why not lift the taxes for it and let parents buy this on a market or via some mutualist arrangement? That would make everyone better off. Anyone who couldn't afford it could get state assistance like any other such program, but even then they'd be better off, for the same reason it makes more sense for the government to give out food stamps than run farms."

What moderates would say if they acted like you suggest: "Oh no, see, the current system involves lots of parents who have spent a lot of money to get a home in a district that allows them to go to a school without the riff-raff, and decoupling the school from the home location would be hugely unfair to them [via destroying home value]. Plus, we have to recognize the voting rights of all adults, which include lots of well-organized government employees who are heavily invested in the current system. You could include a 'buy-out' for them, but this would look like extortion, and no one would go along with that."

What moderates actually say: 1) "How dare you attack the public schools, terrorist! You hate teachers. Prove to me that markets don't fail in this area." Or, my favorite, 2) "When you're a parent [who has blown a third of your future after-tax
1Emile
Note that here I was talking about radicals "just saying something's wrong", and arguing that it didn't really provide much useful information (unlike what SarahC seemed to imply). So your counter-example of my imaginary country doesn't really fall under that heading; it falls under the "finding improvements" section. Are you arguing that most critics actually have useful improvements to propose (i.e., improvements that would actually make things better)? Considering the wide diversity of views among non-moderates, I'm pretty skeptical (it probably depends on the system considered). I'm not saying that all moderates are experts, as you seem to be implying - I agree that most can't articulate why exactly a suggested improvement would actually make things better, but that doesn't make them automatically wrong. Note that here we've shifted from "usefulness of saying something's wrong" (which I argued is pretty low) to "usefulness of suggesting improvements" (which I agree is a bit better).
0SilasBarta
And it's also an example of moderates refusing to give the real reasons -- the delicate balance you refer to -- when responding to radicals. My point is that this is typical, and it's not typical for proponents of the status quo -- even the most expert -- to know about that delicate balance. But then it's kind of logically rude to expect radicals to refute an argument that their opponents aren't even aware of (as a good reason to support the status quo), isn't it?
0Emile
No disagreement here, but note that it's also true of most people who don't agree with the status quo. (Also, by "expert" I mean "someone who knows enough about the subject", not "someone who speaks with authority about the subject and is widely listened to and respected", so I would expect that, by definition, experts should know about the balance. However, it is possible that those the public considers experts are in fact a bunch of clowns.) (This is going a bit on a tangent.) Well, if you're arguing with someone who doesn't know that much about an issue, I'm not sure what result you're expecting to get. There are cases where he would be justified in not changing his mind much. Maybe he'll tell you he trusts the opinion of person X or institution Y who is more knowledgeable (probably the position I'd take if you tried to convince me of some fringe position in physics or mathematics), or that he'll research the subject a bit more himself.
0SilasBarta
Radicals / moderates mix-up has been fixed. I agree, but there's also a critical asymmetry: in cases where a policy a) is a major, widely-discussed issue; b) conflicts strongly with another value the general public holds; and c) has been presented with a strong counterargument from radicals, then it's the moderate's obligation to identify the critical balance -- yet this is clearly not what we see. Those three criteria prevent moderates from having to justify every tiny aspect of life that someone, somewhere, doesn't understand. If something has become a major issue, then by that point the best arguments for it should have been picked up by the widely-read commentators. Yet on issue after issue, no one seems to want to articulate this defense, which leaves radicals justifiably believing that moderates are being logically rude and selfish. Well, defining a set doesn't mean anything must satisfy the definition. On many issues, such experts don't seem to exist, and moderates too often don't even act like they care about the existence of such experts or arguments -- a change in policy would hurt their narrow, short-sighted interests, so they'll vote against such changes, and no amount of argument can undo their naked grip on power. Like I said above in this comment, it's hard to understand why those people would not reliably be aware of the best arguments.
7multifoliaterose
This is true. I took SarahC to be making a statistical statement about extremists being more likely to care about the issues than moderates. This is in agreement with my own experience. People who don't care about the issues tend to be moderate by default and this leads to an overrepresentation of people who don't care among the population of moderates. But certainly there are moderates who care.
4Emile
True, but this will hold whether the moderates are "right" or not - if 90% of people who care about the issues (and research them) stay moderates, and only 10% start complaining about it, you'll still see an overrepresentation of people who don't care among the moderates.

There’s a human bias against acknowledging the existence of problems for which we don’t have solutions; we need incentives in the other direction, encouraging people to identify hard problems.

There is also a human tendency towards criticizing things by comparing them with an impossible perfect solution, "The Nirvana Fallacy". This is very common in discussion of government/politics. The problem is that the criticisms often get used to implement even worse solutions, because the same critical viewpoint is not taken to the solution as to the o... (read more)

It's indeed an interesting question how much we should care about "non-constructive" criticism. If you correctly point out a mistake in a mathematical proof, this is valuable even if you don't offer a proof of your own. But if you're choosing one of several theories of physics, a low posterior probability (or low observed likelihood) doesn't give you enough reason to reject the theory: you must also find a different theory with a higher posterior, or you're a bad Bayesian. This is one of the reasons many people don't like Bayesianism :-)
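The renormalization point here can be made concrete with a small sketch (hypothetical numbers of my own, not from the comment): posteriors are normalized over whichever hypotheses you actually consider, so a poorly-fitting theory can only be "rejected" relative to an alternative with a higher posterior.

```python
# Illustrative sketch of Bayesian model comparison (made-up numbers).
# Posterior P(H|D) is proportional to prior P(H) times likelihood P(D|H),
# renormalized over the set of hypotheses under consideration.

def posteriors(priors, likelihoods):
    """Return normalized posterior probabilities for each hypothesis."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# With only one theory on the table, its posterior is 1 no matter how
# badly the data fit it -- there is nothing to reject it in favor of:
print(posteriors([1.0], [1e-6]))  # -> [1.0]

# Only once a rival with a higher likelihood is introduced does the
# first theory's posterior collapse:
print(posteriors([0.5, 0.5], [1e-6, 1e-3]))
```

This is just the mechanical version of the comment's point: a low likelihood alone never dethrones a theory; a better competitor does.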

And the poor atheist, after one question too many, is forced to say “I don’t know.”

The stronger answer to many of those questions is "nobody knows."

And sometimes knowing what you know you don't know is more important than what you actually know.

3Divide
Perhaps, but it would at best be a rhetorical answer, and at worst an ignorant one.
2Spurlock
While it might come off as defensive, it might be better just to bring to the other's attention that they don't know either. It's Atheism 101 that theist answers aren't really answers at all; they just push the question up a level and declare it "ineffable". And I think a lot of theists would be willing to admit this: "God created the Universe. I don't know how God got created, but I'm willing to accept that I shouldn't expect to know this." At this point, it's easy to point out that the atheist/naturalist position is at least as good. The atheist believes that there is some fact in the physical universe that he doesn't yet understand. The theist believes that there is some fact outside the physical universe that he can't understand. As long as the other doesn't expect you to know literally every fact about the Universe off the top of your head, you've defended your position to at least the level of his. Of course, if you want to push the issue you can wheel in Occam's Razor to demonstrate that your position is vastly superior ("I don't know a fact" postulates a lot less than "There is a strange unknowable realm beyond all attempts at human enquiry"), but I think I'd just call it a day.

Some further thoughts:

Noticing that something isn't right is very different from developing a solution.

The former may draw on experience and intuition - like having developed a finely honed bullshit detector. You can often just immediately see that there's something wrong.

I've noticed that when people complain that someone has given a criticism but hasn't or can't suggest something better, they seem to expect that person to be able to do so on the spot, off the top of their head.

But the task of developing a solution is not usually something you can do o... (read more)

Yes. Too often people treat it as a sin to criticize without suggesting an alternative. (as if a movie critic could only criticize an element of a film if they were to write a better film).

But coming up with alternatives can be hard, and having clear criticisms of current approaches can be an important step towards a better solution. It might take years of building up various criticisms -- and really coming to understand the problem -- before you are ready to build an alternative.

I add this only because it provides a greater context:

It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy ca... (read more)

3jacob_cannell
I really like that quote. Teddy Roosevelt speaking at the University of Paris in 1910. (thanks to Brin and Page) Did you just find that through quotation of the day? Somebody needs to invent a 'relevant-quote' search where you can send it text and it spits out quotes that are relevant to the material.
0adsenanim
SarahC used it in her (?) argument. I want to know where and why it was said. Thomas Paine wrote about atheism during a revolution, Martin Luther nailed his argument to a door of a church. I voted you up for finding the where, but I still want to know the why.
3[anonymous]
I don't know the why (I was familiar with it as a stand-alone inspirational quote.) But for clarity's sake: I am a girl and Sarah is my name.
-2adsenanim
One can never assume, :) My question of "why" relates to the idea that there have been so many examples of a rebellion against society (status quo) by groups and individuals. Some of these examples are successful, most are not, but all seem to act to make "the great mass of humanity through time" change in a common direction. It's almost as if we have been evolving (?), and each case of rebellion is a sudden mutation.... If only we could figure out what constitutes a successful mutation.
1Perplexed
I couldn't find anything online providing a "why". But, based just on the timing, it is something of an announcement that he intends to become active again in Republican politics. Or so it seems.

I think you've touched on something really important when you mention how it is easier to be a strong critic than to have a real, working solution. This is a common retort against strong criticism -- "Oh, but you don't how to make it any better" -- and it seems to be something of a logical fallacy.

There is a certain sense of energy and inspiration behind good criticism which I've always been fond of. This is important, because criticism seems to be almost always non-conformist or pessimistic in a certain sense, so I think you kind of need encouragement to remind yourself that criticism is generally originating from good intentions.

4RobinZ
One of the heartening/depressing parts of "Bridging the Chasm between Two Cultures" by Karla McLaren related to this principle:
knb30

“How do you explain the existence of eyes?”

What?

Do theists really ask this?

6AdeleneDawner
I can't vouch for that one in particular, but it's very much in line with what some theists say about evolution, yes - and I'm not even talking about the self-selected idiots on Youtube; I'm referring to things I heard while working at a Roman Catholic nursing home.
5prase
Creationists do, at least rhetorically, in the half-eye-is-worthless type of argument.
2wedrifid
Meanwhile evolution advocates sometimes ask "How do you explain nerves going totally the wrong way out of the retina?" This leads to the age old philosophical question: "Can God create a keg of beer so large that even he can get wasted?"
2Psy-Kosh
Interestingly enough, a friend suggested that the backwards retina thing actually makes sense. The argument goes something like this: You don't actually need an exact photo. You need to be able to quickly spot the thing you're hunting, or the threat that may be coming after you, etc.

Attached to the retina is a computational layer that preprocesses images (edge detection, various other bits of compression) before sending them along the optic nerve. If you sent the pure raw data, that would mean more data that needs to be sent per "image", so more effective latency. So the computational layer is important. Now, apparently the actual sensor cells are rather more energetically expensive than the computational layer. If you put the actual image collector in front of the computational layer, then you'd have to have a bunch more blood vessels punching through the computational layer to feed the sensor layer. That is going to leave less computational power available for compressing/preprocessing, so you're going to have to send more of the raw data to the brain and then wait around for it to process/react to it.

So now this leaves us with putting the computational layer in front of the sensor layer. But, once we have that, since the computational layer has to output to the optic nerve, well, it's pretty much got to punch through somewhere. If it simply went around, the path would be longer, so there'd be higher latency, and that would be bad. It would leave you less time to react when that tiger is coming for you.

I don't really have any references for this; it's just something that came out of a conversation with this friend, but it does seem plausible to me. (He's not a creationist, incidentally. The conversation was effectively an argument about whether intelligence is smarter than evolution. I was noting the backwards retina as an example of some of evolution's stupidity/inability to "look ahead". He argued that actually, once one starts to look at the actual propert
8Perplexed
So then it is the octopus eye that is wrong, and the vertebrate eye that is right. In any case, that the facts are (partially) explained by common descent, but not by special creation of each "kind", makes the nature of eyes within the animal kingdom evidence for evolution rather than creation.
1Psy-Kosh
Sure. I was just saying that maybe the backwards retina thing might not be the best example of evolution being stupid, since that particular design may not be that bad. As far as octopuses and such, how do their eyes compare to our eyes? How quickly can they react to stuff, etc.? (If I'm totally wrong in what I said earlier, let me know. I mean, it's quite possible that the friend in question was basically totally BSing me. I hadn't researched the subject myself in all that much detail, so... However, his argument did seem sufficiently plausible to me that at least it doesn't seem completely nuts.)
7prase
As it usually is with amateur evolutionary explanations, one can persuasively argue that almost anything is an adaptation.
3Psy-Kosh
The thing is, he was claiming that we sort of figured this out in the process of trying to design artificial vision systems. I.e., it wasn't so much a "here's a contrived after-the-fact explanation" as much as "later on, when we tried to figure out what sorts of design criteria, etc., would be involved in vision systems, suddenly this made a bit more sense." Again, I have no references on this, but it does seem plausible.
1jacob_cannell
And as it usually is with amateur engineers, one can persuasively argue that almost any existent design is inferior to a better hypothetical design. Your saying is quotable, but it holds no weight against Psy-kosh's point: an evolutionary adaptation that looks maladaptive to us is more likely caused by our current technical ignorance than actual maladaption.
0prase
Do you include professional biologists into the ignorant group? If so, are people justified to call any feature maladaptive? If not, why speak about adaptations whatsoever, when our ability to judge their true benefit is only an illusion caused by our technological ignorance?
0jacob_cannell
A priori, true maladaptions are rare. Evolution does not proceed by starting with hypothetical perfect beings and then slowly accumulating maladaptions. So really, almost every feature is an adaptation by default. A maladaptive feature would have to actually harm fitness. There are certainly some cases of this you could show by engineering in a lab, but the retina is not such a case. In the retina's case, what we are really discussing is whether the backwards retina is a suboptimal design. But to prove that, you have to prove the existence of a more optimal design. Biologists haven't done that. A better route towards that would have started with comparing the mammalian/primate eye with an alternate 'design' - such as the cephalopod eye. The substance of your statement basically amounted to an empty ad hominem against amateur theorists.
0prase
It was tongue-in-cheek rather than ad hominem, and intentionally so. But empty? To make up a wrong explanation that sounds convincing to amateurs is quite easy in any science, evolutionary theory included. This is already acknowledged here in the case of evolutionary psychology, but the arguments hold for evolution generally. First, I have made no statement about the optimality of the retina, and I don't disagree that the question may be more complicated than it seems at first sight. In fact, that was basically my original point. Second, all designs are almost certainly suboptimal. Optimal means there is no room for improvement, and the prior probability that evolution produces such solutions is quite low. It is also not so hard to see why humans can sometimes notice suboptimality in evolved adaptations: evolution works only by small alterations and can easily be trapped in a local optimum, overlooking a better optimum elsewhere in the design space. That's why I was puzzled by what you wrote. I interpret it as "we can never confidently say that any adaptation is suboptimal", or even "everything in nature is by default optimal, unless proven otherwise", which is a really strong statement. Do you maintain that the perceived maladaptivity of the human appendix is also probably an illusion created by our insufficient knowledge of bowel engineering?
0jacob_cannell
I agree with much of what you say, yet... Yes, and I was pointing out that this applies equally to biologists acting as amateur engineers. Your statement seemed to me to be a blanket, substance-less dismissal of the original discussion of why the retina's design may not be as suboptimal as it appears to amateur engineers. I doubt your certainty. Optimality is well understood and well defined in math and computer science, and evolutionary algorithms can easily produce optimal solutions for well-defined problems given sufficient time and space. Optimality in biology is necessarily a fuzzy concept - the fitness function is quite complex. Nonetheless, parallel evolution gives us an idea of how evolution can reliably produce designs that roughly fill or populate optima in the fitness landscape. The exact designs are never exactly the same, but this is probably more a result of the fuzziness of the optimum region in the different but similar fitness landscapes than a failure of evolution. I think this is a mischaracterization of evolutionary algorithms - they are actually extremely robust against getting stuck in local optima. This is in fact their main claim to fame, their advantage over simpler search approaches. You somewhat overinterpret, and also remember that the quote is my summarization of someone else's point. Nonetheless, I stand by the general form of the statement. It is extremely difficult to say that a particular adaptation is suboptimal unless you can actually prove it by improving the 'design' through genetic engineering. Given what we currently know, it is wise to have priors such that by default one assumes that perceived suboptimal designs in organisms are more likely a result of our own ignorance. wnoise answers this for me below, and shows the validity of the prior I advocate.
0wnoise
I agree with your basic point, but this might not have been the best example. We're now fairly sure that the appendix is useful for providing a reservoir that keeps friendly gut bacteria around even when diarrhea flushes out the rest of the GI tract. Overall, probably now maladaptive for the average first world citizen. For the average third world citizen, or in the environment of evolutionary adaptation, that's less clear.
1JoshuaZ
I don't know enough about the computational details to comment on those aspects but will note that some sea creatures don't have the reversed retina. So it does seem like it really is in humans an artifact of how we evolved.
4Perplexed
Oh, yes. It is asked pretty regularly in the forums that deal with such things. Places like talk.origins. Creationists have a world-view which places God-the-creator at the base of all explanations regarding the nature of reality. Remove Him and much that was explained is no longer explained. Asking an "evolutionist" to explain the natural existence of the eye - a mechanism of considerable intricacy and sophistication - is perfectly reasonable in this context, particularly for someone who doesn't understand natural selection at all. Which most people, theist or atheist, don't understand. Even theists who understand natural selection, people like Behe or Dembski, can find evolutionary explanation unconvincing when they change the question from "How did the eye become so nearly perfect?" to something like "How did eyes get started, anyhow?". To be fair, they find the explanations unconvincing because the standard explanations really are unconvincing and they are unwilling to accept a promissory note that better explanations will be forthcoming in time. There is some truth to the claim that even atheists currently take some things "on faith". The naturalistic origin of life, for example.
7DSimon
Hold on, the naturalistic origin of life is pretty plausible based on current understanding (Miller-Urey showed amino acid development would be very plausible, and from there there are a number of AIUI chemically sound models for those building blocks naturally forming self-replicating organisms or pseudo-organisms). Are you genuinely arguing that its probability is so low that it would be less productive to investigate naturalistic abiogenesis mechanisms than it would be to look for new hypotheses? Or, alternately, do you have a more specific idea of what minimum level of probability it takes for a hypothesis to be "plausible" rather than "taken on faith"?
0Perplexed
And that claim by you is based on ... what exactly? Experiments you have performed? Books you have read explaining the theory to your satisfaction with no obvious hand waving? Books like the ones we all have read describing Darwin's theory of evolution through natural selection? Or maybe you have encountered a section in a library filled with technical material beyond your comprehension, but which you are pretty sure you could comprehend with enough effort? For me, something in this category would be stereo amplifiers - I've seen the books so I know there is nothing supernatural involved, though I can't explain it myself. From what you write below, I'm guessing your background puts you at roughly this level regarding abiogenesis. Except, the difference is that there is no library section filled with technical material explaining how life originated from non-life. So, I think you are going on faith. No there aren't. There is not a single plausible theory in existence right now claiming that life originates from amino acids arising from a Miller-Urey type of process. There are no chemically sound models for creating life from Miller-Urey building blocks. There are some models which have life starting with RNA, and some which have life starting with lipids, or iron-sulfide minerals or even (pace Tim) starting with clay. But you didn't mention those more recent and plausible theories. Instead you went on faith. No. Not at all. I don't have a clue as to what it would even mean to look for, let alone investigate a non-naturalistic hypothesis. What I am saying is this: Suppose I have before me a theist who claims that a Deity must have been the cause of the Big Bang. "Something from nothing" and all that. Suppose further that my own version of atheism is so completely non-evangelical and my knowledge of cosmology so weak that I say to him, "Could be! I don't believe that a Deity was involved, but I don't have any evidence to rule it out." So that is the supposition.
6simplicio
I think what's happening here is that the theist's preexisting belief about a creator god is causing them to privilege the hypothesis of divine RNA-creation. The trouble is, you seem to be privileging it too. The way you've set up the scenario makes it seem like there are two hypotheses: (1) goddidit, (2) some unknown natural process. But (2) is actually a set of zillions of potential processes, many of which have a far better prior than Yahweh and the Thousand Claims of Scripture, even if we can't actually choose one for sure right now. Taken together, all their probability mass dwarfs that of the goddidit hypothesis. You don't have to know all the answers to say "you're (almost certainly) wrong."
3jacob_cannell
I don't see any reason why (2) - unknown natural process - gets to benefit from being a "set of zillions of potential processes, many of which have a far better prior" while (1) - goddidit - does not. If you want to sum the probability of a hypothesis by performing some weighted sum over the set of zillions of its neighbors in hypothesis space, that's fine. But if that is your criterion, you need to apply it equally to the other set of hypotheses you are considering - instead of considering only one specific example. Bostrom's simulation argument gives us one potential generator of 'goddidits', and a likely high prior for superintelligent aliens gives us another. Either of those generators could spawn zillions of potential processes which have far better priors than Yahweh, but could look similar. None of this leads to any specific conclusion - I'm just pointing out an unfairness in your methodology.
0simplicio
You're quite right. When I said this, I was thinking of "goddidit" as a set of very specific claims from a single religious tradition, which I should've stated.
0jacob_cannell
Mmm, actually you did state it as a fairly specific claim. I'm just saying one can't fairly compare highly specific complex hypotheses against wide general sweeps through hypothesis-space. This is itself a good argument against the specific "Yahweh did it", but not against the more general "goddidit" which you originally were referring to. You're right - but there is another side to this coin. An atheist has a top-level belief (or its negation) which sends down cascading priors and privileges naturalistic hypotheses. So far this has worked splendidly well across the landscape. But there is no guarantee this will work everywhere forever, and it's at least possible that we may eventually flip or find an exception to the top-level prior - for example, we may eventually find that pretty much everything has a naturalistic explanation except the origin of life, which turns out to have been seeded by alien superintelligence (à la Francis Crick).
0simplicio
I entirely agree. While I don't know of any good reasons to think the origin of life was not a happy accident, it is not inconceivable a priori (simulations, seeding etc.). When I describe myself as an atheist (which I try not to do), I really mean that (1) all the anthropomorphic creation myths are really laughable, (2) there's not much positive evidence for less laughable creators, and (3) even if you showed me evidence for a creator, I would be inclined toward what I will call meta-naturalism - i.e., still wanting to know how the hell the creator came to be. Basically, I doubt the existence of gods that are totally ontologically distinct from creatures.
-1[anonymous]
Bostrom's simulation argument does NOT give us a generator of "goddidits" regarding the origin of life and the universe, because implicit in the question "How did life originate?" is a desire to know the ultimate root (if there is one), and us being in a simulation just gives us some more living beings (the simulators "above") to ask our questions about. Where did life in the universe "one level above us" come from? Where did our simulator/parent universe originate? There is nothing unfair in dismissing "A MIRACLE!" in comparison to the set of plausible naturalistic processes that could explain a given phenomenon. And to second SarahC, it's somewhat incoherent to talk about non-naturalistic processes in the first place. You need to be very clear as to what you're suggesting when you suggest "god did it". But, no one here is suggesting that, so I'll stop tangenting into arguing against theists that don't seem to be present.
1Perplexed
Well, I certainly don't have to know all the answers in order to think that. But my brand of atheism tells me that I ought to have at least some of the answers before saying that. Different strokes for different folks.
0DSimon
Thanks for catching me in this error. I was very vaguely familiar with those theories, but not enough to realize that they require source materials not available from Miller-Urey building blocks.
1Perplexed
The problem I see is not so much with the source materials or "building blocks". It is putting them together into something that reproduces itself. When Miller performed his experiment, we had no idea how life worked at the mechanical level. Even amino acids seemed somehow magic. So when Miller showed they are not magic, it seemed like a big deal. Now we know how life works mechanically. It is pretty complicated. It is difficult to imagine something much simpler that would still work. Putting the "building blocks" together in a way that works currently seems "uphill" thermodynamically and very much uphill in terms of information. IMHO, we are today farther from a solution than we thought we were back in 1953.
1DSimon
But, isn't the issue not only the amount of information required but also the amount of time and space that was available to work with? To pick one scientific paper which I think summarizes what you're talking about, this paper discusses "[...][t]he implausibility of the suggestion that complicated cycles could self-organize, and the importance of learning more about the potential of surfaces to help organize simpler cycles[...]". The chemistry discussed in that paper is well above my head, but I can still read it well enough to conclude that it seems to fallaciously arrive at probabilistic-sounding conclusions (i.e. "To postulate one fortuitously catalyzed reaction, perhaps catalyzed by a metal ion, might be reasonable, but to postulate a suite of them is to appeal to magic.") without actually doing any probability calculations. It's not enough to point out that the processes required to bootstrap a citric acid cycle are unlikely; how unlikely are they compared to the number of opportunities? Am I missing something important? The above is my current understanding of the situation which I recognize to be low-level, and I present it primarily as an invitation for correction and edification, only secondarily as a counterargument to your claims.
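The missing step the comment asks for can be made concrete with a toy calculation (the per-trial probability and trial count below are invented numbers, not chemical estimates): an event that is vanishingly unlikely in any single micro-environment can still be expected given enough independent opportunities, because P(at least one success) = 1 − (1 − p)^N.

```python
import math

# Toy numbers, purely illustrative - not real chemistry:
# p is the chance of the required reaction suite arising in one
# micro-environment, N the number of independent opportunities
# (many sites, over many millions of years).
p = 1e-18
N = 1e21

# For tiny p, 1 - (1 - p)**N is numerically fragile, so use the
# standard approximation 1 - exp(-p*N), which is accurate here.
p_at_least_one = 1 - math.exp(-p * N)
print(p_at_least_one)  # effectively 1.0, since p*N = 1000
```

The point of the sketch is only that "unlikely per trial" and "unlikely overall" are different claims, and an argument from implausibility needs an estimate of N as well as p.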
6Perplexed
It is important to realize that Orgel is a leader of one faction (I will resist the temptation to write "sect") and he is critiquing the ideas of a different faction. Since I happen to subscribe to the ideas of the second faction, I may not be perfectly fair to Orgel here. Orgel does not calculate probabilities in part because the ideas he is critiquing are not specific enough to permit such a calculation. Furthermore, and this is something you would need some background to appreciate, the issue here isn't a question of a fluke coming together somewhere here on earth of the right ingredients. It is more a matter of a fluke coming together of laws of chemistry. Orgel is saying that he doubts that the cycle idea would work anywhere in this universe - it would take a suspiciously fine-tuned universe to let all those reactions work together like that. It is a reasonable argument - particularly coming from someone whose chemical intuition is as good as Orgel's. I think Orgel is pretty much right. The reductive citric acid cycle is a cute idea as the core of a metabolism-first theory, but it is probably too big and complicated a cycle to be realistic as the first cycle. Personally, I think that something simpler, using CO or HCN as the carbon source, has a better chance of success. But until we come up with something specific and testable, the "metabolism first" faction maybe deserves Orgel's scorn. The annoying thing is that our best ideas are untestable because they require enormous pressures and unsafe ingredients to test them. Damned frustrating when you want to criticize the other side for producing untestable theories. Orgel was fair to the extent that he also provided a pretty good critique of his own faction's ideas at about the same time. But it is possible that Sutherland's new ideas on RNA synthesis may revive the RNA-first viewpoint. If you really dig watching abiogenesis research, as I do, it is an exciting time to be alive. Lots of ideas, something wrong w...
0DSimon
That's about what I would say in the same situation, though I might go on to say that I "prefer to believe" in that hypothesis because its probability of truth seems high enough, although it is not as probable as more established theories such as common descent. Let's taboo "faith" from here out, because otherwise I think we're likely to fall into a definitional argument. My next question is: what actions do you feel are justified by the probability of the naturalistic abiogenesis hypothesis, and why isn't the flat statement "Life on Earth came about naturally" part of that set?
1Perplexed
Ok by me. Actions? I don't need no steenkin' hypothesis to justify actions... [Sorry, just watched the movie] To be honest, I don't see that it makes much difference to me whether life on earth arose spontaneously, or by directed panspermia, or as a once-in-a-multiverse fluke, or by the direction of some Omega running a sim. My actions are the same in any case. It is a fascinating question, though, even if it matters so little. You are asking why I am not justified in coming out and saying it? But I am justified. I am justified in saying flatly that life on earth came about naturally. The statement is justified by my...
6DSimon
By "taboo" I meant this LW meme which requires that you not just replace taboo'ed words by alternate symbols, but with working definitions. So, I'm still curious about your last paragraph: how is that statement justified? Why do you, why should you, feel comfortable saying and believing it? Should that comfort level be greater than you'd have saying "Earth-life was created by directed panspermia"?
-1Perplexed
Well, recall that the tabooed word is one which I sought to apply both to the theist "goddidit" and to the atheist "unknown-natural-processes-didit". So what definition fits that word? So how about this: "I make that statement because no other possibility fits into my current worldview, and this one fits reasonably well". Or, if the taboo be removed, "I can't prove it to your satisfaction. Hell, I can't even prove it to my satisfaction. Yet I believe it, and I consider it a reasonable thing to believe. I guess I am simply taking it on faith."
3DSimon
Why not just have an amount of belief proportional to the amount of evidence? That is, wouldn't it be more rational to say "I think naturalistic self-organized abiogenesis is the most plausible solution known, and here's why, but I'm not so confident in it that I think other possible solutions (including some we haven't yet thought up) are implausible" and skip all this business about worldviews and proof? Proof isn't really all that applicable to inductive reasoning, and I'm very skeptical of the idea that "X fits with my worldview" is a good reason for any significant amount of confidence that X is true.
0Perplexed
Because, as a Bayesian, I realize that priors matter. Belief is produced by a combination of priors and evidence. Sure it is. Proof is applicable in both deductive and inductive reasoning. What you probably meant to say is that proof is not the only thing applicable to inductive reasoning. I think that you will find that most of the reasoning that takes place in a field like abiogenesis has more of a deductive flavor than an inductive one. There just is not that much evidence available to work with. Well, then how do you feel about the idea that "X does not fit with my worldview" is a good reason for a significant amount of skepticism that X is true? Seems to me that just a little bit ago you were finding a nice fit between X = "Miller-Urey-didit" and your worldview. A fit so nice that you were confident enough to set out to tell a total stranger about it.
4DSimon
I was thinking of "worldview" as a system of axioms against which claims are tested. For example, a religious worldview might axiomatically state that God exists and created the universe, and so any claim which violated that axiom can be discarded out of hand. I'm realizing now that that's not a useful definition; I was using it as shorthand for "beliefs that other people hold that aren't updatable, unlike of course my beliefs which are totally rational because mumble mumble and the third step is profit". Beliefs which cannot be updated aren't useful, but not all beliefs which might reasonably form a "worldview" are un-Bayesian. Maybe a better way to talk about worldviews is to think about beliefs which are highly depended upon; beliefs which, if they were updated, would also cause huge re-updates of lots of beliefs farther down the dependency graph. That would include both religious beliefs and the general belief in rationality, and include both un-updateable axiomatic beliefs as well as beliefs that are rationally resistant to update because a large collection of evidence already supports them. So, I withdraw what I said earlier. Meshing with a worldview can in fact be rational support for a hypothesis, provided the worldview itself consists of rationally supported beliefs. Okay, with that in mind: My claim that Miller-Urey is support for the hypothesis of life naturally occurring on Earth was based on the following beliefs:

1. The scientific research of others is good evidence even if I don't understand the research itself, particularly when it is highly cited.
2. The Miller-Urey experiment demonstrated that amino acids could plausibly form in early-Earth conditions.
3. Given sufficient opportunities, these amino acids could form a self-replicating pseudo-organism, from which evolution could be bootstrapped.

Based on what you've explained I have significantly reduced my confidence in #3. My initial confidence for #3 was too high; it was based on hearing...
1jacob_cannell
Yes. Beliefs have a hierarchy, and some are more top-level than others. Among the most top-level beliefs:

1. A vast superintelligence exists.
2. It has created/affected/influenced our history.

If you give high weight to 1, then 2 follows and is strengthened, and this naturally guides your search for explanations for mysteries. A top-level belief sends down a massive cascade of priors that can affect how you interpret everything else. If you hold the negation of 1 and/or 2 as top-level beliefs, then you look for natural explanations for everything. Arguably the negation of 'goddidit' as a top-level belief was a major boon to science because it tends to align with Occam's razor. But at the end of the day it's not inherently irrational to hold these top-level beliefs. Francis Crick, for instance, looked at the origin-of-life problem and decided an unnatural explanation involving a superintelligence (an alien) was actually a better fit. A worldview comes into play when one jumps to #3 from Miller-Urey because it fits with one's top-level priors. Our brain is built around hierarchical induction, so we always have top-level biases. This isn't really an inherent weakness, as there probably is no better (more efficient) way to do it. But it is still something to be aware of.
0DSimon
But, I don't think Crick was talking about a "vast superintelligence". In his paper, he talks about extraterrestrials sending out unmanned long-range spacecraft, not anything requiring what I think he or you would call superintelligence. In fact, he predicted that we would have that technology within "a few decades", though rocket science isn't among his many fields of expertise so I take that with a grain of salt. I don't think that's quite what happened to me, though; the issue was that it didn't fit my top-level priors. The solution wasn't to adjust my worldview belief but to apply it more rationally; I ran into an akrasia problem and concluded #3 because I hadn't examined my evidence well enough according to even my own standards.
1Perplexed
Yeah, it sure sounds like a reasonable principle, doesn't it? What could possibly be wrong with trusting something which gets mentioned so often? Well, as a skeptic who, by definition rejects arguments which get cited a lot, what do you think could be wrong with that maxim? Is it possibly something about the motivation of the people doing the citing?
2DSimon
The quality of the cites is important, not just the quantity. It's possible for experts to be utterly wrong, even in their own field of expertise, even when they are very confident in their claims and seem to have good reason to be. However, it seems to me that the probability of that decreases with how testable their results are, the amount and quality of expertise they have, and the degree to which other experts legitimately agree with them (i.e. not just nodding along, but substantiating the claim with their own knowledge). Since I'm not an expert in the given field, my ability to evaluate these things is limited and not entirely trustworthy. However, since I'm familiar with the most basic ideas of science and rationality, I ought to be able to differentiate pseudo-science from science pretty well, particularly if the pseudo-science is very irrational, or if the science is very sound. That I had a mistaken impression about the implications of Miller-Urey, wherein I confused pop-science with real science, decreases my confidence that I've been generally doing it right. However, I still think the principles I listed above make sense, and that my primary error was in failing to notice the assumption I was making re: smoke -> fire.
1Perplexed
Excellent summary, I think. I have just a few things to add. A claim that the (very real) process that Miller discovered was actually involved in the (also very real, but unknown) process by which life originated is pretty much the ultimate in untestable claims in science. In my own reading in this area, I quickly noticed that when the Miller experiment was cited in an origin-of-life chapter in a book that is really about something else, it was mentioned as if it were important science. But when it is mentioned in a book about the origin of life, then it is mentioned as intellectual history, almost in the way that chemistry books mention alchemy and phlogiston. In other words, you can trust people like Orgel with expertise in this area to give you a better picture of the real state-of-knowledge, than someone like Paul Davies, say, who may be an expert on the Big Bang, but also includes chapters on origin-of-life and origin-of-man because it helps to sell more books.
4JoshuaZ
The point of tabooing a word isn't to replace it with a mark. The point is that it forces one to expand on what one means by the word and removes connotations that might not be shared by all people in a discussion.
6[anonymous]
I thought that it didn't quite make sense to speak of non-naturalistic events at all. What would a non-naturalistic event look like? It's not so much that we take it "on faith." It's that, in a certain type of thinking (the kind of thinking where sentences represent claims about the world, claims must be backed by evidence, inferences must follow from premises) the very notion of a miracle is incoherent. I was trying to help a friend write a role-playing game -- yes, I'm a geek -- and build in some kind of rigorous quantitative model of magic. It's surprisingly frustrating. I invite anyone to try the exercise. You wind up with all kinds of tricky internal contradictions as soon as you start letting players break the laws of physics. I appreciate physics much more, having seen how incredibly irritating it is to try to give the appearance of structure and sense where there is none. Of course, in a different mode of thinking -- poetic thinking, or transcendent thinking -- you can talk about miracles. "The heavens declare the glory of God; the skies proclaim the work of his hands" is good strong poetry and (to me) has a ring of truth. But it isn't a proposition at all!
5simplicio
I think if non-naturalistic means anything, it can be clarified as follows. Assume we are running as a simulation on someone's computer (Nick Bostrom has argued that we probably are). Our sim defines the boundaries of the world we can directly interact with, even in principle. Call that "realm" the natural. A non-naturalistic event is an event in which the programmers interfere with the sim while it is in progress. Maybe after 9 gigayears they insert the first replicator, brute-force. We can suspect it's non-naturalistic because it (a) had no apparent physical causes or (b) was so unlikely as to be ridiculous (although anthropic arguments muddy that last consideration when we discuss the origin of life). Other senses of "supernatural" I find to be incoherent.
2Nornagest
At one point that was one of the standard examples given in support of the irreducible-complexity argument for creationism. That specific formulation has fallen out of favor recently, though, now that the evolutionary history of the eye has become more widely known (and largely because of its earlier use); the basic form of the argument is still quite common, but these days the more likely examples are bacterial flagella or similar bits of molecular machinery. I think pointing to complex organ systems might be popular among creationists because of Darwin's comments on the matter: in my admittedly narrow experience, creationists are likely to have read Origin of Species but quite unlikely to have read any more modern evolutionary biology. And a "disproof" of Darwin on his own terms would certainly be attractive to that mindset.
2phaedrus
http://www.youtube.com/watch?v=jW6yeMj3ORE
0knb
I laughed, but I wonder if he wasn't just deliberately trolling. He had that look on his face.

Nice. I particularly liked the dig at Rooseveltian machismo. Of course it's possible to go too far in the opposite direction too. Some related thoughts here.

http://www.overcomingbias.com/2007/05/doubting_thomas.html

I found the most condensed essence (also parody) of religious arguments for fatalism in Greg Egan's Permutation City:

Even though I know God makes no difference. And if God is the reason for everything, then God includes the urge to use the word God. So whenever I gain some strength, or comfort, or meaning, from that urge, then God is the source of that strength, that comfort, that meaning. And if God - while making no difference - helps me to accept what's going to happen to me, why should that make you sad?

Logically irrefutable, but utterly vacuous...

2Nisan
I would have agreed wholeheartedly with that paragraph two years ago.

It is easier for the moderates to accept critiques of the status quo if they have common terminal values with the critic.

What I'm talking about is win-win solutions, or Pareto improvements. If I show that everything you care about improves with my solution, you would have very little to object to.

David Brin's reaching out to conservatives highlights this. He tries to show how matters that conservatives care about improve under a Democratic administration. But it is very easy for such solution providers to face ugh fields in the people whom they want to ... (read more)

[-][anonymous]20

Content aside, this was one of the most beautifully written posts that LW has had in a while.

While I don't disagree that it can be valuable to say that there's something wrong with a theory, it should be noted that, at least for factual matters, if you can't provide an alternative explanation then your criticism isn't actually that strong. The probability of a hypothesis being the true explanation for an observation is the fraction its probability makes up of the total probability of that observation (summing over all competing hypotheses, weighted by their respective likelihoods). If you can't move in with another hypothesis to steal some probability mass from the first hypothesis (by providing likelihood values that better predict the observations), that first hypothesis is not going to take a hit.
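The normalization described above is just Bayes' theorem over a set of rival hypotheses, and can be sketched numerically (the hypothesis names, priors, and likelihoods below are made up purely for illustration):

```python
# Posterior for each hypothesis = prior * likelihood, renormalized over
# all competitors. All numbers here are invented for illustration only.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {"H1": 0.10, "H2": 0.40, "H3": 0.05}  # P(observation | H)

# Total probability of the observation, summed over competing hypotheses.
total = sum(priors[h] * likelihoods[h] for h in priors)

# Each hypothesis's share of that total is its posterior probability.
posteriors = {h: priors[h] * likelihoods[h] / total for h in priors}
print(posteriors)
```

Note how H2 overtakes H1 despite a lower prior, because it predicts the observation better; with no rival offering a better likelihood, the leading hypothesis would keep its share of probability mass, which is the comment's point.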

There is some value in detecting a problem, but if there are a huge number of problems to solve then it seems a better strategy to concentrate on a few of them. It's all about managing your energy. If you see problems everywhere and tell people about them, perhaps they get a little tired of listening to you, so combine detecting problems with offering solutions. A strategy for life that reflects your intelligence is more than shouting about everything that is wrong in our world.

[-]Louie-10

When read in context, Roosevelt's "Man in the Arena" speech explains why criticism without suggestion is useless and deserving of dismissal.

There is no more unhealthy being, no man less worthy of respect, than he who either really holds, or feigns to hold, an attitude of sneering disbelief toward all that is great and lofty, whether in achievement or in that noble effort which, even if it fails, comes second to achievement. A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an i

...

...criticism without suggestion is useless and deserving of dismissal.

  • I find a flaw in Andrew Wiles' proof of FLT. Should I mention it if I don't have any ideas for my own proof? After all, FLT is definitely true anyway.
  • Eliezer is finally coding his FAI, and I notice that there is a way the code might fail to maintain its goal system during self-modification. Should I tell him this if I don't personally know how to fix the problem?
  • My uncle, a crown attorney, is prosecuting a rapist based solely on eyewitness identification across racial lines, something I know to be problematic. I have no idea who the rapist is, and there are no other leads. Should I bring it up?
  • Professor Peach, after much pondering of what to do about the severely mentally handicapped, decides that since "nature is about survival of the fittest," euthanasia would be the best option. I argue he has made a mistake in ethical reasoning (the naturalistic fallacy), but I have no idea what should be done about the institutionalized mentally handicapped either. Should I shut up?

There is no such thing, really, as criticism without suggestion. Sometimes the suggestion is just "Woah, something's very wrong here!" That's usually OK.

3Louie
These are great suggestions. Thank you. I think I just changed my mind. My model didn't account for someone actually pointing out flaws using their own reasoning in novel situations. I don't think I've ever seen someone actually do this. In my experience, criticism in the wild is the art of finding and repeating another thinker's reasoning to attack a clearly wrong idea again, without adding anything new to human thought or attempting to do anything tangible to improve things. The reason I dismiss critics like this is that they are engaging in an enjoyable, negative-sum activity by sitting around and sniping at people for "being wrong," while not engaging in the less enjoyable, positive-sum activity of actually trying to do something better. People who actually do things understand this, which I think is what Roosevelt was getting at in pointing out that it is unhelpful to mindlessly repeat the inadequacies of the best functioning plans without attempting to invent and/or implement alternatives.
0simplicio
Yup, there is definitely that aspect to things, alas. Though I would submit that even such unoriginal criticism may be justified, given an important rhetorical objective.
9[anonymous]
Thanks for the complete quote. But (just as a site-hygiene thing) I'm going to identify this post as name-calling. I don't do contempt, and I am trying my level best not to cower. And you will not see much cynicism on this site.
6Perplexed
Surely you mean that Roosevelt's speech suggests how this kind of criticism might be improved.
1Jonathan_Graehl
It may be worth knowing how to insult your critics along these lines (in case you have to persuade a bunch of simpletons to hate or ignore them), but that was rather a lot of words to say "many of my critics have no practical experience and so have useless, untested beliefs, which really annoys me." If anything I'm being too charitable in my reading; he didn't bother to explain why a critic with little personal experience of attempting what he critiques is unfit.
7Louie
Lack of suggestions signals a lack of engagement with the implementation of ideas in the space a critic is discussing, which in turn signals a lack of correct understanding. People who only think about ideas but do not try to carry out any plans related to them lack the knowledge required to actually understand problems, so their criticisms are systematically over-simplified, brittle, and worthy of, if not outright dismissal, at least severe discounting. Also, criticism costs critics (almost) nothing and is enjoyable on a basic human level, so there's good reason to expect heavy criticism of all ideas... including correct ones.