I'm cynical about cynicism. I don't believe that most cynicism is really about knowing better. When I see someone being cynical, my first thought is that they're trying to show off their sophistication and assert superiority over the naive. As opposed to, say, sharing their uncommon insight about not-widely-understood flaws in human nature.
There are two obvious exceptions to this rule. One is if the speaker has something serious and realistic to say about how to improve matters. Claiming that problems can be fixed will instantly lose you all your world-weary street cred and mark you as another starry-eyed idealistic fool. (Conversely, any "solution" that manages not to disrupt the general atmosphere of doom does not make me less skeptical: "Humans are evil creatures who slaughter and destroy, but eventually we'll die out from poisoning the environment, so it's all to the good, really.")
No, not every problem is solvable. But by and large, if someone achieves uncommon insight into darkness - if they know more than I do about human nature and its flaws - then it's not unreasonable to expect that they might have a suggestion or two to make about remedy, patching, or minor deflection. If, you know, the problem is one that they really would prefer solved, rather than gloom being milked for a feeling of superiority to the naive herd.
The other obvious exception is for science that has something to say about human nature. A testable hypothesis is a testable hypothesis and the thing to do with it is test it. Though here one must be very careful not to go beyond the letter of the experiment for the sake of signaling hard-headed realism:
Consider the hash that some people make of evolutionary psychology in trying to be cynical - assuming that humans have a subconscious motive to promote their inclusive genetic fitness. Consider the hash that some neuroscientists make of the results of their brain scans, supposing that if a brain potential is visible before the moment of reported decision, this proves the nonexistence of free will. It's not you who chooses, it's your brain!
The facts are one thing, but feeling cynical about those facts is another matter entirely. In some cases it can lead people to overrun the facts - to construct new, unproven, or even outright disproven glooms in the name of signaling realism. Behaviorism probably had this problem: signaling hard-headed realism about human nature was likely one of the reasons behaviorists asserted that we don't have minds.
I'm especially on guard against cynicism because it seems to be a standard corruption of rationality in particular. If many people are optimists, then true rationalists will occasionally have to say things that sound pessimistic by contrast. If people are trying to signal virtue through their beliefs, then a rationalist may have to advocate contrasting beliefs that don't signal virtue.
Which in turn means that rationalists, and especially apprentice rationalists watching other rationalists at work, are especially at-risk for absorbing cynicism as though it were a virtue in its own right - assuming that whosoever speaks of ulterior motives is probably a wise rationalist with uncommon insight; or believing that it is an entitled benefit of realism to feel superior to the naive herd that still has a shred of hope.
And this is a fearsome mistake indeed, because you can't propose ways to ameliorate problems and still come off as world-weary.
TV Tropes proposes a Sliding Scale of Idealism Versus Cynicism. It looks to me like Robin tends to focus his suspicions on that which might be signaling idealism, virtue, or righteousness; while I tend to focus my own skepticism on that which might signal cynicism, world-weary sophistication, or sage maturity.
Organisms obviously don't directly optimize their genetic fitness, and Deep Blue obviously doesn't directly optimize for winning at chess. If you want to predict their actions economically, however, finding something they seem to optimize works as a rough model. This is easy if you know the process that made them. It's in the nature of a rough model that you can poke holes in it by finding exceptions, but that doesn't make the model useless.
Tim might be making a stronger claim than this. If so, I probably don't agree with it.
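The point about rough models can be made concrete with a toy sketch (entirely hypothetical, not from the original discussion): we predict an agent's behavior by pretending it optimizes a simple objective, even though the actual mechanism is a set of hard-coded rules that never computes that objective. The model has holes, yet still predicts well.

```python
def agent_move(temp):
    """The actual agent: hard-coded thermostat rules.

    It never computes or optimizes anything; it just follows rules.
    """
    if temp < 18:
        return "heat"
    if temp > 26:
        return "cool"
    return "off"

def model_predict(temp, target=22):
    """Rough model: pretend the agent minimizes distance to a target of 22.

    The target and the 'minimization' are our modeling fiction, not
    anything inside the agent.
    """
    if temp < target:
        return "heat"
    if temp > target:
        return "cool"
    return "off"

# The rough model predicts the agent correctly at the extremes...
assert agent_move(10) == model_predict(10) == "heat"
assert agent_move(30) == model_predict(30) == "cool"
# ...but an exception exposes a hole: at 20 degrees the agent is "off"
# while the optimizer model says "heat".
assert agent_move(20) == "off" and model_predict(20) == "heat"
```

The exception at 20 degrees pokes a hole in the model without making it useless: for most inputs, "what does it seem to optimize?" remains a cheap and accurate predictor.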