Perseverance, like everything, is good in moderation.
The funny thing is that the recent popularization of economics, all the Freakonomics books (Dan Ariely, Tyler Cowen, Tim Harford, Robert Frank, Steve Landsburg, Barry Nalebuff), is summed up by Steve Levitt when he said he likes solving little problems rather than failing to solve big ones. Thus, economists still don't understand business cycles, growth, inequality--but they are big on why prostitutes don't use condoms, why sumo wrestlers cheat in tournaments, or why it is optimal to peel bananas from the 'other' end. It's better than banging your head against the wall, but I don't think anyone spends the first two years in econ grad school hoping to solve these problems.
At some level, the Humean doubts about the illogic of induction have to give way, and you make assumptions you cannot justify. If you listen to talk radio, everyone has a really strong opinion, but it gets you thinking, setting up an argument to critique. We make decisions based on assumptions and theories, and these are all suspect, but I think without some decisiveness that could be called overconfidence, we would be catatonic.
btw: do they even have Goofus and Gallant anymore? I would think highlighting evil Goofus would be blaming the victim.
To the extent one can be induced to empathize, cooperating is optimal. The repeated game does this by having the players play again and again, and thus be able to realize gains from trade. You assert there's something hard-wired. I suppose there are experiments that could distinguish between the two models, i.e., rational self-interest in repeated games versus an intrinsic empathy function.
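The first model can be made concrete with a small simulation. This is a minimal sketch, not anything from the original argument: the payoff numbers and strategy names (tit-for-tat, always-defect) are standard illustrative choices, and the point is only that reciprocal cooperation outscores mutual defection in repeated play, with no intrinsic empathy assumed.

```python
# Iterated prisoner's dilemma: cooperation from self-interest alone.
# Payoffs are the conventional illustrative values (T=5, R=3, P=1, S=0).
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Run a repeated game; return total payoffs for each side."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each side sees the other's history
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Two reciprocators earn 300 each over 100 rounds; two defectors earn 100 each.
coop_scores = play(tit_for_tat, tit_for_tat)
defect_scores = play(always_defect, always_defect)
```

In a one-shot game defection dominates, but once the game repeats, a purely self-interested reciprocator does better by cooperating, which is exactly the alternative to the hard-wired-empathy story.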
I think it's not binary. The 'humane' approach focuses on the endgame, the 'moralist' on a tactic he thinks will get there. Strategy not mated to tactics is futile, so I think the Taoist in this example could be faulted for being naive: can't we all just get along? Clearly, some ethics and habits are better suited to humaneness than others. But the problem with the tacticians, the moralists, is that they are often wrong: their practices won't reach the objective very well (think about the poor Raelians). Indeed, any sufficiently comprehensive set of tactics will be wrong, and any right set of tactics will be incomplete.
Thus, I think it's wise to think about a good endgame, what gives your life meaning, satisfaction, and pleasure, but just as important is to think about specific rules that maximize these objectives. You will certainly not pick the optimum, out of ignorance and the difficulty of the problem, and so you will always be 'wrong', especially with hindsight, on both target and tactics. But that should not lead to nihilism; rather, apply your intelligence: learn throughout your life. By the time we die, we still won't have it exactly right, but good enough for this self-aware subsystem.