How can we trust evidence-based practice if it might be overturned tomorrow?
How can we trust non-evidence-based practice if it might never have been well-founded in the first place?
I suspect the reason this article has been downvoted more than a few times is that it sounds like FUD. To fix this, focus on what the right way to do things is, rather than on concern about "thing X". It's not that concern is a wrong feeling to have, it's just that it's impossible to communicate it honestly this way, because of the prevalence of FUD tactics.
I didn't downvote, but the title did trigger Betteridge's law of headlines for me. That alone might have been enough to inspire a downvote, if I'd been in a less forgiving mood.
The gold standard is randomized trials. If you can't do a trial, make an argument for how your method is equivalent to a randomized trial under some plausible assumptions. Anything else is garbage.
If you oppose 'Evidence-based' policy, what the heck does that mean you support?
Non-evidence-based policy? Really?
Super-evidence-based policy? (That's some damn interesting marketing propaganda.)
I literally cannot wrap my head around what the first article wants us to base our policy on except "listen to what we say, and ignore any contrary evidence."
There are quotation marks around it for a reason. Rewrite it as "Are so-called 'Evidence-based' policies damaging policymaking?", and you'll be much closer to a proper interpretation of what he wrote, and then of course your response no longer applies.
It is an important topic, but the Institute of Economic Affairs landing page that you link to is pretty lame.
Emphasizing "Evidence" gives one a hefty shove towards evidence that is quick and easy to gather.
QUICK: The IEA say
A disregard for substitution effects.
but the actual problem is that substitution takes time. If you want to gather evidence about substitution effects you have to be patient. "Evidence-based policy making" is biased towards fast evidence, to accommodate the urgency of policy making. So of course substitution effects get underestimated.
EASY: Computing correlations is easy; tracing causality is hard. Worse, you can hardly hope to unravel the network of causal connections in a real-world problem without making some theoretical commitments. An emphasis on "Evidence" leaves you relying on the hope that correlation does imply causality, because you can get evidence for correlations. Causality? Not very practical. Then you get kicked in the teeth by Goodhart's Law.
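The correlation-without-causation trap is easy to demonstrate. Here's a minimal sketch (my own toy example, not from the IEA piece; all numbers invented): a hidden confounder z drives both x and y, so they correlate strongly even though neither causes the other.

```python
import random

random.seed(0)
n = 10_000

# z is the hidden confounder; x and y each depend on z plus independent noise.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    va = sum((ai - ma) ** 2 for ai in a) / len(a)
    vb = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (va * vb) ** 0.5

# Roughly 0.5: a strong correlation, with zero causation in either direction.
print(round(corr(x, y), 2))
```

Any "evidence-based" method that stops at the correlation would happily recommend intervening on x to move y, and Goodhart's Law would then deliver the kick in the teeth.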
The IEA say
Calculating the external costs of harmful activities.
which is true, but hardly the worst of the problems. Ideally one would estimate the benefits of an economic policy based on adding the consumer surplus and the producer surplus. But this is too hard. Instead one tots up the market prices of things. This leads to GDP, which is a notoriously crap measure of welfare. But if you insist on "evidence" you are going to end up throwing out theoretical considerations of consumer and producer surplus in favor of GDP.
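To make the surplus point concrete, here's a toy example (my numbers, not the IEA's): a linear demand curve with constant marginal cost. The GDP-style figure counts only money changing hands, while most of the welfare sits in the consumer-surplus triangle above it.

```python
# Linear demand P = 10 - Q, constant marginal cost 2, competitive pricing
# (price = marginal cost). All numbers are invented for illustration.
price_intercept = 10.0              # willingness to pay of the keenest buyer
marginal_cost = 2.0
quantity = price_intercept - marginal_cost           # market clears at Q = 8

# What shows up in GDP-style accounting: money actually spent.
market_value = marginal_cost * quantity              # 2 * 8 = 16

# The welfare gain to buyers: the triangle between the demand curve
# and the price line. (Producer surplus is zero here, since P = MC.)
consumer_surplus = 0.5 * quantity * (price_intercept - marginal_cost)  # 32

print(market_value, consumer_surplus)  # 16.0 32.0
```

Here two thirds of the welfare never appears in the market-price tally at all, which is the sense in which an "evidence" of GDP figures throws the surplus out.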
This submission is getting downvoted. You might want to blog about the topic and try again with a link to your blog post. It shouldn't be too hard to provide a substantial improvement on the IEA landing page.
I'm not sure what you're getting at in the last sentence there. It seems like you're saying that neither 1 nor 2 has to be a problem after all if you continuously update on all hypotheses anyway. In which case, you might want to emphasize that instead of leaving it as a dangling implication phrased negatively.
http://www.iea.org.uk/in-the-media/press-release/%E2%80%98evidence-based%E2%80%99-policies-are-damaging-uk-policymaking
Those in favour of evidence-based policies have tended to be dismissive, arguing that this is just a case of lack of evidence, rather than a problem with evidence-based policies per se.
https://twitter.com/bengoldacre/status/370159847985381376
I'm not fully convinced though. I've started reading Taking the medicine: a short history of medicine's beautiful idea and our difficulty swallowing it by Druin Burch, and this has plenty of examples of people saying something like "Of course, up till now medicine has been totally wrong, but I've got a new way of doing it right" - and then going on to make the same old mistakes. Maybe evidence based practice is the same, just a way of convincing ourselves that we've got it right this time.
As I see it, the trouble with evidence-based practice is that what counts as evidence-based today doesn't meet tomorrow's higher standards. This means
At the root of this is the insistence that evidence must meet some 'gold standard'. This seems too much the frequentist viewpoint; in the end we have to be Bayesians, taking all the evidence into account.
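For what "taking all evidence into account" means in practice, here's a minimal Bayesian-updating sketch (the numbers are illustrative, not from any real study): weak evidence nudges the posterior, strong evidence moves it a lot, and nothing gets discarded for failing a gold-standard threshold.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' rule, from a prior and the
    probability of seeing evidence E under H and under not-H."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Start agnostic about the hypothesis.
p = 0.5

# A weak observational study: the evidence is only twice as likely
# if the hypothesis is true. It still shifts the posterior a little.
p = bayes_update(p, 0.2, 0.1)

# A strong randomized trial shifts it a lot, building on the earlier update
# rather than replacing it.
p = bayes_update(p, 0.9, 0.1)

print(round(p, 3))  # 0.947
```

The frequentist gold-standard approach would have thrown the first study away entirely; the Bayesian one weighs it by exactly how weak it is.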