Nowhere does it follow from seeing probability as a limiting frequency over an infinite number of trials (frequentism) that the mean of a distribution with unknown mean couldn't be restricted to a specific range.
In the particular case I gave, of course frequentists could produce an argument that the mean must be in the given range. But this could not be a statistical argument, it would have to be a deductive logical argument. And the only reason a deductive argument works here is that the posterior of the mean being in the given range is 1. If it were only slightly less than 1, 0.99 say, there would be no logical argument either. In that case, the frequentist could not account for the fact that we should be extremely confident the mean is in that range without implicitly employing Bayesian methods. Frequentist methods neglect an important part of our ordinary inductive practices.
Now look, I don't think frequentists are idiots. If they encountered the situation I describe in my toy example, they would of course conclude that the mean is in the interval [0.1, 1.0]. My point is that this is not a conclusion that follows from frequentist statistics. This doesn't mean frequentist methodology precludes this conclusion; it just does not deliver it. A frequentist who came to this conclusion would implicitly be employing Bayesian methods. In fact, I don't think there is any such creature as a pure frequentist (at least, not anymore). There are people who have a preference for frequentist methodology, but I doubt any of them would steadfastly refuse to assign probabilities to theoretical parameters in all contexts.
I expect most scientists and statisticians are pluralists, willing to apply whichever method is pragmatically suited to a particular problem. I'm in the same boat. I'm hardly a Bayesian zealot, and I'm not arguing that actual applied statisticians divide into perfect frequentists and perfect Bayesians. What I'm arguing against is your initial claim that no significant methodological distinction follows from conceiving of probabilities as epistemic vs. conceiving of them as relative frequencies. There are significant methodological distinctions, and these distinctions are acknowledged by virtually all practicing statisticians. Applying the different methodologies can lead to different conclusions in certain situations.
If your objection to LW is that Bayesianism shouldn't be regarded as the one true logic of induction, to the exclusion of all others, then I'm with you, brother. I don't agree with Eliezer's Bayesian exclusionism either. But this is not the objection you raised. You seemed to be claiming that the distinction between Bayesian and frequentist methods is somehow idiosyncratic to this community, and this is just wrong.
(Incidentally, I am sorry you find my example boring and stupid, but your quarrel is not with me. It is with Morris DeGroot, in whose textbook I first encountered the example. I mention this as a counter to the view you seem to be espousing, that all respectable statisticians agree with you and only "sloppy philosophers" disagree.)
In the particular case I gave, of course frequentists could produce an argument that the mean must be in the given range. But this could not be a statistical argument, it would have to be a deductive logical argument.
The frequentists do have an out here: conditional inference. Obviously, (v2+v1)/2 is sufficient for m, so they don't need any other information for their inference. But it might occur to them to condition on the ancillary statistic v2-v1. In repeated trials where v2-v1 = 0.9, the interval (v1,v2) always contains m.
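A quick simulation makes the conditional claim concrete. This is a sketch under my reading of the setup (DeGroot's example appears to involve two draws from a uniform distribution of width 1 centered on the unknown mean m; the specific true value of m below is illustrative): among trials where the ancillary statistic v2 - v1 comes out near 0.9, the interval (v1, v2) covers m every single time.

```python
import random

# Sketch of the setup as I understand it: two observations drawn from
# Uniform(m - 0.5, m + 0.5), where the mean m is unknown to the
# statistician. We condition on the ancillary statistic v2 - v1 being
# large (>= 0.9) and check how often (v1, v2) contains m.
random.seed(0)
m = 3.7  # illustrative true mean
trials = hits = 0
for _ in range(1_000_000):
    a = random.uniform(m - 0.5, m + 0.5)
    b = random.uniform(m - 0.5, m + 0.5)
    v1, v2 = min(a, b), max(a, b)
    if v2 - v1 >= 0.9:          # condition on the ancillary statistic
        trials += 1
        hits += v1 < m < v2     # does the interval cover m?
print(trials, hits)             # hits equals trials: 100% coverage
```

The algebra backs the simulation up: if v2 - v1 >= 0.9 and both points lie within 0.5 of m, then v1 <= m - 0.4 and v2 >= m + 0.4, so m is strictly inside (v1, v2).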
Edit: As pragmatist mentio...
I've had a bit of success getting people to understand Bayesianism at parties and such, and I'm posting this thought experiment that I came up with to see if it can be improved, or if an entirely different thought experiment would be grasped more intuitively in that context:
I originally came up with this idea to explain falsifiability, which is why I didn't go with, say, the example in the better article on Bayesianism (i.e. any number besides a 3 refutes the possibility that the trick die was picked) and having a hypothesis that explains too much contradictory data. So eventually I increase the number of sides the die has (like a hypothetical 50-sided die), the different types of dice in the jar (100-sided, 6-sided, trick die), and the distribution of dice in the jar (90% of the dice are 200-sided but a 3 is rolled, etc.). Again, I've been discussing this at parties where alcohol is flowing and cognition is impaired, yet people understand it, so I figure if it works there then it can be understood intuitively by many people.
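The jar-of-dice update can be sketched in a few lines. The priors and die types below are illustrative placeholders matching the variants mentioned above (equal shares of a 6-sided die, a 100-sided die, and a trick die that always shows 3), not the exact numbers anyone used at a party:

```python
from fractions import Fraction

# Illustrative jar: equal shares of each die type. P(roll 3 | die) is
# 1/sides for a fair die, and 1 for a trick die that always shows 3.
# Note: any roll other than 3 would give the trick die likelihood 0,
# refuting it outright -- that's the falsifiability angle.
priors = {"6-sided": Fraction(1, 3),
          "100-sided": Fraction(1, 3),
          "trick": Fraction(1, 3)}
likelihood_of_3 = {"6-sided": Fraction(1, 6),
                   "100-sided": Fraction(1, 100),
                   "trick": Fraction(1, 1)}

# Bayes' theorem: P(die | 3) = P(3 | die) * P(die) / P(3)
joint = {d: priors[d] * likelihood_of_3[d] for d in priors}
total = sum(joint.values())
posterior = {d: joint[d] / total for d in joint}
for d, p in posterior.items():
    print(d, p)   # trick die posterior: 300/353, i.e. about 0.85
```

One roll of a 3 shifts most of the probability mass onto the trick die; changing the priors (e.g. 90% 200-sided dice) or the number of sides just changes the entries in the two dictionaries.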