I disagree -- are you referring to chapter 6 of BDA? In that chapter he spells out good ways of addressing the issue: the Bayesian analogs of classical hypothesis testing statistics. Most importantly, though Gelman doesn't use this language, is the idea of devising test statistics that would falsify your model and then using simulation to compare those test statistics on replicated data drawn from the posterior predictive distribution against the test statistics computed on the observed data. In my own view, this is a shining success of Bayesian methods over frequentist methods. Bayesian analysis might give you intractable posterior distributions, and the test statistics that matter for falsifiability will hardly ever have convenient forms like F-distributions, t-distributions, or chi-squared distributions, as is naively advertised in the classical approaches. But computational methods like Metropolis-Hastings/Gibbs sampling and other advances in MCMC still let you simulate the distribution of the test statistic even when its analytic form is impossibly complicated. I think this advantage of Bayesian methods deserves to be more widely understood. The other notions mentioned in chapter 6 of BDA are graphical data analysis and measures for model expansion / predictive accuracy.
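The check described above (what BDA calls a posterior predictive check) can be sketched in a few lines. Everything concrete here is an illustrative assumption, not something from the thread: a Poisson model for count data, a conjugate Gamma posterior standing in for MCMC draws, and the variance-to-mean ratio as the test statistic chosen to probe a failure mode (it should be near 1 for genuinely Poisson data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: counts we model as Poisson(lambda).
y = rng.poisson(lam=3.0, size=50)

# Toy posterior for lambda under a conjugate Gamma(a0, b0) prior:
# posterior is Gamma(a0 + sum(y), rate b0 + n). In a real analysis
# these draws would come from an MCMC sampler instead.
a0, b0 = 1.0, 1.0
post_draws = rng.gamma(shape=a0 + y.sum(),
                       scale=1.0 / (b0 + len(y)), size=4000)

# Test statistic designed to falsify the model: variance-to-mean ratio,
# which detects over/under-dispersion relative to the Poisson assumption.
def T(data):
    return data.var() / data.mean()

# Simulate replicated datasets from the posterior predictive distribution
# and compute the statistic on each replicate.
T_obs = T(y)
T_rep = np.array([T(rng.poisson(lam=lam, size=len(y)))
                  for lam in post_draws])

# Posterior predictive p-value: fraction of replicates at least as
# extreme as the observed statistic. Values near 0 or 1 cast doubt
# on the model family.
p = (T_rep >= T_obs).mean()
```

The point of the sketch is that `T_rep` is obtained purely by simulation from posterior draws, so nothing about `T` needs a tractable sampling distribution.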
In the paper, it seemed that the part Gelman refused to address was the way in which the addition of model checking / going back to the drawing board ruined the logical coherence of the more usual inductive Bayesian arguments. I agree that he copped out here and didn't attempt to address the underlying philosophical problem -- all he did was point out that each of the other major alternatives has basically the same coherence problem, including inductive Bayes.
Yes, that's the chapter.
For a Bayesian to relinquish his original hypothesis that the distribution belonged to some family, he needs both a way to notice when the data are far too unlikely to have been produced from any member of that family at all, and a way to choose a different family that will fit better. The likelihood of the data given the prior distribution over the family's parameters is straightforwardly computable (or approximated by calculating various test statistics, when the question you're asking is "is this family of models completely...
Andrew Gelman recently linked a new article entitled "Induction and Deduction in Bayesian Data Analysis." At his blog, he also described some of the comments made by reviewers and his rebuttal/discussion of those comments. It is interesting that he departs significantly from the common induction-based view of Bayesian approaches. As a practitioner myself, I am happiest about the discussion of model checking -- something one can definitely do in the Bayesian framework but which almost no one does. Model checking is to Bayesian data analysis as unit testing is to software engineering.
Added 03/11/12
Gelman has a new blog post today discussing another reaction to his paper and giving some additional details. Notably: