I disagree very much; see the other comments about Bell's theorem.
On Google+, Matthew Leifer, a respected researcher in theoretical physics currently at University College London, replied as follows when asked for his conclusions about the paper:
"Well, I knew this paper was coming, so it is not a surprise. Basically, it means that if you believe that quantum states are epistemic then you have two options left:
1. Neo-Copenhagenism: claim that a deeper realist model was never needed to support an epistemic interpretation of the quantum state. The probabilities are just about measurement results, period.
2. The ontological states have to be more bizarre than imagined in current approaches. For example, you could have retrocausality or “relational” degrees of freedom (whatever that means). Note that one could also evade the theorem of this paper by claiming that quantum i.i.d. product states do not correspond to i.i.d. probability distributions in the ontological model. However, doing this does not evade a related theorem by Alberto Montina, which is based on a single system.
If neither of those options is to your taste, then you might as well become an Everettian or a Bohmian, since you are stuck with the state vector in your ontology in any case.
Overall, I would say that this result is not too surprising. I think that most people in the “psi-epistemic” camp already had the intuition that a psi-epistemic ontological model formulated in the usual way would not be possible. That is why most of us were already promoting other possibilities, e.g. Fuchs is in the neo-Copenhagen camp and Spekkens often mumbles things about relationalism. Personally, I am quite interested in the idea of retrocausal psi-epistemic hidden variable theories. It is at least a fairly clearly formulated problem to try and come up with one, whereas relationalism seems vague to me, at least as it is applied to quantum theory. If that doesn’t work out then I would probably end up being an Everettian. Despite the attraction of the Fuchsian program, realism has to win out in the end for me.”
I feel like if you understood this, you could have put it in your own words.
Did the theorem presented in this paper make any distinction between measurement with collapse and measurement without collapse? Would the proven theorem break down if collapse was how the world worked?
No, the paper makes no such distinction between measurement mechanisms. Instead, it is about the difference between the wavefunction corresponding uniquely to the underlying physical reality and corresponding to it only "on average."
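To spell that out, here is the standard way the distinction is usually formalized in the ontological-models framework (my paraphrase of the usual Harrigan-Spekkens style definitions, not notation taken from the paper itself). Preparing a quantum state |psi> is taken to sample a "real" physical state lambda from some distribution that depends on |psi>, and the question is whether two different quantum states can ever be compatible with the same lambda:

```latex
% Sketch in the standard ontological-models notation (an assumed framing, not the paper's):
% preparing |\psi\rangle samples an ontic state \lambda from a density \mu_\psi(\lambda).
\begin{align*}
\text{$\psi$-ontic:}\quad & \psi \neq \phi \;\Longrightarrow\;
    \mu_\psi(\lambda)\,\mu_\phi(\lambda) = 0 \ \text{for almost every } \lambda
    && \text{(each $\lambda$ fixes $\psi$ uniquely)}\\
\text{$\psi$-epistemic:}\quad & \exists\, \psi \neq \phi \ \text{with}\
    \int \min\!\bigl(\mu_\psi(\lambda), \mu_\phi(\lambda)\bigr)\, d\lambda > 0
    && \text{($\psi$ only constrains $\lambda$ statistically)}
\end{align*}
```

The paper's claim, on this reading, is that the second option is incompatible with the quantum predictions under its assumptions, independent of how one models collapse.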
From a recent paper that is getting non-trivial attention...
From my understanding, the result works by showing that, if a quantum state is determined only statistically by some true physical state of the universe, then it is possible to construct clever quantum measurements that assign nonzero probability to outcomes for which there is literally zero quantum amplitude, contradicting the Born rule. The assumptions required are very mild, and if this is confirmed in experiment it would give a lot of justification for a physicalist / realist interpretation of the Many Worlds point of view.
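To make the "zero amplitude" step concrete, here is a small sketch of my own (the simplest two-qubit case that circulates with this result, not code from the paper): for the pair of preparations |0> and |+>, there is an entangled four-outcome measurement on two copies in which each outcome has exactly zero amplitude on one of the four product preparations, so the Born rule forbids that outcome for that preparation.

```python
# Sketch: a two-qubit entangled basis in which each basis vector has zero
# amplitude on exactly one of the product preparations |0>|0>, |0>|+>, |+>|0>,
# |+>|+>. Illustrative only; not taken from the paper.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

kron = np.kron  # tensor product of single-qubit states

preparations = [kron(ket0, ket0), kron(ket0, plus),
                kron(plus, ket0), kron(plus, plus)]

basis = [(kron(ket0, ket1) + kron(ket1, ket0)) / np.sqrt(2),
         (kron(ket0, minus) + kron(ket1, plus)) / np.sqrt(2),
         (kron(plus, ket1) + kron(minus, ket0)) / np.sqrt(2),
         (kron(plus, minus) + kron(minus, plus)) / np.sqrt(2)]

# The basis is orthonormal, so it defines a valid four-outcome measurement.
gram = np.array([[np.dot(a, b) for b in basis] for a in basis])
assert np.allclose(gram, np.eye(4))

# Outcome i has zero Born-rule probability for preparation i.
for i, (xi, prep) in enumerate(zip(basis, preparations)):
    print(f"outcome {i}: amplitude on preparation {i} = {np.dot(xi, prep):+.3f}")
```

The contradiction, roughly, is that if the true physical state only determined |0> versus |+> statistically, then sometimes the physical states of the two independently prepared systems would be compatible with all four product preparations at once, yet whichever of the four outcomes occurred is one that quantum theory declares impossible for one of those preparations.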
On a related note, in one of David Deutsch's original arguments for why Many Worlds was straightforwardly obvious from quantum theory, he mentions Shor's quantum factoring algorithm. Essentially he asks any opponent of Many Worlds to give a real account, not just a parochial calculational account, of why the algorithm works when it is using exponentially more resources than could possibly be classically available to it. The way he put it was: "where was the number factored?"
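For readers who have not seen it, here is a purely classical sketch (toy numbers chosen by me, nothing from Deutsch's argument itself) of the reduction Shor's algorithm relies on: factoring N is reduced to finding the period r of a^x mod N, and the quantum computer's only job is that period-finding step, which it performs using interference over exponentially many values of x at once. That superposition is the resource Deutsch is pointing at.

```python
# Classical sketch of the reduction at the heart of Shor's algorithm:
# factoring N reduces to finding the period r of f(x) = a^x mod N.
# The quantum speedup comes entirely from finding r via the QFT applied to a
# superposition over exponentially many x; here we find r by brute force.
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found classically by brute force."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Given a coprime to N, try to split N using the period of a^x mod N."""
    assert gcd(a, N) == 1
    r = order(a, N)
    if r % 2 == 1:
        return None                      # odd period: pick another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                      # trivial square root: pick another a
    return gcd(y - 1, N), gcd(y + 1, N)  # both are nontrivial factors of N

# Toy example (numbers chosen for illustration): a = 7 has period 4 mod 15,
# and gcd(7**2 - 1, 15), gcd(7**2 + 1, 15) recover the factors 3 and 5.
print(shor_classical(15, 7))  # (3, 5)
```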
I was never convinced that ordinary quantum computation could really persuade someone of Many Worlds who did not already believe in it. The possible exception is bounded-error quantum computation, where after the computation one must accept that there are different worlds to find oneself in, namely some worlds in which the computation erred because of the algorithm itself (or else one must explain the measurement problem in some other way, as usual). But in light of the paper mentioned above, I think Deutsch's "where was the number factored" argument may deserve more credence.
Added: Scott Aaronson discusses the paper here (the comments are also interesting).