gwern comments on On accepting an argument if you have limited computational power. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
When confronted with the highly speculative claims so beloved by philosophers, string theorists, and certain AI apologists, my battle cry is "testable predictions!". If one argues in favor of a model that predicts a tiny probability of a really big harm, they had better provide a testable justification of that model. In the case of Pascal's mugging, I have suggested a simple way to test whether the model should be taken seriously. Such a test would have to be constructed specifically for each individual model, of course. If all you can say is "I can't prove anything, but if I'm right, it'll be really bad", I yawn and move on.
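The "tiny probability of a really big harm" structure of the mugging can be made concrete with a toy expected-value calculation. This is a minimal sketch with entirely hypothetical numbers; it only illustrates why a naive calculation lets an arbitrarily large claimed harm dominate, which is the failure mode the testability demand is meant to screen out.

```python
# Toy Pascal's mugging arithmetic (all numbers hypothetical).
# A naive expected-value calculation lets an arbitrarily large claimed
# harm dominate the decision, no matter how small its probability.

def expected_loss(p_claim_true: float, claimed_harm: float) -> float:
    """Expected loss from refusing, if the mugger's model is granted
    probability p_claim_true of being true."""
    return p_claim_true * claimed_harm

cost_of_paying = 5.0    # the mugger asks for $5
p_true = 1e-12          # vanishingly small credence in the claim
claimed_harm = 1e15     # the "really big harm" the model predicts

# 1e-12 * 1e15 = 1000: the naive expected loss of refusing (1000)
# exceeds the cost of paying (5), so naive EV says pay the mugger.
assert expected_loss(p_true, claimed_harm) > cost_of_paying

# Demanding a testable prediction acts as a screen: an untestable claim
# earns no update, so the decision defaults to the prior action (refuse)
# regardless of how large the claimed harm is made.
```

The point of the sketch is that the product p × harm can be driven arbitrarily high by inflating the claimed harm, which is why a filter that operates before the expected-value step (such as requiring a test) does work that no choice of small p can do on its own.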
And in the least convenient worlds?
Which contingent fact X do you mean?
That you can demand testing in many real-world scenarios: it is a heuristic, and not one that is always usable.
Or do you have a principled decision theory in mind, one where testing is a key modification to the expected-value equations and which defuses the mugging?
As a natural scientist, I would refuse to accept untestable models. Feel free to point out where this fails in any scenario that matters.
<insert standard anti-logical positivism argument like 'how do you test unreproducible events like "human history"?'>
How do you determine if the model is testable? What if there is in principle a test, but it has unacceptable consequences in at least one reasonably probable model?
For the particular scenario described in Pascal's mugging, I provided a reasonable way to test it. If the mugger wants to dicker about the ways of testing it, I might decide to listen. It is up to the mugger to provide a satisfactory test. Hand-waving and threats are not tests. You are saying that there are models where testing is infeasible or too dangerous to try. Name one.
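The demand that the mugger supply a satisfactory test can be read as ordinary Bayesian updating: a model that survives a discriminating test gains credence, one that fails loses it, and a claim that offers no test yields no update at all. A minimal sketch with made-up numbers:

```python
# Bayes' rule for a binary "mugger's model true / false" hypothesis,
# updated on the pass/fail outcome of a test. All numbers hypothetical.

def posterior(prior: float, p_pass_if_true: float,
              p_pass_if_false: float, passed: bool) -> float:
    """Posterior probability that the model is true after the test."""
    if passed:
        num = prior * p_pass_if_true
        den = num + (1 - prior) * p_pass_if_false
    else:
        num = prior * (1 - p_pass_if_true)
        den = num + (1 - prior) * (1 - p_pass_if_false)
    return num / den

prior = 0.01  # initial credence in the mugger's model
# A discriminating test: likely to pass if the model is true,
# unlikely to pass otherwise.
p_pass_true, p_pass_false = 0.95, 0.05

after_fail = posterior(prior, p_pass_true, p_pass_false, passed=False)
after_pass = posterior(prior, p_pass_true, p_pass_false, passed=True)

# A failed test drives credence down; a passed test raises it.
assert after_fail < prior < after_pass

# "Hand-waving and threats are not tests": with no test there is no
# likelihood ratio, no update occurs, and the prior (here, 1%) stands.
```

On this reading, "name a test" is a request for an observation whose likelihood differs between the model being true and false; without one, the posterior equals the prior and the yawn is the correct response.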
That such models exist is trivial: take model A and add a single difference B, where exercising the difference is bad. For instance:
Model A: the universe is a simulation.
Model B: the universe is a simulation with a bug that will crash the system, destroying the universe, if X, but that is otherwise identical to model A.
Models that would deserve to be raised to the level of our attention in the first place, however, will take more thought.
By all means, apply more thought. Until then, I'm happy to stick by my testability assertion.
A simple example might be if more of the worries around the LHC were a little better founded.