Comment author: JonahSinick 13 June 2013 03:54:02PM 2 points [-]

Finally, while I can see why Euler's reasoning may be representative of the sort of reasoning that physicists use, I would like to see more evidence that it is representative. If all you have is the advice of this chauffeur, that's perfectly alright and I will go do something else.

I don't have much more evidence, but I think that it's significant that:

  1. Physicists developed quantum field theory in the 1950s, and it still hasn't been made mathematically rigorous, despite the fact that, e.g., Richard Borcherds appears to have spent 15 years (!!) trying.

  2. The mathematicians I know who have studied quantum field theory have said that they don't understand how physicists came up with the methods they did.

These facts suggest that the physicists who invented the theory reasoned in a very different way from how mathematicians usually do.

Comment author: twiffy 18 June 2013 06:58:22PM 0 points [-]

As a tangent, I think it's relatively clear both how physicists tend to think differently from mathematicians, and how they came up with path-integration-like techniques in QFT. In both math and physics, researchers come up with an idea based on intuition and then verify it appropriately. In math the correct notion of verification is proof; in physics it's experimentation (with proof an acceptable second). This method of verification feeds back into how the researcher's intuition works. In particular, physicists' intuition is grounded in physical reasoning and (generally) a thoroughly imprecise understanding of math, so from this perspective, using integral-like techniques without any established mathematical underpinnings is intuitively completely plausible. Mathematicians would shy away from this almost immediately, as their intuition would hit the brick wall of "no theoretical foundation".

Comment author: twiffy 06 May 2013 05:15:46AM 4 points [-]

There is likely a broader-scoped discussion on this topic that I haven't read, so please point me to such a thread if my comment is addressed -- but it seems to me that there is a simpler resolution to this issue (as well as an obvious limitation to this way of thinking), namely that there's an almost immediate stage (in the context of highly-abstract hypotheticals) where probability assessment breaks down completely.

For example, there are uncountably many different parent universes we could have. There are even uncountably many possible laws of physics that could govern our universe. And it's literally impossible to have all these scenarios "possible" in the sense of a well-defined probability measure, simply because if you want an uncountable sum of nonnegative real numbers to add up to 1, only countably many terms can be nonzero.
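(For anyone who wants the countability claim spelled out, here is the standard argument; this sketch is my own addition, not part of the original comment.)

```latex
% Suppose $\sum_{i \in I} p_i = 1$ with $p_i \ge 0$ and $I$ uncountable.
% Decompose the set of indices carrying nonzero probability:
\{\, i \in I : p_i > 0 \,\} \;=\; \bigcup_{n=1}^{\infty} \left\{\, i \in I : p_i > \tfrac{1}{n} \,\right\}.
% Each set on the right contains fewer than $n$ indices, since $n$ or more
% terms each exceeding $1/n$ would already sum to more than $1$.
% A countable union of finite sets is countable, so at most countably
% many of the $p_i$ can be nonzero.
```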

This is highly related to the axiomatic problem of cause and effect, a famous example being the question "why is there something rather than nothing" -- you have to have an axiomatic foundation before you can make calculations, but the very act of adopting that foundation excludes a lot of very interesting material. In this case, if you want to make probabilistic predictions, you need a solid axiomatic framework to stipulate how the calculations are made.

Just like with the laws of physics, this framework should agree with empirically-derived probabilities, but just like physics there will be seemingly-well-formulated questions that the current laws cannot address. In cases like hobos who make claims to special powers, the framework may be ill-equipped to make a definitive prediction. More generally, it will have a scope that is limited of mathematical necessity, and many hypotheses about spirituality, religion, and other universes, where we would want to assign positive but marginal probabilities, will likely be completely outside its light cone.