How about the all-time great, now better than ever:
This time it will be different

Apart from articles submitted for peer review (which one expects to be published), exactly how and why do academics need to censor themselves? I am not talking about politically sensitive ideas like feminism or race relations, but rather about the philosophy of science. Do you have reason to believe there are "closet Bayesians" in academia?

I think you make an excellent point here, but I also think you are being too harsh on (us) academics.

There is a problem here that you do not seem to recognize. If any meta-level approach is better, i.e., yields a more correct model of the universe than the current scientific method, then the scientific method will, over time, devour it and make it part of itself. This is because, in the end, the "better" alternative approach will at some point yield a perceptibly better prediction, no matter how small. It may not do so for QM; there it may yield only a different "interpretation". But somewhere along the line it will produce an indisputably better theory, one that could not have been discovered without it, existing theories will have to compete with it, and people will start using the new approach.

In essence: if it beats the scientific method at its own game, it will become the mainstream scientific method.

Benquo: Even infinitesimals are not equal to zero. You don't even need infinitesimals in differential calculus. Instead, you can think of dx and dy as just variables: you let them approach zero to see what happens in the limit, but you never set them equal to zero. I have personally always found infinitesimals a little disturbing, since one doesn't really need them anywhere.
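The limit view can be written out for a concrete case, say y = x² (my own illustration, not from the original comment):

```latex
\[
\frac{dy}{dx}
= \lim_{\Delta x \to 0} \frac{(x + \Delta x)^2 - x^2}{\Delta x}
= \lim_{\Delta x \to 0} \left( 2x + \Delta x \right)
= 2x .
\]
```

Δx is never set equal to zero; it is only taken arbitrarily small, and the leftover Δx term vanishes in the limit.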

I am a little puzzled by this; I don't know how this material is taught in the US, but in Finland, if my memory serves me correctly, we were taught in elementary school why this kind of "proof" is wrong. So only complete idiots would be fooled by this "logic".

I am sorry that I am too lazy to read this thoroughly, but to me the original problem seems a mere illusion and a strawman. A priori, the two experiments are different, but who cares? The experiment, with its stopping condition, yields a distribution of results only if you assume some a priori distribution over the patient population. If you change the stopping condition without changing that distribution, you change the experiment and you get a different distribution for the result. This has nothing to do with evidential impact, and frequentists, as far as I can tell, don't claim anything like that.
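The point that a different stopping condition gives a different distribution of results can be checked with a quick simulation. This is a minimal sketch; the particular stopping rule ("stop as soon as successes exceed failures") and all names are my own illustration, not anything from the original discussion.

```python
import random

random.seed(0)

def fixed_n_trial(p=0.5, n=12):
    # Fixed design: always observe exactly n patients.
    return sum(random.random() < p for _ in range(n)), n

def optional_stop_trial(p=0.5, n_max=12):
    # Optional stopping: stop the first time successes exceed failures.
    s = 0
    for i in range(1, n_max + 1):
        s += random.random() < p
        if s > i - s:
            return s, i
    return s, n_max

# Same underlying success rate p, different stopping rules:
# the sampling distribution of the observed proportion differs.
fixed = [s / n for s, n in (fixed_n_trial() for _ in range(20000))]
stopped = [s / n for s, n in (optional_stop_trial() for _ in range(20000))]

print(sum(fixed) / len(fixed))      # close to the true p = 0.5
print(sum(stopped) / len(stopped))  # pushed well above p by the stopping rule
```

The observed proportions under optional stopping are biased upward, exactly because the experiment (including its stopping condition) is a different experiment, which is the frequentist point the comment is making.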

I think this argument is flawed with respect to the more technology-oriented questions. Most people do not seriously claim to be able to solve AI problems. What most people who are slightly educated in the field (like myself; I did an undergraduate minor in AI, just very simple stuff) will do is suggest an approach they would try if they had to start working on the problem. Technical questions also usually yield to evidence very quickly whenever it matters, i.e., when someone would start burning money on an implementation. That is not to say that no time and resources could be saved by using the maxim outlined here.

OTOH, the part about economists is valid, since most people have very strong ideas (usually wrong ones) about what will work, e.g., as a policy. But then again, most people have no way of wasting (other people's) resources based on these faulty ideas.

No, wait...