
I think that the God reference and foul language in Cure_of_Ars' comment have deflected attention from an important criticism of this article, one I for one would like to hear your responses to. So, for those who downvoted and saved their own criticism for his comment: I would like to hear your thoughts and have this explained to me, because it is not obvious to me that his first paragraph has no point.

To clarify, let me restate my open questions on the subject, which his comment partly raised.

The original formulation of this principle is: "Entities should not be multiplied without necessity." This formulation is not entirely clear to me; what I take from it is that one should not add complexity to a theory unless it is necessary.

A clear example where Occam's razor may be used as intended is as follows: assume I have a program that takes a single number as input and returns a number. If we observe f(1) = 2, f(4) = 16, and f(10) = 1024, we might be tempted to say f(x) = 2^x. But this is not the only option; we could have f(x) = {x > 0 -> 2^x, x <= 0 -> 10239999999}, or even a pure lookup table f(x) = {1 -> 2, 4 -> 16, 10 -> 1024, [ANY OTHER INPUT TO OUTPUT]}.

Since these hypotheses all make the same predictions in every experimental test so far, it follows that we should choose the simplest one, 2^x. (And if more experimental tests were to follow, we could have chosen in advance similarly complex alternatives that would have predicted those observations just as well as 2^x. In fact, we can only ever run a finite number of experimental tests, so there is an infinite number of hypotheses that correctly predict all of them while carrying an additional, useless layer of complexity.)
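To make this concrete, here is a minimal sketch of the three hypotheses above (the code is my own illustration; the constant 10239999999 is taken straight from the example):

```python
# A minimal sketch of three hypotheses that agree on every observation
# made so far but diverge everywhere else.
observations = {1: 2, 4: 16, 10: 1024}  # f(1) = 2, f(4) = 16, f(10) = 1024

def h_simple(x):
    """The simplest hypothesis: f(x) = 2^x."""
    return 2 ** x

def h_piecewise(x):
    """Agrees with 2^x on positive inputs, invents a constant elsewhere."""
    return 2 ** x if x > 0 else 10239999999

def h_lookup(x, default=0):
    """A pure lookup table: fits the data exactly, predicts nothing new."""
    return observations.get(x, default)

# All three fit every observation gathered so far...
for x, y in observations.items():
    assert h_simple(x) == h_piecewise(x) == h_lookup(x) == y

# ...but they disagree the moment we probe an unobserved input.
print(h_simple(-3), h_piecewise(-3), h_lookup(-3))  # 0.125 10239999999 0
```

On the evidence gathered so far the three are indistinguishable; simplicity is the only thing left to break the tie.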

What exactly "entities" means, or how "multiplying" them is defined, I can only guess, based on my understanding of these concepts and on popular interpretations of the principle such as: "Occam's razor says that when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions."

In any case, I sense (after reading multiple sources that emphasize this) that there is a point here that the article does not properly address and that these replies skip over: Occam's razor is not meant to be a way of choosing between hypotheses that make different predictions.

In the article, the question of how to weigh simplicity against precision arises: if we have two theories, T1 and T2, with different precision (say T1 has a 90% success rate where T2 has 82%) and different complexity (T1 being the more complex), how can we decide between the two?

From my understanding, and this is where I would like to hear your thoughts, this question cannot be solved by Occam's razor. That said, I think it is even more interesting and important than the one Occam's razor attempts to solve. To answer it, Occam's razor appears to have been generalized into something like: "The explanation requiring the fewest assumptions is most likely to be correct." These generalizations are even given different names (the law of parsimony, or the rule of simplicity) to stress that they are not the same as Occam's razor.
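For what it's worth, statisticians do operationalize this accuracy-versus-complexity trade-off with penalized scores such as the Bayesian Information Criterion. The sketch below is my own illustration with made-up numbers, not something from the article, but it shows one concrete answer to "how much accuracy is one extra assumption worth?":

```python
import math

def bic(log_likelihood, n_params, n_observations):
    """Bayesian Information Criterion: lower is better.
    Fit is rewarded; every extra parameter pays a log(n) penalty."""
    return n_params * math.log(n_observations) - 2 * log_likelihood

n = 100  # illustrative number of observations
t1 = bic(log_likelihood=-40.0, n_params=8, n_observations=n)  # precise, complex
t2 = bic(log_likelihood=-55.0, n_params=2, n_observations=n)  # cruder, simpler
print(f"BIC(T1) = {t1:.1f}, BIC(T2) = {t2:.1f}")  # ~116.8 vs ~119.2: T1 wins here
```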

But that is neither the original purpose of the principle, nor is it a proven fact. The following quote stresses this issue: "The principle of simplicity works as a heuristic rule of thumb, but some people quote it as if it were an axiom of physics, which it is not. [...] The law of parsimony is no substitute for insight, logic and the scientific method. It should never be relied upon to make or defend a conclusion. As arbiters of correctness, only logical consistency and empirical evidence are absolute."

A usage of this principle that does appeal to me is getting rid of hypothetical absurdities, especially ones that cannot be tested using the scientific method. This has been done in the field of physics, and this quote illustrates my point:

"In physics we use the razor to shave away metaphysical concepts. [...] The principle has also been used to justify uncertainty in quantum mechanics.  Heisenberg deduced his uncertainty principle from the quantum nature of light and the effect of measurement.

Stephen Hawking writes in A Brief History of Time:
"We could still imagine that there is a set of laws that determines events completely for some supernatural being, who could observe the present state of the universe without disturbing it.  However, such models of the universe are not of much interest to us mortals.  It seems better to employ the principle known as Occam's razor and cut out all the features of the theory that cannot be observed.""

My point here is not to disagree with the rule of simplicity (and certainly not with the original razor) but to stress why it is somewhat philosophical (after all, it was formulated in the 14th century, well before the scientific method), or at least that this law has not been proven right for all cases; there are strong cases in history that support it, but that is not the same as proof.

I think this law is a very good heuristic, especially when we try to locate our belief in belief-space. But I believe the razor is wielded with less care than it should be; please let me know if and why you disagree.

Additionally, I do not think I have gained a practical tool for weighing precision against simplicity. Solomonoff induction seems practically impossible to use in real life, especially when evaluating theories outside the laboratory (in our actual lives!). I understand this is a very hard problem, but Rationality's purpose is all about using our brains, with all their weaknesses and biases, to the best of our abilities, in order to have the maximum chance of reaching Truth. That calls for practical tools, however imperfect they may be (hopefully as little imperfect as possible), to deal with these kinds of problems in our private lives. I do not think Solomonoff induction is such a tool, and I do think we could use some heuristic to help us in this task.
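As a crude example of the kind of heuristic I mean, description length can be approximated by compressed size. This is entirely my own toy illustration, and at best a distant cousin of Solomonoff's uncomputable ideal:

```python
import zlib

def description_length(hypothesis: str) -> int:
    """Compressed byte length as a cheap, imperfect proxy for complexity."""
    return len(zlib.compress(hypothesis.encode("utf-8")))

h1 = "f(x) = 2^x"
h2 = "f(x) = 2^x if x > 0 else 10239999999"
print(description_length(h1), description_length(h2))  # h1 scores lower (simpler)
```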

To dudeicus: one cannot argue for a theory by giving an example of it and then conclude that "if it were tested with proper research, it would be proven." That is not the scientific method at work. What I do take from your comment is only that this has not been formally proven, which ties back to the philosophy discussion above.

I might be 12 years late, but I am only now reading Rationality and taking the time to properly address these issues.

What I really found to be missing is the reason why availability is, in most cases, a bias; why generalizing from a limited set of personal experiences and memories is statistically wrong.

That reason, of course, lies in the fact that the available examples we rely on when this heuristic comes into play do not form a valid statistical sample: neither in sample size (we would need dozens, if not hundreds, of examples to reach a proper confidence level and interval, where we usually rely on only a few [<10]), nor in sampling frame (our observations are highly subjective and do not cover all sub-populations equally; in fact, they most likely cover only a very specific subset of the population, the one revolving around our neighborhood and social group).
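To put a number on the sample-size point, here is a small sketch (my own) of how the normal-approximation confidence interval for a proportion widens at small n; the approximation itself is crude below roughly n = 30, which only strengthens the point:

```python
import math

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    proportion p observed over n samples; shrinks like 1/sqrt(n)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (5, 10, 100, 1000):
    print(f"n = {n:4d}: estimate pinned down only to ±{ci_half_width(0.5, n):.2f}")
# n =    5: ±0.44  -- a handful of anecdotes tells us almost nothing
# n = 1000: ±0.03  -- hundreds of samples is another matter entirely
```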

Additionally, I found an action plan for fighting this bias to be missing (both here and in "We Change Our Minds Less Often Than We Think"). My personal advice is to channel our motivation against it as follows: notice whenever we form a belief and ask ourselves, am I generalizing from a limited set of examples that come to mind from memories and past experiences? Am I falling prey to the availability heuristic?

When you catch yourself, as I now do daily, rate how important the conclusion is. If it is important, avoid reaching it through this heuristic (choose deliberate, rational analysis instead). If not, you may reach it through this generalization, as long as you label the resulting belief as non-trustworthy.

I believe that labeling your beliefs with trust levels could be a very productive approach: when, in the future, you rely on a previous belief, you can bring the trust level you have in that belief into play and consider whether you may or may not trust it toward your current goal.
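A toy sketch of what such labeling could look like in code (entirely my own illustration; the field names and numbers are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str
    trust: float      # 0.0 = gut feeling, 1.0 = carefully verified
    formed_via: str   # e.g. "availability heuristic", "deliberate analysis"

beliefs = [
    Belief("dogs in my city are friendly", 0.2, "availability heuristic"),
    Belief("small samples give wide intervals", 0.9, "deliberate analysis"),
]

def usable(belief: Belief, required_trust: float) -> bool:
    """Rely on a belief only if its trust level clears the bar the
    current decision requires."""
    return belief.trust >= required_trust

print([b.claim for b in beliefs if usable(b, 0.5)])  # only the verified one
```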

I would love to hear from you guys about all of this. For more, you can read what I've written in my Psychology OneNote notebook, on the page about this very bias.