The answer seems circular: because it works. In the experience of people using Occam's razor (e.g. scientists), MDL is more likely to lead to correct answers than any other formulation.
I don't see that that makes other formulations "not Occam's razor"; it just makes them less useful attempts at formalizing Occam's razor. If an alternative formalization were found to work better, it would not be MDL: would MDL then cease to be "Occam's razor"? Or would the new, better formalization "not be Occam's razor"? If the latter, by what metric, since the new one "works better"?
For the record, I certainly agree that "space complexity alone" is a poor metric. I just don't see that it should clearly be excluded entirely. I'm generally happy to exclude it on the grounds of parsimony, but this whole subthread was "How could MWI not be the most reasonable choice...?"
The MWI requires fewer rules than Copenhagen, and therefore its description is smaller, and therefore it is the strictly simpler theory.
Is there anything in particular that leads you to claim Minimum Description Length is the only legitimate claimant to the title "Occam's razor"? It was introduced much later, and the Wikipedia article calls it "a formulation of Occam's razor".
Certainly, William of Occam wasn't dealing in terms of information compression.
That would not be Occam's razor...
What particular gold-standard "Occam's razor" are you adhering to, then? It seems to fit well with "entities must not be multiplied beyond necessity" and "pluralities must never be posited without necessity".
Note that I'm not saying there is no gold-standard "Occam's razor" to which we should be adhering (in terms of denotation of the term or more generally); I'm just unaware of an interpretation that clearly lays out how "entities" or "assumptions" are counted, or how the complexity of a hypothesis is otherwise measured, which is clearly "the canonical Occam's razor" as opposed to having some other name. If there is one, by all means please make me aware!
There'd be no reason to expect it to torture people at less than the maximum rate its hardware was capable of.
But there's good reason to expect it not to torture people at greater than the maximum rate its hardware was capable of. So if you can bound that, there exist some positive values of belief that cannot be inflated into something meaningful by upping copies.
Your numbers are still wrong, I'm afraid - guessing you mean ~70.98%...
You can prefer that state, sure. But that doesn't mean that it is an accurate reflection of reality. The abstract idea of my daughter's existence beyond the light cone is comforting, and would make me happy. But the abstract idea of my daughter's existence in heaven is also comforting and would make me happy. I wish it were true that she existed. But I don't believe things just because they would be nice to believe.
This is what I meant when I said that thought experiments were a bad way to think about these things. You've confused values and epistemology as a result of the ludicrously abstract nature of this discussion and the emotionally charged thought experiment that was thrust upon me.
I am not saying, "You value her continued existence, therefore you should believe in it." I am rather saying that your values may extend to things you do not (and will not, ever) know about, and therefore it may be necessary to make estimations about likelihoods of things that you do not (and will not, ever) know about. In this case, the epistemological work is being done by an assumption of regularity and a non-privileging of your particular position in the physical laws of the universe, which make it seem more likely that there is not anything special about crossing your light cone as opposed to just moving somewhere else where she will happen to have no communication with you in the future.
Nope. The outcome is functionally the same to me either way. I can't tell the difference between whether she died on the verge of the cone, or if she made it out and lived forever, or if she made it out and died two days later. Therefore the difference is meaningless. People are only meaningful to me insofar as I can interact with them (directly or indirectly), otherwise they're just abstract ideas with no predictive power.
Values aren't things which have predictive power. I don't necessarily have to be able to verify it to prefer one state of the universe over another.
Then the obvious strategy is to start feeling lots of loyalty toward Easily Affected Country, and donate lots to organizations in Powerful Country that effect change in Easily Affected Country. This diminishes your political bonus but the extra leverage compensates. Bot-swa-na! Bot-swa-na!
I actually think the apple pie reason is an unusually good one. There's nothing wrong with cheering for things.
You're assuming that display of loyalty can radically increase your influence. My model was that your initial influence is determined situationally, and your disposition can decrease it more easily than increase it.
That said, let's run with your interpretation; Bot-swa-na! Bot-swa-na!
Why is it your messed-up country?
- Because its laws treat you well, and you want to support that system out of gratitude?
- Because you've lived there a while, and you're attached to things in it?
- Because you were born there, and... that matters for some reason?
- Because you have relatives from there, and ditto?
- Because you have relatives from elsewhere, and it sucked, so you cheer for the least-bad country?
- Because bald eagles look awesome and apple pie is delicious, so you have positive emotional associations to the corresponding countries?
Because states are still a powerful force for (or against) change in this world, you are limited in the number of them you can directly affect (determined largely by where you and relatives were born), and for political and psychological reasons that ability is diminished when you fail to display loyalty (of the appropriate sort, which varies by group) to those states.
Also, apple pie is delicious.
There's an intent behind Occam's razor. When Einstein improved on Newton's gravity, gravity itself didn't change. Rather, our understanding of gravity was improved by a better model. We could say though that Newton's model is not gravity because we have found instances where gravity does not behave the way Newton predicted.
Underlying Occam's razor is the simple idea that we should prefer simple ideas. Over time we have found ways to formalize this statement in ways that are universally applicable. These formalizations are getting closer and closer to what Occam's razor is.
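To make the "formalization" point concrete, here is a toy sketch of the MDL intuition. It is not a real MDL computation (true description length is relative to a universal code, not a general-purpose compressor); it just uses zlib's compressed size as a crude proxy, and the hypothesis strings are made up for illustration:

```python
import zlib

def description_length(hypothesis: str) -> int:
    """Crude proxy for description length: size in bytes after DEFLATE compression."""
    return len(zlib.compress(hypothesis.encode("utf-8")))

# Two invented descriptions of the same regularity; MDL-style reasoning
# prefers the one with the shorter description.
simple = "all ravens are black"
baroque = "all ravens are black, except each raven observed on a Tuesday, which is also black"

assert description_length(simple) < description_length(baroque)
```

The point is only that "prefer simple ideas" becomes checkable once "simple" is cashed out as a length under some fixed encoding; different encodings give different (better or worse) formalizations of the same underlying razor.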
I'll accept that.