Comments

When I design a toaster oven, I don't design one part that tries to get electricity to the coils and a second part that tries to prevent electricity from getting to the coils. It would be a waste of effort. Who designed the ecosystem, with its predators and prey, viruses and bacteria? Even the cactus plant, which you might think well-designed to provide water fruit to desert animals, is covered with inconvenient spines.

I understand your point and your examples, but it is wrong to infer that conflicting subsystems are evidence of poor design, or of no design at all. For instance, in CMOS logic design we use PMOS transistors to pull the output voltage up and NMOS transistors to pull it down. More generally, when we want to design something able to move through a certain state space, we often build subsystems that work against each other and let the boundary conditions decide where the balance settles (in the CMOS example, the boundary conditions are the inputs of the logic gate). We human designers do this a lot, actually.
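To make the CMOS analogy concrete, here is a minimal sketch (my own illustration, not part of the original comment) of an ideal inverter modeled as two deliberately opposed networks; the names and the idealized logic levels are simplifying assumptions:

```python
# Sketch of a CMOS inverter as two opposing subsystems (idealized model).
# The PMOS network tries to pull the output up to VDD; the NMOS network
# tries to pull it down to ground. The input (the "boundary condition")
# decides which network conducts, so the conflict is resolved in steady state.

def cmos_inverter(v_in: int) -> int:
    """Return the output of an ideal CMOS inverter for a logic-level input."""
    pmos_conducts = (v_in == 0)   # PMOS turns on for a low gate voltage
    nmos_conducts = (v_in == 1)   # NMOS turns on for a high gate voltage
    if pmos_conducts:
        return 1                  # output pulled up to VDD
    if nmos_conducts:
        return 0                  # output pulled down to ground
    raise ValueError("input must be 0 or 1")

for v in (0, 1):
    print(v, "->", cmos_inverter(v))
```

The point of the pairing is exactly the one in the comment: neither network alone can produce both output states, and the complementary structure is what makes the gate work, not a sign of wasted effort.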

I agree with all your other points, though.

They have to worry about the edge because they get their base vote no matter what

That's true... but it is still a fact that

they would rather appeal to their base than cater to "the enemy".

I think it is because in this way they energize their base voters, who are then more willing to do some work for them: proselytizing, sharing on Facebook, talking only about the good things of their party and the bad things of the opposing one, and so on. In this way they can easily capture the edge voters who see the distinction between voters and politicians, because the proselytizing comes from other voters "just like me" and not from politicians.

Read up on Feyerabend

Ahem, was Feyerabend a scientist?

The fear of losing a moral compass is itself a moral compass

Doesn't this sound like a belief in belief?

I don't want God to be my moral compass, because I don't believe in him, and I don't want my good behaviour (and others' behaviour, too) to be built on sand. But I don't like this foundation of morality either: it sounds absolute, which makes it incomparable with others'. Also, what about sociopaths who don't have this moral command hard-wired into their brains? Should they be allowed to kill?

I prefer to give value to human life *just because I acknowledge that it has great potential*, and then maximize the utility function. This kind of argument for morality is the safest (at least among intelligent people: intelligent sociopaths would understand it; dumb ones would not).

Could you please make an EPUB version, as you did for your Harry Potter fanfiction? With PDFs you can't change the font size, so it's a big pain to read on an ebook reader. Thanks!

No, I don't: actually, we probably agree about that. With that sentence I was just trying to underline the "being understood" requirement for an effective theory, which was meant to introduce my following objection: the order in which you teach or learn two facts is not irrelevant. The human brain has memory, so a Markovian model for the effectiveness of theories is too simple.

I disagree on five points. The first is about your conclusion; the second leads to the third, and the third explains the fourth. The fifth is the most interesting.

1) In contrast with the title, you did not show that the MWI is falsifiable or testable. I know the title mentions decoherence (which is falsifiable and testable), but decoherence is very different from the MWI, and for the rest of the article you talked about the MWI while calling it decoherence. You just showed that the MWI is "better" according to your "goodness" index, but that index is not so good. Also, the MWI is not at all a consequence of the superposition principle: it is rather an ad-hoc hypothesis made to "explain" why we don't experience macroscopic superposition, even though we would expect it because macroscopic objects are made of microscopic ones. But, as I will mention in the last point, the superposition of macroscopic objects is not an inevitable consequence of the superposition principle applied to microscopic objects.

2) You say that postulating a new object is better than postulating a new law: so why do we teach Galilean relativity by postulating its transformations, when they could be derived as the special case of the Lorentz transformations at low speeds? The answer is that they are just models, which have to be simple enough for us to understand. To understand relativity well you first have to understand non-relativistic mechanics, and you can only do that by observing and measuring slow objects and then making the simplest theory that describes them (i.e., postulating the shortest mathematical rules experimentally compatible with the "slow" observations: Galileo's); THEN you can proceed to something more difficult and more accurate, postulating new rules to get a refined theory. You calculate the probability of a theory and use this as an index of its "truth", but that confuses reality with the model of it. You can't measure how "true" a theory is; maybe there is no Ultimate True Theory. You can only measure how effective and clean a theory is at describing reality and being understood. So, to index how good a theory is, you should instead calculate the probability that a person understands it and uses it to correctly make anticipations about reality: that means P(Galileo) >> P(first Lorentz, then show Galileo as a special case), and also P(first Galileo, then Lorentz) != P(first Lorentz, then Galileo), because you can't expect people to be perfect rationalists: they can only be as rational as possible. The model is just an approximation of reality, so you can't force real people into the "perfectly rational person" model; you have to take into account that nobody's perfect.

3) Because nobody's perfect, you must take the required RAM into account too. You said in the previous post that "Occam's Razor was raised as an objection to the suggestion that nebulae were actually distant galaxies—it seemed to vastly multiply the number of entities in the universe", in order to justify the claim that the RAM account is irrelevant. But that argument is not valid: we rejected the hypothesis that those nebulae are distant galaxies not because Occam's Razor is irrelevant, but because we measured their distances and found that they are inside our galaxy; without this information, the simpler hypothesis would be that they are distant galaxies. Occam's Razor IS relevant not only to laws but to objects too. Yes, given a limited amount of information, it may shift you toward a "simpler yet wrong" model, but it doesn't annihilate the probability of the "right" model: with new information you would find out that you were previously wrong. And how often does Occam's Razor induce you to neglect a good model, compared to how often it lets us neglect bad models? Also, Occam's Razor may mislead you not only when applied to objects but also when applied to laws, so your argument discriminating between the two applications of Occam's Razor doesn't stand.

4) The collapse of the wave function is a way to represent a fact: if a microscopic system S is in an eigenstate of some observable A and you measure on S an observable B that does not commute with A, your apparatus doesn't end up in a superposition of states but gives you a unique result, and the system S ends up in the eigenstate of B corresponding to the result the apparatus gave you. That's the fact. Since the classical behavior of macroscopic objects and the stochastic, irreversible collapse seem to contradict the linearity, predictability, and reversibility of the Schrödinger equation ruling microscopic systems, there appears to be an uncomfortable demarcation line between microscopic and macroscopic physics. So attempts have been made either to find this demarcation line, or to show a mechanism for the emergence of classical behavior from quantum mechanics, or to solve or at least formalize the problem somehow. The Copenhagen interpretation (CI) just says: "there are classically behaving macroscopic objects and quantum-behaving microscopic ones; the interaction of a microscopic object with a macroscopic apparatus causes the stochastic and irreversible collapse of the wave function, with probabilities given by the Born rule; now shut up and do the math". It is a rather unsatisfactory answer, primarily because it doesn't explain what gives rise to this demarcation line or where it should be drawn; but it is indeed useful for representing effectively the results of typical educational experiments, where the difference between "big" and "small" is in no way ambiguous, and it lets you get familiar with the bra-ket math quickly. The Many Worlds Interpretation (MWI) just says: "there is indeed superposition of states at the macroscopic scale too, but it is not seen because the other parts of the wave function stay in parallel, invisible universes."
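As a concrete illustration of the measurement fact described above (my own sketch, not from the article under discussion), take A = sigma_z and B = sigma_x, which do not commute, and apply the textbook Born rule and projection postulate:

```python
import numpy as np

# Standard Pauli matrices for a spin-1/2 system.
sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)

# System prepared in the +1 eigenstate of sigma_z.
psi = np.array([1.0, 0.0])

# Eigen-decomposition of the measured observable sigma_x.
vals, vecs = np.linalg.eigh(sx)

# Born rule: the probability of outcome b_i is |<b_i|psi>|^2.
probs = np.abs(vecs.T.conj() @ psi) ** 2
print(dict(zip(vals, probs)))   # 50/50 for the outcomes -1 and +1

# Projection postulate: after obtaining outcome b_i, the system is left
# in the corresponding eigenstate of sigma_x (here, the +1 outcome).
post = vecs[:, np.argmax(vals)]
print(post)                     # proportional to (1, 1)/sqrt(2), up to sign
```

The apparatus reports exactly one of the two outcomes, each with probability 1/2, and the system ends up in the corresponding eigenstate of sigma_x; the collapse postulate is precisely this rule, stated rather than derived.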
Now imagine Einstein had not developed General Relativity, but we had nonetheless developed the tools to measure the precession of Mercury and had to face the inconsistency with the predictions of Newton's laws. The analogue of the CI would be: "the orbit of Mercury is not the one anticipated by Newton's laws but this other one; now if you want to calculate the transits of Mercury as seen from the Earth for the next million years, you do THIS math and shut up". The analogue of the MWI would be something like: "we expect the orbit of Mercury to precess at rate X but we observe rate Y; well, there is another parallel universe in which the precession rate of Mercury is Z, such that the average of Y and Z is the expected X due to our beautiful, indefeasible Newton's laws". Both are unsatisfactory curiosity stoppers, but the first avoids introducing new objects. The MWI, instead, while explaining exactly the same experimental results, introduces not only other universes: it also introduces the very concept that there are other universes, which proliferate every time an electron coughs. And it does so just for the sake of the human pursuit of beauty and loyalty to a (yes, beautiful, but that's not the point) theory.

5) You talk of the MWI and of decoherence as if they were the same thing, but they are quite different. Decoherence is about the loss of coherence that a microscopic system (an electron, for instance) experiences when interacting with a macroscopic, chaotic environment. As this sounds rather relevant to the demarcation line and to the interaction between microscopic and macroscopic, it has been suggested that these may be related phenomena: maybe the classical behavior of macroscopic objects and the collapse of the wave function of a microscopic object interacting with a macroscopic apparatus are emergent phenomena, arising from the microscopic quantum ones through some interaction mechanism. Of course this is not an answer to the problem: it is just a road to be walked in order to find a mechanism, and we still have to find it. As you say, "emergence" without an underlying mechanism is like "magic". Anyway, decoherence has nothing to do with the MWI, though both try (or pretend) to "explain" the (apparent?) collapse of the wave function. In the last decades decoherence has been probed, and the results look promising. Though I'm not an expert in the field, I took a course about it last year and gave a seminar, as the exam for the course, describing the results of an article I read (http://arxiv.org/abs/1107.2138v1). They presented a toy model of a Curie-Weiss apparatus (a magnet in a thermal bath), prepared in an initial isotropic metastable state, measuring the z-axis spin component of a spin-1/2 particle through induced symmetry breaking. Though I wasn't totally persuaded by the Hamiltonian they wrote, and I'm sure there are better toy models, the general ideas behind it were quite convincing.
In particular, they computationally showed HOW the stochastic, indeterministic collapse can emerge from just: a) the Schrödinger equation; b) statistical effects due to the "large size" of the apparatus (a magnet composed of a large number N of elementary magnets, coupled to a thermal bath); c) an appropriate initial state of the apparatus. They postulated neither new laws nor new objects: they just made a model of a measurement apparatus within the framework of quantum mechanics (without postulating the collapse) and showed how the collapse naturally arose from it. I think that's a pretty impressive result, worthy of further research more than the MWI is. It explains the collapse without postulating it, and without postulating unseen worlds.
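The qualitative flavor of decoherence (though not the Curie-Weiss model of the article, which is far richer) can be sketched with the standard spin-environment toy model: a qubit coupled to N environment spins via a sigma_z-type interaction has the off-diagonal element of its reduced density matrix suppressed by a product of cosine factors, one per environment spin, which shrinks rapidly as N grows. This is my own illustrative sketch, with assumed random couplings g_k, not the paper's Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence(n_env: int, t: float) -> float:
    """Magnitude of the qubit's off-diagonal density-matrix element at time t,
    for a qubit coupled to n_env environment spins with random couplings g_k.
    For an interaction H = sigma_z (x) sum_k g_k sigma_z^(k) and a product
    initial state, the coherence factor is prod_k cos(2 g_k t)."""
    g = rng.uniform(0.0, 1.0, n_env)          # assumed random coupling strengths
    return float(np.abs(np.prod(np.cos(2 * g * t))))

# The superposition is not destroyed by any new law: the off-diagonal term
# just becomes unobservably small as the environment grows.
for n in (1, 10, 100, 1000):
    print(n, coherence(n, t=1.0))
```

At t = 0 the coherence is exactly 1; for a large environment it is suppressed essentially to zero, which is the sense in which the apparent collapse can emerge from the Schrödinger equation plus size effects, with nothing extra postulated.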