@Mike Plotz: It's true that you can't do better than random in predicting (theoretical nonphysical) coin tosses, but you also can't do worse than random. As Eliezer pointed out, the claim isn't "it is always possible to do better than random", but "any algorithm which can be improved by adding randomness, can be improved even more without adding randomness."
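A toy simulation (my own illustration, not from the thread) makes the coin-toss point concrete: against independent fair tosses, every predictor, deterministic or random, scores exactly chance in expectation:

```python
import random

random.seed(0)
N = 100_000
tosses = [random.random() < 0.5 for _ in range(N)]  # fair, independent flips

# Three predictors: a constant guess, a deterministic pattern, and pure chance.
always_heads = sum(tosses)
alternating  = sum(t == (i % 2 == 0) for i, t in enumerate(tosses))
coin_flip    = sum(t == (random.random() < 0.5) for t in tosses)

for name, hits in [("always heads", always_heads),
                   ("alternating", alternating),
                   ("random guess", coin_flip)]:
    print(f"{name:>12}: {hits / N:.3f}")  # every strategy lands near 0.500
```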
@Ken: I am interested in your claim. You will understand that your personal testimony is not really enough to be convincing on its own, but I will assume that you are posting in good faith and are serious about (dis)proving your psychic abilities to your own satisfaction.
You may wish to attempt the following modification on the rock-paper-scissors experiment: Your wife (or another party) will roll a six-sided die. 1-2, she will throw rock; 3-4, she will throw paper; 5-6, she will throw scissors. In this way, her throw will be entirely random (and so not predictable through...
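For concreteness, here is a minimal Python sketch of the proposed protocol. The die mapping is as above; the guesser here is a no-information stand-in for the psychic claim, so its hit rate should track the 1/3 chance baseline, and a real session with |z| well above 2 would be notable:

```python
import random
from math import sqrt

random.seed(1)
THROWS = ["rock", "paper", "scissors"]
N = 300  # rounds in the session

def die_throw():
    # 1-2 -> rock, 3-4 -> paper, 5-6 -> scissors, exactly as proposed above
    return THROWS[(random.randint(1, 6) - 1) // 2]

def guess():
    # Stand-in for the claimed psychic prediction; this one has no
    # information, so it should land near the chance baseline.
    return random.choice(THROWS)

hits = sum(guess() == die_throw() for _ in range(N))
p = 1 / 3
mean, sd = N * p, sqrt(N * p * (1 - p))
print(f"hits: {hits}/{N}; chance predicts {mean:.0f} ± {sd:.1f}")
print(f"z-score: {(hits - mean) / sd:+.2f}")
```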
Eliezer, to steal one of your phrases: You know, you're right.
That said, I was already quite willing to call Watson mistaken. He was mistaken about other things---in particular, he latched onto classical conditioning and treated it as the One Simple Principle That Can Explain All Behavior---so it's not terrifically surprising. One gets the impression that he was primarily interested in making a name for himself.
Amusingly, Skinner gets most of the flak for the sort of ridiculosity that Watson espoused, even though he explicitly stated in his monographs that internal mental life exists (in particular, that it is a type of behavior, not an explanation for behavior).
I agree that this post's introduction to behaviorism is no more than a common mischaracterization. It is the sort of mischaracterization that has spread farther than the original idea, to the point that psychology textbooks (which are more often than not terribly inaccurate) repeat the error and psychology graduates write Wikipedia articles saying that "Behaviorists believe consciousness does not exist".
Behaviorism is a methodology, not a hypothesis. It is the methodology that attempts to explain behavior without recourse to internal mental state...
I had thought of that particular plot hole solution. In fact, however, most violations of thermodynamics and other physical laws seem to occur within the Matrix, not outside. That is, the rules of the Matrix do not add up to normality.
There actually is a cover in the movie, though: the human energy source is "combined with a form of fusion". This is, as one review stated, like elaborately explaining how a 747 is powered by rubber bands and then mentioning that this is combined with four jet engines.
If I understand this model correctly, it has the consequence that from a typical point in the configuration space there are not only many futures (i.e. paths starting at this point, along which entropy is strictly increasing), but many pasts (i.e. paths starting at this point, along which entropy is strictly decreasing). Does this sound correct?
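A toy illustration of what I mean (my own construction: an unbiased random walk stands in for time-symmetric microdynamics, with coarse-grained Shannon entropy): starting from an atypical low-entropy state at t = 0, entropy climbs in both time directions, because a trajectory run "into the past" has the same statistics as one run into the future:

```python
import random
from math import log

random.seed(2)
N_PARTICLES, N_BINS, T = 5000, 20, 200

def coarse_entropy(positions):
    # Shannon entropy of the coarse-grained (binned) occupation numbers.
    counts = [0] * N_BINS
    for x in positions:
        counts[x] += 1
    return -sum(c / N_PARTICLES * log(c / N_PARTICLES) for c in counts if c)

def run(steps):
    # Unbiased walk on a ring of N_BINS sites: statistically time-symmetric.
    pos = [N_BINS // 2] * N_PARTICLES  # atypical low-entropy state at t = 0
    entropies = [coarse_entropy(pos)]
    for _ in range(steps):
        pos = [(x + random.choice((-1, 1))) % N_BINS for x in pos]
        entropies.append(coarse_entropy(pos))
    return entropies

# By time symmetry, one run serves as the future and another as the past.
future, past = run(T), run(T)
for t in (0, 50, 100, 200):
    print(f"t = -{t:>3}: S = {past[t]:.2f}    t = +{t:>3}: S = {future[t]:.2f}")
```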
Bog: You are correct. That is, you do not understand this article at all. Pay attention to the first word, "Suppose..."
We are not talking about how calculators are designed in reality. We are discussing how they are designed in a hypothetical world where the mechanism of arithmetic is not well-understood.
This old post led me to an interesting question: will an AI find itself in the position of our fictional philosophers of addition? The four basic functions of arithmetic are so fundamental to the operation of the digital computer that an intelligence built on digital circuitry might well have no idea of how it adds numbers together (unless told by a computer scientist, of course).
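For contrast, here is a sketch of the actual mechanism: the ripple-carry addition that digital hardware implements with XOR/AND gates. A mind running on such circuitry could execute this flawlessly while having no introspective access to it:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: the gate-level unit an ALU is built from."""
    s = a ^ b ^ carry_in                        # sum bit (two XOR gates)
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry logic (AND/OR gates)
    return s, carry_out

def ripple_add(x, y, width=8):
    """Add two non-negative ints by rippling carries through one-bit adders."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert ripple_add(57, 68) == 125  # the circuit adds without "knowing" arithmetic
```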
This argument makes no sense to me:
If you've been cryocrastinating, putting off signing up for cryonics "until later", don't think that you've "gotten away with it so far". Many worlds, remember? There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it's too late for them to get life insurance.
This is only happening in the scenarios where I didn't sign up for cryonics. In the ones where I did sign up, I'm safe and cozy in my very cold bed. These universes don't exist contingent on my behavior in this one; what possible impact could my choice here to sign up for cryonics have on my alternate-universe Doppelgängers?
It seems to me that there is an important distinction between these scenarios. Of course, it could be that I'm just not enlightened enough to see the total similarity.
In the first scenario, 'you' are at least attempting to explain yourself to the shaman. In fact, you have answered, both literally with "yes" and to the shaman's intent by explaining. That he does not believe your explanation is a separate matter.
In the second scenario, I imagine your literal answer to John would be "no"---because there is no such thing as "same stuff...
This is the first clear explanation of the phenomenon of quantum entanglement that I have ever read (though I gather it's still a simplification since we're assuming the mirrors aren't actually made out of particles like everything else). I have never really understood this phenomenon of "observation", but suddenly it's obvious why it should make a difference. Thank you.
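For anyone who wants to check the arithmetic, here is a sketch of the two-path calculation using the post's amplitude rule (multiply by i on reflection); the 1/√2 beam-splitter factors and the detector labels D1/D2 are my additions:

```python
from math import sqrt

T = 1 / sqrt(2)   # transmission at a half-silvered mirror
R = 1j / sqrt(2)  # reflection at a half-silvered mirror (multiply by i)
M = 1j            # full mirror: reflection only

# Photon enters with amplitude 1 and can reach either detector by two paths.
d1 = T * M * R + R * M * T  # transmit-then-reflect + reflect-then-transmit
d2 = T * M * T + R * M * R  # transmit-then-transmit + reflect-then-reflect

print(abs(d1) ** 2, abs(d2) ** 2)  # 1.0 and 0.0: D2 stays dark (interference)

# If anything records which path was taken, the two paths end in *distinct*
# configurations, so their amplitudes can no longer cancel; we add the
# probabilities of the two paths instead of their amplitudes:
p_d2 = abs(T * M * T) ** 2 + abs(R * M * R) ** 2
print(p_d2)  # 0.5: the formerly dark detector starts firing
```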
I agree with some others that Eliezer is here arguing against a fairly naïve form of anti-reductionism, and indeed is explaining rather than refuting it. However, I assume, Eliezer, that the point of your entry is (in keeping with the theme of the blog) to illustrate a certain sort of bias through its effects, rather than to prove to everyone that reductionism is really truly true. So explanation over refutation is entirely appropriate here.
If harm aggregates less-than-linearly in general, then the difference between the harm caused by 6271 murders and that caused by 6270 is less than the difference between the harm caused by one murder and that caused by zero. That is, it is worse to put a dust mote in someone's eye if no one else has one, than it is if lots of other people have one.
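To make the arithmetic concrete (sqrt is my stand-in for "some concave aggregator", not anything from the thread):

```python
from math import sqrt

harm = sqrt  # any strictly concave aggregator makes the same point

print(harm(6271) - harm(6270))  # ≈ 0.0063: marginal harm of murder #6271
print(harm(1) - harm(0))        # = 1.0:    marginal harm of the very first
```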
If relative utility is as nonlocal as that, it's entirely incalculable anyway. No one has any idea of how many beings are in the universe. It may be that murdering a few thousand people barely registers as harm, ...
"Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B.
I don't see the relevance of Mr. Burrows' statement (correct, of course) that "Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts."
This is certainly of concern if our goal is to maximize the virtue of rich people. If it is to maximize general welfare, it is of no concern at all. The recipients of charity don't need a percentage's worth of food, but a certain absolute amount.
Has anyone else noticed that in this particular 'compromise', the superhappies don't seem to be actually sacrificing anything?
I mean, their highest values are being ultra super happy and having sex all the time, and they still get to do that. It's not as if they wanted not to create literature or eat hundreds of pseudochildren. Whereas humans will no longer get to feel frustrated or exhausted, and babyeaters will no longer get to eat real children.
I don't think the superhappies are quite as fair-minded as Akon thought. They agreed to take on traits of humanity and babyeating in an attempt to placate everyone, not because it was a fair trade.