Ian_Maxwell

Has anyone else noticed that in this particular 'compromise', the superhappies don't seem to be actually sacrificing anything?

I mean, their highest values are being ultra super happy and having sex all the time, and they still get to do that. It's not as if they wanted not to create literature or eat hundreds of pseudochildren. Whereas humans will no longer get to feel frustrated or exhausted, and babyeaters will no longer get to eat real children.

I don't think the superhappies are quite as fair-minded as Akon thought. They agreed to take on traits of humanity and babyeating in an attempt to placate everyone, not because it was a fair trade.

@Mike Plotz: It's true that you can't do better than random in predicting (theoretical nonphysical) coin tosses, but you also can't do worse than random. As Eliezer pointed out, the claim isn't "it is always possible to do better than random", but "any algorithm which can be improved by adding randomness can be improved even more without adding randomness."

@Ken: I am interested in your claim. As you can understand, your personal testimony alone is not really enough to convince anyone, but I will assume that you are posting in good faith and are serious about proving (or disproving) your psychic abilities to your own satisfaction.

You may wish to attempt the following modification of the rock-paper-scissors experiment: Your wife (or another party) will roll a six-sided die. On 1-2, she will throw rock; on 3-4, paper; on 5-6, scissors. In this way, her throw will be entirely random (and so not predictable through ordinary mental reasoning), and yet she will know in advance what she plans to throw (and so it will be predictable given sufficient access to her inner mental state). If over a large number of trials you are able to guess her throws more often than the one-in-three rate expected by chance, you are probably onto something.
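As a rough sketch of how to score such an experiment (not part of the original exchange; the 45-of-100 figure below is a made-up example), a one-sided binomial test against the chance rate of 1/3 would do:

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=1/3):
    """One-sided p-value: probability of guessing at least `successes`
    throws correctly in `trials` attempts if the guesser is performing
    at chance (1/3 for rock-paper-scissors)."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical result: 45 correct guesses out of 100 trials.
p = binomial_p_value(45, 100)
print(f"p-value: {p:.4f}")
```

A small p-value here means the result would be surprising under the chance hypothesis; it still wouldn't distinguish psychic ability from, say, an imperfectly shuffled die or a tell in how she rolls.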

Eliezer, to steal one of your phrases: You know, you're right.

That said, I was already quite willing to call Watson mistaken. He was mistaken about other things---in particular, he latched onto classical conditioning and treated it as the One Simple Principle That Can Explain All Behavior---so it's not terrifically surprising. One gets the impression that he was primarily interested in making a name for himself.

Amusingly, Skinner gets most of the flak for the sort of ridiculosity that Watson espoused, even though he explicitly stated in his monographs that internal mental life exists (in particular, he stated that it is a type of behavior, not an explanation for behavior).

I agree that this post's introduction to behaviorism is no more than a common mischaracterization. It is the sort of mischaracterization that has spread farther than the original idea, to the point that psychology textbooks (which are more often than not terribly inaccurate) repeat the error and psychology graduates write Wikipedia articles saying that "Behaviorists believe consciousness does not exist".

Behaviorism is a methodology, not a hypothesis. It is the methodology that attempts to explain behavior without recourse to internal mental states. The basis for this approach is that internal mental states can only be inferred from behavior in the first place, so that they offer no additional predictive power. That said, it may turn out that a certain class of behaviors tend to lump together, and there would be no problem in labelling these "angry behaviors" or "vengeful behaviors" and describing an organism as "angry" when it exhibits angry behaviors. A behaviorist will not hypothesize that there is an internal angry feeling corresponding to this angry state. He will not hypothesize that there is not an internal angry feeling corresponding to this angry state. He will not hypothesize about internal feelings at all, because he has no way of testing his hypothesis if he does.

It may be that modern neuroscience makes certain "internal explanations" testable after all. This does not make behaviorism a bad methodology! It works quite well if you don't happen to have an MRI scanner on hand. It works a lot better than ascribing a subject's lashing out to "rage" and, when asking how you know he's enraged, saying, "Because he's lashing out."

I had considered that particular solution to the plot hole. In fact, however, most violations of thermodynamics and other physical laws seem to occur within the Matrix, not outside. That is, the rules of the Matrix do not add up to normality.

There actually is a cover in the movie, though: the human energy source is "combined with a source of fusion". This is, as one review stated, like elaborately explaining how a 747 is powered by rubber-bands and then mentioning that this is combined with four jet engines.

If I understand this model correctly, it has the consequence that from a typical point in the configuration space there are not only many futures (i.e. paths starting at this point, along which entropy is strictly increasing), but many pasts (i.e. paths starting at this point, along which entropy is strictly decreasing). Does this sound correct?

Bog: You are correct. That is, you do not understand this article at all. Pay attention to the first word, "Suppose..."

We are not talking about how calculators are designed in reality. We are discussing how they are designed in a hypothetical world where the mechanism of arithmetic is not well-understood.

This old post led me to an interesting question: will AI find itself in the position of our fictional philosophers of addition? The basic four functions of arithmetic are so fundamental to the operation of the digital computer that an intelligence built on digital circuitry might well have no idea of how it adds numbers together (unless told by a computer scientist, of course).
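As an aside, the "mechanism of addition" in question is quite concrete. Here is a minimal illustration (mine, not from the original post) of binary addition built from boolean gates, the sort of process such an intelligence might run on without ever inspecting it:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built from boolean gates: sum is the XOR of
    the three inputs; a carry propagates when at least two are set."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def ripple_carry_add(x, y, width=8):
    """Add two non-negative integers by chaining full adders,
    least-significant bit first (overflow wraps at `width` bits)."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_carry_add(23, 19))  # 42
```

An agent implemented on such circuitry could use addition constantly while being no more aware of the carry chain than we are of our neurons firing.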

This argument makes no sense to me:

If you've been cryocrastinating, putting off signing up for cryonics "until later", don't think that you've "gotten away with it so far". Many worlds, remember? There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it's too late for them to get life insurance.

This is only happening in the scenarios where I didn't sign up for cryonics. In the ones where I did sign up, I'm safe and cozy in my very cold bed. Those universes don't exist contingent on my behavior in this one; what possible impact could my choice here to sign up for cryonics have on my alternate-universe Doppelgängers?
